Chaos Engineering — How to Break AWS Infrastructure on Purpose

> 1. What is Chaos Engineering and why is it important?

Chaos Engineering is the discipline of testing a system’s robustness, reliability, and ability to survive a disaster without manual intervention.

It is a process where we deliberately, but productively, disrupt our Infrastructure and test how quickly and efficiently our Applications and Infra heal themselves, and whether they can keep running through a disaster or any other system catastrophe.

Sounds interesting, huh?

Well, it is very interesting: we experiment on, play with, and disrupt our Infra, keenly observe how it reacts, and learn and improve from it. This makes our Infra more robust and stable, and gives us more confidence in our production stacks (which, I think, is very important).

We get to know the weaknesses and leaks in our system ahead of time, which helps us overcome these issues in our Test Environment before they hit production.

There are many Chaos experiments we can perform on our system, such as deleting a random EC2 Instance or deleting Services, which we shall explore in the last section.

> 2. Prerequisites — Set up your AWS Account and CLI on your Terminal
Let’s get our hands dirty and set up the Infra we are about to disrupt.

Prerequisites:

  1. Get the Access Key ID and Secret Access Key from AWS Account
  2. Install AWS CLI on your local machine
  3. Configure AWS credentials for the AWS Account on your machine
  4. Setup Infra — Create an Auto Scaling Group and attach 3 EC2 Instances to it as desired and Min Capacity (Assume Tasks/Services are running inside it).
  5. Validate the AWS CLI by checking the number of Instances against the newly created ASG

Get the Access Key ID and Secret Access Key from AWS Account

Go to https://aws.amazon.com/console/ and log in to the AWS Console. Navigate to IAM → Dashboard → Manage Security Credentials → Access Keys tab and copy your Access Key ID and Secret Access Key.

Go ahead and create one if you don’t have one.

[Image: AWS Access Keys (masked for security)]

Install AWS CLI on your local machine

After jotting down the keys, let’s install AWS CLI v2 on your system. If you already have the CLI installed and configured, proceed straight to Step 4, where we create the AWS Infra.

Install the AWS CLI by following the commands in the AWS documentation: “Installing the AWS CLI version 2 on macOS” (docs.aws.amazon.com), which describes how to install, update, and remove AWS CLI version 2 on macOS.
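For reference, at the time of writing the documented macOS install boils down to downloading the official installer package and running it (verify against the docs above in case it has changed):

    # Download and install AWS CLI v2 on macOS, per the AWS docs
    curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
    sudo installer -pkg AWSCLIV2.pkg -target /

    # Confirm the install
    aws --version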

After installing the AWS CLI, open your Mac Terminal and type aws; it should print usage help similar to the image below. This confirms that the AWS CLI has been installed successfully.

[Image: AWS CLI validation]

Configure AWS credentials for the AWS Account on your machine

Now it is time to map your AWS Credentials on your local machine. We need to configure the Access Key ID and Secret Access Key so that your machine can connect to your AWS Account and create (and disrupt) the Infra using the AWS CLI.

aws configure should do the trick: it will prompt for the Credentials, the region, and the output format, as shown below.

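A typical session looks roughly like this; the key values are masked placeholders, and the region and output format are simply the choices used in this walkthrough:

    $ aws configure
    AWS Access Key ID [None]: AKIA****************
    AWS Secret Access Key [None]: ****************************************
    Default region name [None]: us-east-1
    Default output format [None]: json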

We can validate this by inspecting ~/.aws/credentials.
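The file should contain the default profile we just created, in the standard INI layout (keys masked here):

    $ cat ~/.aws/credentials
    [default]
    aws_access_key_id = AKIA****************
    aws_secret_access_key = ****************************************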

This file holds the Credentials we just entered in the terminal. With this step finished, we now have access to the AWS Account from our machine through the AWS CLI. Eureka…!!!

Setup Infra — Create an Auto Scaling Group and attach 3 EC2 Instances to it as desired and Min Capacity (Assume Tasks/Services are running inside it).

We will be using the AWS CLI to run the Chaos Experiment and disrupt the Instances. For now, we shall create the Auto Scaling Group and attach 3 EC2 Instances using the AWS Console.

Head to the AWS Console, search for EC2, open the “Auto Scaling Groups” tab, and create a new Auto Scaling Group.

a. Select an appropriate Instance type (preferably a t2.micro, which is free-tier eligible).

b. Create a new Launch Configuration and associate an IAM role if you have one.

c. Create the ASG with a minimum of 3 EC2 Instances and a maximum of 6, and place it in the required VPC and Subnets. Defaults are sufficient for this sample Experiment. (A CLI equivalent is sketched below.)
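For reference, the same setup can be scripted with the AWS CLI. This is a minimal sketch: the names chaos-demo-lc and chaos-demo-asg, the AMI ID, and the subnet IDs are placeholders you would replace with your own:

    # Hypothetical CLI equivalent of the console steps above
    aws autoscaling create-launch-configuration \
        --launch-configuration-name chaos-demo-lc \
        --image-id ami-0abcdef1234567890 \
        --instance-type t2.micro

    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name chaos-demo-asg \
        --launch-configuration-name chaos-demo-lc \
        --min-size 3 --max-size 6 --desired-capacity 3 \
        --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"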


Validate AWS CLI by checking the number of Instances against the newly created ASG.

The new ASG gets created, 3 new EC2 Instances are launched automatically, and the group reaches a steady state. We have established the Infra. For this Experiment, we can assume that this is how our backend Infrastructure is set up, and now we shall start disrupting. We will discuss more disruption techniques in the last section.
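One way to validate from the terminal, assuming the placeholder ASG name chaos-demo-asg from the sketch above:

    # List the instances attached to the new ASG and their lifecycle state
    aws autoscaling describe-auto-scaling-groups \
        --auto-scaling-group-names chaos-demo-asg \
        --query "AutoScalingGroups[0].Instances[].[InstanceId,LifecycleState]" \
        --output table

You should see 3 Instances in the InService state once the group settles.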


#chaos-testing #chaos-monkey #disruption #aws #chaos-engineering


The Principles of Chaos Engineering

Resilience is something those who use Kubernetes to run apps and microservices in containers aim for. When a system is resilient, it can handle losing a portion of its microservices and components without the entire system becoming inaccessible.

Resilience is achieved by integrating loosely coupled microservices. When a system is resilient, microservices can be updated or taken down without having to bring the entire system down. Scaling becomes easier too, since you don’t have to scale the whole cloud environment at once.

That said, resilience is not without its challenges. Building microservices that are independent yet work well together is not easy.

What Is Chaos Engineering?

Chaos Engineering has been around for almost a decade now, but it is still a relevant and useful concept to incorporate into improving your whole systems architecture. In essence, Chaos Engineering is the process of deliberately triggering and injecting faults into a system. Instead of waiting for errors to occur, engineers take deliberate steps to cause (or simulate) errors in a controlled environment.

Chaos Engineering allows for better, more advanced resilience testing. Developers can now experiment in cloud-native distributed systems. Experiments involve testing both the physical infrastructure and the cloud ecosystem.

Chaos Engineering is not a new approach. In fact, companies like Netflix have been doing resilience testing for years through Chaos Monkey, an in-house Chaos Engineering framework designed to improve the strength of their cloud infrastructure.

When dealing with a large-scale distributed system, Chaos Engineering provides an empirical way of building confidence by anticipating faults instead of reacting to them. The chaotic condition is triggered intentionally for this purpose.

There are a lot of analogies depicting how Chaos Engineering works, but the traffic light analogy represents the concept best. Conventional testing is similar to testing traffic lights individually to make sure that they work.

Chaos Engineering, on the other hand, means closing out a busy array of intersections to see how traffic reacts to the chaos of losing traffic lights. Since the test is run deliberately, more insights can be collected from the process.
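To make this concrete with the demo ASG from the article above, a minimal chaos experiment might terminate one instance at random and watch whether the group heals back to its desired capacity. This is an illustrative sketch only: chaos-demo-asg is the placeholder name used earlier, and shuf comes from GNU coreutils:

    # Pick a random instance from the ASG (placeholder name)
    INSTANCE_ID=$(aws autoscaling describe-auto-scaling-groups \
        --auto-scaling-group-names chaos-demo-asg \
        --query "AutoScalingGroups[0].Instances[].InstanceId" \
        --output text | tr '\t' '\n' | shuf -n 1)

    # Terminate it on purpose
    aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"

    # Re-run the describe command afterwards: the ASG should launch a
    # replacement instance to restore the desired capacity of 3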

#devops #chaos engineering #chaos monkey #chaos #chaos testing


AWS Cost Allocation Tags and Cost Reduction

Bob had just arrived in the office for his first day of work as the newly hired chief technical officer when he was called into a conference room by the president, Martha, who immediately introduced him to the head of accounting, Amanda. They exchanged pleasantries, and then Martha got right down to business:

“Bob, we have several teams here developing software applications on Amazon and our bill is very high. We think it’s unnecessarily high, and we’d like you to look into it and bring it under control.”

Martha placed a screenshot of the Amazon Web Services (AWS) billing report on the table and pointed to it.

“This is a problem for us: We don’t know what we’re spending this money on, and we need to see more detail.”

Amanda chimed in, “Bob, look, we have financial dimensions that we use for reporting purposes, and I can provide you with some guidance regarding some information we’d really like to see such that the reports that are ultimately produced mirror these dimensions — if you can do this, it would really help us internally.”

“Bob, we can’t stress how important this is right now. These projects are becoming very expensive for our business,” Martha reiterated.

“How many projects do we have?” Bob inquired.

“We have four projects in total: two in the aviation division and two in the energy division. If it matters, the aviation division has 75 developers and the energy division has 25 developers,” the CEO responded.

Bob understood the problem and responded, “I’ll see what I can do and have some ideas. I might not be able to give you retrospective insight, but going forward, we should be able to get a better idea of what’s going on and start to bring the cost down.”

The meeting ended with Bob heading to find his desk. Cost allocation tags should help us, he thought to himself as he looked for someone who might know where his office was.

#aws #aws cloud #node js #cost optimization #aws cli #well architected framework #aws cost report #cost control #aws cost #aws tags



How To Unite AWS KMS with Serverless Application Model (SAM)

The Basics

AWS KMS is a Key Management Service that lets you create cryptographic keys that you can use to encrypt and decrypt data, and also other keys. You can read more about it in the AWS documentation.

Important points about Keys

Please note that the customer master keys (CMKs) you generate can only be used to encrypt small amounts of data, like passwords or RSA keys. You can use AWS KMS CMKs to generate, encrypt, and decrypt data keys. However, AWS KMS does not store, manage, or track your data keys, or perform cryptographic operations with them.

You must use and manage data keys outside of AWS KMS. The KMS API uses AWS KMS CMKs for its encryption operations, and they cannot accept more than 4 KB (4,096 bytes) of data. To encrypt application data, use the server-side encryption features of an AWS service, or a client-side encryption library such as the AWS Encryption SDK or the Amazon S3 encryption client.
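For instance, encrypting a small secret directly with a CMK from the CLI stays well under that limit; alias/my-cmk and password.txt below are placeholders:

    # Encrypt a small file (must be under 4 KB) with a CMK
    aws kms encrypt \
        --key-id alias/my-cmk \
        --plaintext fileb://password.txt \
        --query CiphertextBlob \
        --output text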

Scenario

We want to create signup and login forms for a website.

Passwords should be encrypted and stored in DynamoDB database.

What do we need?

  1. KMS key to encrypt and decrypt data
  2. DynamoDB table to store password.
  3. Lambda functions & APIs to process Login and Sign up forms.
  4. Sign up/ Login forms in HTML.

Let’s implement it as a Serverless Application Model (SAM) template!

Let’s first create the key that we will use to encrypt and decrypt passwords.

KmsKey:
    Type: AWS::KMS::Key
    Properties: 
      Description: CMK for encrypting and decrypting
      KeyPolicy:
        Version: '2012-10-17'
        Id: key-default-1
        Statement:
        - Sid: Enable IAM User Permissions
          Effect: Allow
          Principal:
            AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
          Action: kms:*
          Resource: '*'
        - Sid: Allow administration of the key
          Effect: Allow
          Principal:
            AWS: !Sub arn:aws:iam::${AWS::AccountId}:user/${KeyAdmin}
          Action:
          - kms:Create*
          - kms:Describe*
          - kms:Enable*
          - kms:List*
          - kms:Put*
          - kms:Update*
          - kms:Revoke*
          - kms:Disable*
          - kms:Get*
          - kms:Delete*
          - kms:ScheduleKeyDeletion
          - kms:CancelKeyDeletion
          Resource: '*'
        - Sid: Allow use of the key
          Effect: Allow
          Principal:
            AWS: !Sub arn:aws:iam::${AWS::AccountId}:user/${KeyUser}
          Action:
          - kms:DescribeKey
          - kms:Encrypt
          - kms:Decrypt
          - kms:ReEncrypt*
          - kms:GenerateDataKey
          - kms:GenerateDataKeyWithoutPlaintext
          Resource: '*'

The important thing in the above snippet is the KeyPolicy. KMS requires a Key Administrator and a Key User. As a best practice, your Key Administrator and Key User should be two separate users in your organisation. We are also allowing all permissions to the root user.

So if your Key Administrator leaves the organisation, the root user will still be able to delete this key. As you can see, **KeyAdmin** can manage the key but not use it, and **KeyUser** can only use the key. ${KeyAdmin} and ${KeyUser} are parameters in the SAM template.

You will be asked to provide values for these parameters during sam deploy.
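For example, a deploy that supplies both parameters might look like the following; the user names alice and bob are placeholders:

    # Hypothetical deploy supplying the key administrator and key user
    sam deploy --guided \
        --parameter-overrides ParameterKey=KeyAdmin,ParameterValue=alice ParameterKey=KeyUser,ParameterValue=bob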

#aws #serverless #aws-sam #aws-key-management-service #aws-certification #aws-api-gateway #tutorial-for-beginners #aws-blogs