1598013180
If you create an EIP inside your Terraform modules, it will be destroyed when you destroy those modules.
To make the EIP persistent, I usually create a dedicated AWS EIP Terraform module with S3 remote state support and read its state from other modules.
Here is a basic example.
PS: I will publish a reusable Terraform module when I have more time.
I split the configuration into multiple files to make it easier to maintain.
This file contains the Terraform backend configuration. If you don’t have one yet, you can create it as follows:
$ cat config.tf
terraform {
  backend "s3" {
    bucket     = "my-tf-remote-state"
    key        = "eip/terraform.tfstate"
    region     = "us-east-2"
    encrypt    = true
    kms_key_id = "6e0b950f-1bce-49cd-xyz"
  }
}
This file creates an Elastic IP resource named elasticsearch.
You can choose any name; I used this EIP for my Elasticsearch instance.
provider "aws" {
region = var.region
}
resource "aws_eip" "elasticsearch" {
vpc = true
tags = {
Name = "${var.namespace}-${var.stage}-${var.name}-elasticsearch-eip"
}
}
variable "region" {
}
variable "namespace" {
type = string
description = "Namespace, which could be your organization name, e.g. 'eg' or 'cp'"
}
variable "stage" {
type = string
description = "Stage, e.g. 'prod', 'staging', 'dev' or 'testing'"
}
variable "name" {
type = string
description = "Solution name, e.g. 'app' or 'cluster'"
}
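The original files stop after creating the EIP. To actually read it from another module, one approach (not shown in the post; the output names, the aws_eip_association, and the aws_instance.elasticsearch reference are illustrative assumptions) is to export the EIP attributes as outputs and pull them back in with a terraform_remote_state data source:

# Outputs in the EIP module, so its attributes are readable through the S3 remote state
output "eip_allocation_id" {
  value = aws_eip.elasticsearch.id
}

output "eip_public_ip" {
  value = aws_eip.elasticsearch.public_ip
}

# In the consuming configuration (e.g. the Elasticsearch instance), read the
# persistent EIP from the remote state written above and associate it with
# the instance instead of creating a new EIP there.
data "terraform_remote_state" "eip" {
  backend = "s3"

  config = {
    bucket = "my-tf-remote-state"
    key    = "eip/terraform.tfstate"
    region = "us-east-2"
  }
}

resource "aws_eip_association" "elasticsearch" {
  instance_id   = aws_instance.elasticsearch.id # hypothetical instance resource
  allocation_id = data.terraform_remote_state.eip.outputs.eip_allocation_id
}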
#remote-state #aws #eip #terraform
1619263860
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
#terraform-aws #terraform #aws #aws-ec2
1620959460
We’re continuing our series on Terraform AWS with a post that breaks down the basics. The world of Terraform AWS can be described as complex — from AWS storage to AWS best practices, there’s a depth of knowledge necessary to get familiar with Terraform AWS.
Whether you’re an expert at Terraform AWS or just getting started, it’s our goal at InfraCode to provide you with clear and easy-to-understand information at every level. The number of resources out there is abundant but overwhelming. That’s why we create simplified guides that are immediately usable and always understandable.
In this article, we’ll dive into:
#aws-ec2 #aws #terraform #terraform-aws
1597945860
A few months ago, I was working on a Terraform module to manage all the roles and their permissions in our AWS accounts. On the surface this seems like a straightforward project, but there was a curveball that required some research, trial and error, and finesse to address.
The teams/permissions were not consistent across the AWS accounts. Team A might have read/write access to S3 in account A, but only read access to S3 in account B. Team A does not even exist in account C. Multiply this conundrum by 10+ teams across 10+ accounts.
In thinking about how best to tackle this issue, a couple of bad ways to solve it immediately came to mind:
The first approach is horrible. It would have been tedious, hard to maintain, and the amount of repeated code would have been astronomical, but it would have worked.
The second seems reasonable on the surface, but it is not. First, your code would be dictating business logic. Second, the principle of least privilege means you should only grant enough access to perform the required job. Third, there are AWS accounts that certain teams should not have access to at all (e.g. secops, networking, and IT accounts). Finally, the business would never agree to it.
The right approach needed to be something that could account for all the variability across the accounts. Additionally, the end result needed to be clean, easy to maintain and update, and easy to use without requiring a deep understanding of how the module worked.
What I envisioned was something that allowed me to define the permissions as part of the config. This design addressed the variability issues across the accounts by allowing me to define the permissions per iteration of the module. Additionally, it was easy to understand and manage (even if you didn’t know what the module was doing).
This looked something like:
module "usermap" {
  source = "../modules/example-module"

  role_map_aws_policies = {
    TeamA = ["AdministratorAccess"]
    TeamB = ["AmazonS3FullAccess", "AmazonEC2FullAccess"]
    TeamC = ["AdministratorAccess"]
    TeamD = ["ReadOnlyAccess", "AmazonInspectorFullAccess"]
  }
}
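The article doesn’t show the inside of example-module. A minimal sketch of how it might consume role_map_aws_policies, assuming the policy names are AWS managed policies and with an illustrative trusted_account_id variable and assume-role trust policy, could look like this:

variable "role_map_aws_policies" {
  type        = map(list(string))
  description = "Map of role name to the AWS managed policies attached to it"
}

variable "trusted_account_id" {
  type        = string
  description = "Account whose principals may assume the roles (hypothetical)"
}

data "aws_iam_policy_document" "assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${var.trusted_account_id}:root"]
    }
  }
}

# One IAM role per key in the map
resource "aws_iam_role" "this" {
  for_each           = var.role_map_aws_policies
  name               = each.key
  assume_role_policy = data.aws_iam_policy_document.assume.json
}

# Flatten the map into one (role, policy) pair per attachment
locals {
  attachments = flatten([
    for role, policies in var.role_map_aws_policies : [
      for policy in policies : { role = role, policy = policy }
    ]
  ])
}

resource "aws_iam_role_policy_attachment" "this" {
  for_each   = { for a in local.attachments : "${a.role}-${a.policy}" => a }
  role       = aws_iam_role.this[each.value.role].name
  policy_arn = "arn:aws:iam::aws:policy/${each.value.policy}"
}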
#aws #aws-iam #automating-aws-iam #terraform #terraform-modules
1601341562
Bob had just arrived in the office for his first day of work as the newly hired chief technical officer when he was called into a conference room by the president, Martha, who immediately introduced him to the head of accounting, Amanda. They exchanged pleasantries, and then Martha got right down to business:
“Bob, we have several teams here developing software applications on Amazon and our bill is very high. We think it’s unnecessarily high, and we’d like you to look into it and bring it under control.”
Martha placed a screenshot of the Amazon Web Services (AWS) billing report on the table and pointed to it.
“This is a problem for us: We don’t know what we’re spending this money on, and we need to see more detail.”
Amanda chimed in, “Bob, look, we have financial dimensions that we use for reporting purposes, and I can provide you with some guidance regarding some information we’d really like to see such that the reports that are ultimately produced mirror these dimensions — if you can do this, it would really help us internally.”
“Bob, we can’t stress how important this is right now. These projects are becoming very expensive for our business,” Martha reiterated.
“How many projects do we have?” Bob inquired.
“We have four projects in total: two in the aviation division and two in the energy division. If it matters, the aviation division has 75 developers and the energy division has 25 developers,” the CEO responded.
Bob understood the problem and responded, “I’ll see what I can do and have some ideas. I might not be able to give you retrospective insight, but going forward, we should be able to get a better idea of what’s going on and start to bring the cost down.”
The meeting ended with Bob heading off to find his desk. Cost allocation tags should help us, he thought to himself as he looked for someone who might know where his office was.
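A minimal sketch of the kind of tagging Bob has in mind (the tag keys and values are hypothetical, default_tags requires AWS provider v3.38 or later, and the tags must still be activated as cost allocation tags in the Billing console before they appear in cost reports):

# Apply the accounting dimensions to every resource a project's stack creates,
# so the AWS bill can later be grouped by division, project, and cost center.
provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      Division   = "aviation"    # aviation or energy
      Project    = "flight-ops"  # one of the four projects (hypothetical name)
      CostCenter = "AV-100"      # hypothetical financial dimension from accounting
    }
  }
}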
#aws #aws-cloud #node-js #cost-optimization #aws-cli #well-architected-framework #aws-cost-report #cost-control #aws-cost #aws-tags