Lindsey Koepp

Creating an S3 Bucket Module in Terraform

Before I get started, note that you can find my code in my repo at this link.

This bucket module is going to be made of a few different files.

  1. main.tf — for configuration
  2. variables.tf — for variables
  3. outputs.tf — for outputs

First we will take a look at the main.tf configuration.

Main.tf File

resource "aws_s3_bucket" "b" {
  bucket_prefix = var.bucket_prefix
  acl    = var.acl

versioning {
        enabled = var.versioning
    }
logging {
        target_bucket = var.target_bucket
        target_prefix = var.target_prefix
    }
server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = var.kms_master_key_id
        sse_algorithm     = var.sse_algorithm
      }
    }
  }
tags = var.tags
}

We are going to do a couple of things here that I want to note. First, we will set a variable for every argument so that we can define defaults. Second, we are choosing to use the bucket_prefix argument rather than the bucket argument, so that we don’t accidentally try to create a bucket with the same name as one that already exists in S3’s global namespace.

When we use bucket_prefix, it is best to name the bucket something like **my-bucket-** so that the unique string appended to the bucket name comes after the dash.
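Since variables.tf and outputs.tf are listed above but not shown in this excerpt, here is a minimal sketch of what they might contain. The variable names come straight from the main.tf; the defaults, descriptions, and choice of outputs are my own assumptions:

variable "bucket_prefix" {
  type        = string
  description = "Prefix for the bucket name; a unique suffix is appended."
  default     = "my-bucket-"
}

variable "acl" {
  type        = string
  description = "Canned ACL to apply to the bucket."
  default     = "private"
}

variable "versioning" {
  type        = bool
  description = "Whether to enable versioning on the bucket."
  default     = true
}

variable "target_bucket" {
  type        = string
  description = "Bucket that will receive the access logs."
}

variable "target_prefix" {
  type        = string
  description = "Key prefix for the access log objects."
  default     = "log/"
}

variable "kms_master_key_id" {
  type        = string
  description = "KMS key used for default encryption."
  default     = null
}

variable "sse_algorithm" {
  type        = string
  description = "Server-side encryption algorithm to use."
  default     = "aws:kms"
}

variable "tags" {
  type        = map(string)
  description = "Tags to apply to the bucket."
  default     = {}
}

And outputs.tf, exposing the values a caller is most likely to need:

output "bucket_id" {
  value = aws_s3_bucket.b.id
}

output "bucket_arn" {
  value = aws_s3_bucket.b.arn
}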

#terraform #devops #aws #hashicorp-terraform #aws-s3


Kole Haag

Using Terraform to Create an S3 Website Bucket

Creating the Provider Block

First we are going to need to create the provider code block in our main.tf.

provider "aws" {  
  version = "~> 2.0"  
  region  = var.region
}

Here we made sure to set region to var.region so that the region can be specified from the calling module.

Creating the S3 Bucket

Now we need to add in the code block for our S3 Bucket.

resource "aws_s3_bucket" "prod_website" {  
  bucket_prefix = var.bucket_prefix  
  acl    = "public-read"   

  website {    
    index_document = "index.html"    
    error_document = "error.html"   

  }
}

Now in this block you can see that we set a variable for bucket_prefix, but hard-coded public-read for the acl. We want to be able to set a value for bucket_prefix from the calling module, which is why we use a variable here.

For website we are going to keep the classic index.html and error.html, but feel free to change these if your use case calls for it.

Creating the Bucket Policy

Last, we need to create a bucket policy. We are going to allow public GetObject on all of the objects in our bucket, so we will use this code for our policy.

resource "aws_s3_bucket_policy" "prod_website" {  
  bucket = aws_s3_bucket.prod_website.id   

policy = <<POLICY
{    
    "Version": "2012-10-17",    
    "Statement": [        
      {            
          "Sid": "PublicReadGetObject",            
          "Effect": "Allow",            
          "Principal": "*",            
          "Action": [                
             "s3:GetObject"            
          ],            
          "Resource": [
             "arn:aws:s3:::${aws_s3_bucket.prod_website.id}/*"            
          ]        
      }    
    ]
}
POLICY
}

For the policy we need to set the resource addressing as above so that it targets our bucket. The policy itself then allows public s3:GetObject on all objects inside the bucket this resource creates.

Creating the variables.tf File

It is time to create our variables file. We just need to create variables for everything we set variables for in the main.tf: **var.bucket_prefix** and var.region.

variable "bucket_prefix" {  
  type        = string  
  description = "Name of the s3 bucket to be created."
} 
variable "region" {  
  type        = string  
  default     = "us-east-1"  
  description = "Name of the s3 bucket to be created."
}

I set the default region as us-east-1, but you can set it as whatever works best for you.

Creating outputs.tf File

We only need one output in order for this module to work.

output "s3_bucket_id" {
  value = aws_s3_bucket.prod_website.id
}

Since we reference the id of the S3 bucket in the calling module, we include it here as an output so that the parent module is able to read it from the child module.
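For example, the parent could surface that id in its own outputs like this (a short sketch using the module name from the Usage section below):

output "prod_website_bucket_id" {
  value = module.prod_website.s3_bucket_id
}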

Usage

module "prod_website" {
  source        = "github.com/jakeasarus/terraform/s3_website_no_cloudfront"
  bucket_prefix = "this-is-only-a-test-bucket-delete-me-"
}

Your source value may vary depending on where you put your files. Also, do not forget to set your provider block!
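From the directory containing this calling code, the standard Terraform workflow applies:

$ terraform init
$ terraform plan
$ terraform apply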

Conclusion

I hope you enjoyed this article and got some value out of it! Soon I will add another article that covers adding in a CloudFront distribution!

If you are interested in learning more about Terraform I have a Free Terraform Course for getting started and a course to help you study for your HashiCorp Certified: Terraform Associate.

I also highly suggest checking out Terraform Up & Running by Yevgeniy Brikman.

Happy learning!

#devops #hashicorp #terraform #terraform-modules #infrastructure-as-code

Lindsey Koepp

How to Mount S3 Bucket on an EC2 Linux Instance

So after you are done with that, we can now move on to mounting S3 as the file system for the EC2 instance.

An S3 bucket can be mounted on an AWS instance as a file system using s3fs. S3fs is a FUSE file system that allows you to mount an Amazon S3 bucket as a local file system. It behaves like a network-attached drive: it does not store anything on the Amazon EC2 instance itself, but the user can access the data on S3 from the EC2 instance.

Filesystem in Userspace (FUSE) is a simple interface for userspace programs to export a virtual file-system to the Linux kernel. It also aims to provide a secure method for non-privileged users to create and mount their own file-system implementations.

The s3fs-fuse project is written in C++ and backed by Amazon's Simple Storage Service. Amazon offers an open API to build applications on top of this service, which several companies have done, using various interfaces (web, sync, fuse, etc.).

Follow the steps below to mount your S3 bucket to your Linux instance.

This tutorial assumes that you have a running Linux EC2 instance on AWS with root access, and a bucket created in S3 that is to be mounted on your Linux instance. You will also require an access and secret key pair with sufficient S3 permissions, or the IAM access to create one.

We will perform the steps as a root user. You can also use the sudo command if you are a regular user with sudo access. So let us get started.

Set up everything properly

export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LC_COLLATE=C
export LC_CTYPE=en_US.UTF-8

Run these exports first to make sure your locale is configured correctly.

Step 1

For CentOS or Red Hat

yum update

For Ubuntu

apt-get update
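The excerpt cuts off here, but the remaining steps typically look something like the following sketch. The bucket name, mount point, and key values are placeholders, and the package install assumes Ubuntu (use EPEL/yum on CentOS or Red Hat):

## Install s3fs
apt-get install -y s3fs

## Store the access/secret key pair where s3fs expects it
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

## Create a mount point and mount the bucket
mkdir -p /mnt/s3bucket
s3fs your-bucket-name /mnt/s3bucket -o passwd_file=/etc/passwd-s3fs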

#ec2 #s3 #s3-bucket #terraform #aws

Getting Started With Terraform Modules

Introduction

In this article, we will see a subtle introduction to Terraform modules: how to pass data into a module, how to get something back out of it, and how to create a resource (a GKE cluster). It's intended to be as simple as possible, just to show what a module is composed of and how you can write your own. Sometimes it makes sense to have modules to abstract implementations that you use over several projects, or things that are often repeated along the project. So let's see what it takes to create and use a module.

The source code for this article can be found here. Note that in this example I'm using GCP, since they give you $300 USD for a year to try their services, and it looks pretty good so far. After sign-up you will need to go to IAM, create a service account, and then export the key (this is required for the Terraform provider to talk to GCP).

Composition of a Module

A module can be any folder with a main.tf file in it; yes, that is the only file required for a module to be usable. The recommendation, though, is that you also put a README.md file with a description of the module if it's intended to be used by people (if it's a sub-module, that's not necessary), along with a variables.tf and an outputs.tf file. Of course, if it's a big module that cannot be split into sub-modules, you can split those files further for convenience or readability. Variables should have descriptions so the tooling can show you what they are for. You can read more about the basics of a module here.

Before moving on let’s see the folder structure of our project:

├── account.json
├── LICENSE
├── main.tf
├── module
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── README.md
└── terraform.tfvars

1 directory, 8 files

The Project

Let’s start with the main.tf that will call our module. Notice that I added a few additional comments, but it’s pretty much straightforward: we set the provider, then we define some variables, call our module, and print some output (output can also be used to pass data between modules).

## Set the provider to be able to talk to GCP
provider "google" {
  credentials = "${file("account.json")}"
  project     = "${var.project_name}"
  region      = "${var.region}"
}

## Variable definition
variable "project_name" {
  default = "testinggcp"
  type    = "string"
}

variable "cluster_name" {
  default = "demo-terraform-cluster"
  type    = "string"
}

variable "region" {
  default = "us-east1"
  type    = "string"
}

variable "zone" {
  default = "us-east1-c"
  type    = "string"
}

## Call our module and pass the var zone in, and get cluster_name out
module "terraform-gke" {
  source = "./module"
  zone = "${var.zone}"
  cluster_name = "${var.cluster_name}"
}

## Print the value of k8s_master_version
output "kubernetes-version" {
  value = module.terraform-gke.k8s_master_version
}

Then terraform.tfvars has some values to override the defaults that we defined:

project_name = "testingcontainerengine"
cluster_name = "demo-cluster"
region = "us-east1"
zone = "us-east1-c"

#tutorial #devops #terraform #gcp cloud #terraform tutorial #kubernetes for beginners #terraform modules

Lindsey Koepp

Terraform Imports: Resources, Modules, for_each, and Count

If you are developing with Terraform, you will at some point work with Terraform imports. A simple web search yields plenty of results for simple imports of Terraform resources. However, often missing are some of the more complex or nuanced imports one might encounter in the real world (such as importing modules or resources created from for_each and count).

This guide will quickly cover the generic examples you find easily on the web, focus on some more unique stuff usually hidden in forum posts, and provide a handful of techniques I’ve picked up since imports became part of Terraform.

This guide assumes the reader has a good understanding of Terraform, Terraform modules, state file manipulation, and CI/CD. I’ll be using AWS for the examples.

Resource Import

This is perhaps the most prevalent example when searching for Terraform imports. Quite simply you have a resource defined in your Terraform code, some infrastructure out in the environment matching your Terraform resource definition, and you want to import that infrastructure into your Terraform state.

This example is for an aws_iam_user. I’ve already created a user named “bill” via the AWS IAM console and I would like to import this user into my Terraform state. Easy enough!

My Terraform code:

resource "aws_iam_user" "bill" {
  name = "bill"
  tags = { "foo" = "bar" }
}

A simple command:

$ terraform import aws_iam_user.bill bill
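The excerpt stops here, but to give a flavor of the more nuanced cases called out above, import addresses for resources inside modules or created with count and for_each follow this pattern (the module and resource names here are placeholders):

## A resource inside a module: prefix the address with module.<name>
$ terraform import module.iam.aws_iam_user.bill bill

## A resource created with count: the index goes in brackets
$ terraform import 'aws_iam_user.users[0]' bill

## A resource created with for_each: the key goes in brackets (quoted for the shell)
$ terraform import 'aws_iam_user.users["bill"]' bill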

#aws #terraform-import #infrastructure-as-code #terraform-modules #terraform