Terraform is an open-source “Infrastructure as Code” tool. It lets you define and manage your infrastructure through code rather than manual steps. With Terraform, you can write and maintain reusable code for provisioning cloud infrastructure, like servers and databases, on multiple providers such as AWS, Google Cloud Platform, and Azure. This makes it easier to deploy and manage multiple resources quickly and efficiently.
Terraform also simplifies upgrading and managing infrastructure over time, since you can apply changes to existing code or add new resources as needs evolve. In short, Terraform helps businesses automate infrastructure deployment and manage it in a scalable, secure, reliable, and cost-efficient way.
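As a minimal sketch of what such code looks like (the provider, region, and AMI ID below are illustrative placeholders, not values tied to any real project), a Terraform configuration describing a single server might be:
provider "aws" {
  region = "us-east-1"
}

# Hypothetical example: one small virtual server.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder image ID
  instance_type = "t3.micro"
}
Running terraform apply against a file like this asks the provider to create the described resource and then keeps tracking it in Terraform's state.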
Declarative, easy-to-use and safe Dependency Injection framework for Swift (iOS/macOS/Linux)
Dependency Injection basically means "giving an object its instance variables" ¹. It seems like it's not such a big deal, but as soon as a project gets bigger, it gets tricky. Initializers become too complex, passing dependencies down through several layers becomes time-consuming, and just figuring out where to get a dependency from can be hard enough to make you give up and finally use a singleton.
However, Dependency Injection is a fundamental aspect of software architecture, and there is no good reason not to do it properly. That's where Weaver can help.
Weaver is a declarative, easy-to-use and safe Dependency Injection framework for Swift.
                                                     |-> validate() -> valid/invalid
swift files -> scan() -> [Token] -> parse() -> AST -> link() -> Graph -> |
                                                     |-> generate() -> source code
Weaver scans the Swift sources of the project, looking for annotations, and generates an AST (abstract syntax tree). It uses SourceKitten which is backed by Apple's SourceKit.
The AST then goes through a linking phase, which outputs a dependency graph.
Some safety checks are then performed on the dependency graph in order to ensure that the generated code won't crash at runtime. Issues are reported in a friendly way in Xcode to make them easier to correct.
Finally, Weaver generates the boilerplate code which can directly be used to make the dependency injections happen.
Weaver can be installed using Homebrew, CocoaPods, Mint, or manually.
To install manually, download the latest release with the prebuilt binary from the release tab, unzip the archive into the desired destination, and run bin/weaver.
To install with Homebrew:
$ brew install weaver
To install with CocoaPods, add the following to your Podfile:
pod 'WeaverDI'
This will download the Weaver binaries and dependencies in Pods/ during your next pod install execution and will allow you to invoke it via ${PODS_ROOT}/WeaverDI/weaver/bin/weaver in your Script Build Phases.
This is the best way to install a specific version of Weaver since Homebrew cannot automatically install a specific version.
To use Weaver via Mint, prefix the normal usage with mint run scribd/Weaver like so:
mint run scribd/Weaver version
To use a specific version of Weaver, add the release tag like so:
mint run scribd/Weaver@1.0.7 version
To build from source, download the latest release source code from the release tab or clone the repository. In the project directory, run brew update && brew bundle && make install to build and install the command line tool.
Run the following to check if Weaver has been installed correctly.
$ weaver swift --help
Usage:
$ weaver swift
Options:
--project-path - Project's directory.
--config-path - Configuration path.
--main-output-path - Where the swift code gets generated.
--tests-output-path - Where the test helpers gets generated.
--input-path - Paths to input files.
--ignored-path - Paths to ignore.
--cache-path - Where the cache gets stored.
--recursive-off
--tests - Activates the test helpers' generation.
--testable-imports - Modules to imports in the test helpers.
--swiftlint-disable-all - Disables all swiftlint rules.
In Xcode, add the following command to a command line build phase:
weaver swift --project-path $PROJECT_DIR/$PROJECT_NAME --main-output-path output/relative/path
Important - Move this build phase above the Compile Source phase so that Weaver can generate the boilerplate code before compilation happens.
For a more complete usage example, please check out the sample project.
Let's implement a simple app displaying a list of movies. It will be composed of three noticeable objects:
- AppDelegate, where the dependencies are registered.
- MovieManager, providing the movies.
- MoviesViewController, showing a list of movies on the screen.
Let's get into the code.
AppDelegate with comment annotations:
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
var window: UIWindow?
private let dependencies = MainDependencyContainer.appDelegateDependencyResolver()
// weaver: movieManager = MovieManager <- MovieManaging
// weaver: movieManager.scope = .container
// weaver: moviesViewController = MoviesViewController <- UIViewController
// weaver: moviesViewController.scope = .container
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
window = UIWindow()
let rootViewController = dependencies.moviesViewController
window?.rootViewController = UINavigationController(rootViewController: rootViewController)
window?.makeKeyAndVisible()
return true
}
}
AppDelegate registers two dependencies:
// weaver: movieManager = MovieManager <- MovieManaging
// weaver: moviesViewController = MoviesViewController <- UIViewController
These dependencies are made accessible to any object built from AppDelegate because their scope is set to container:
// weaver: movieManager.scope = .container
// weaver: moviesViewController.scope = .container
A dependency registration automatically generates the registration code and one accessor in AppDelegateDependencyContainer, which is why the rootViewController can be built:
let rootViewController = dependencies.moviesViewController
AppDelegate with property wrapper annotations:
Since Weaver 1.0.1, you can use property wrappers instead of annotations in comments.
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
var window: UIWindow?
// Must be declared first!
private let dependencies = MainDependencyContainer.appDelegateDependencyResolver()
@Weaver(.registration, type: MovieManager.self, scope: .container)
private var movieManager: MovieManaging
@Weaver(.registration, type: MoviesViewController.self, scope: .container)
private var moviesViewController: UIViewController
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
window = UIWindow()
window?.rootViewController = UINavigationController(rootViewController: moviesViewController)
window?.makeKeyAndVisible()
return true
}
}
Note how dependencies can be accessed from the self instance directly.
Also note that the dependencies object must be declared and created prior to any other Weaver annotation. Not doing so would immediately crash the application.
It is possible to use comment and property wrapper annotations in the same type.
MovieManager:
protocol MovieManaging {
func getMovies(_ completion: @escaping (Result<Page<Movie>, MovieManagerError>) -> Void)
}
final class MovieManager: MovieManaging {
func getMovies(_ completion: @escaping (Result<Page<Movie>, MovieManagerError>) -> Void) {
// fetches movies from the server...
completion(.success(movies))
}
}
MoviesViewController with comment annotations:
final class MoviesViewController: UIViewController {
private let dependencies: MoviesViewControllerDependencyResolver
private var movies = [Movie]()
// weaver: movieManager <- MovieManaging
required init(injecting dependencies: MoviesViewControllerDependencyResolver) {
self.dependencies = dependencies
super.init(nibName: nil, bundle: nil)
}
override func viewDidLoad() {
super.viewDidLoad()
// Setups the tableview...
// Fetches the movies
dependencies.movieManager.getMovies { result in
switch result {
case .success(let page):
self.movies = page.results
self.tableView.reloadData()
case .failure(let error):
self.showError(error)
}
}
}
// ...
}
MoviesViewController declares a dependency reference:
// weaver: movieManager <- MovieManaging
This annotation generates an accessor in MoviesViewControllerDependencyResolver, but no registration, which means MovieManager is not stored in MoviesViewControllerDependencyContainer, but in its parent (the container from which it was built). In this case, AppDelegateDependencyContainer.
MoviesViewController also needs to declare a specific initializer:
required init(injecting dependencies: MoviesViewControllerDependencyResolver)
This initializer is used to inject the DI Container. Note that MoviesViewControllerDependencyResolver is a protocol, which means a fake version of the DI Container can be injected when testing.
MoviesViewController with property wrapper annotations:
final class MoviesViewController: UIViewController {
private var movies = [Movie]()
@Weaver(.reference)
private var movieManager: MovieManaging
required init(injecting _: MoviesViewControllerDependencyResolver) {
super.init(nibName: nil, bundle: nil)
}
override func viewDidLoad() {
super.viewDidLoad()
// Setups the tableview...
// Fetches the movies
movieManager.getMovies { result in
switch result {
case .success(let page):
self.movies = page.results
self.tableView.reloadData()
case .failure(let error):
self.showError(error)
}
}
}
// ...
}
Weaver allows you to declare dependencies by annotating the code with comments like // weaver: ... or property wrappers like @Weaver(...) var ....
It currently supports the following annotations:
A registration annotation adds the dependency builder to the container and adds an accessor for the dependency to the container's resolver protocol.
Example:
// weaver: dependencyName = DependencyConcreteType <- DependencyProtocol
@Weaver(.registration, type: DependencyConcreteType.self)
var dependencyName: DependencyProtocol
or
// weaver: dependencyName = DependencyConcreteType
@Weaver(.registration)
var dependencyName: DependencyConcreteType
- dependencyName: Dependency's name. Used to make reference to the dependency in other objects and/or annotations.
- DependencyConcreteType: Dependency's implementation type. Can be a struct or a class.
- DependencyProtocol: Dependency's protocol, if any. Optional, you can register a dependency with its concrete type only.
A reference annotation adds an accessor for the dependency to the container's protocol.
Example:
// weaver: dependencyName <- DependencyType
@Weaver(.reference)
var dependencyName: DependencyType
- DependencyType: Either the concrete or abstract type of the dependency. This also defines the type the dependency's accessor returns.
A parameter annotation adds a parameter to the container's resolver protocol. This means that the generated container needs to take this parameter at initialisation. It also means that all the concerned dependency accessors need to take this parameter.
Example:
// weaver: parameterName <= ParameterType
@Weaver(.parameter)
var parameterName: ParameterType
A scope annotation sets the scope of a dependency. The default scope is container. It only works for registrations or weak parameters.
The scope defines a dependency's lifecycle. Four scopes are available:
- transient: always creates a new instance when resolved.
- container: builds an instance at initialization of its container and lives as long as its container lives.
- weak: a new instance is created when resolved the first time and then lives as long as its strong references are living.
- lazy: a new instance is created when resolved the first time, with the same lifetime as its container.
Example:
// weaver: dependencyName.scope = .scopeValue
@Weaver(.registration, scope: .scopeValue)
var dependencyName: DependencyType
- scopeValue: Value of the scope. It can be one of the values described above.
A custom builder annotation overrides a dependency's default initialization code. It works for registration annotations only.
Example:
// weaver: dependencyName.builder = DependencyType.make
@Weaver(.registration, builder: DependencyType.make)
var dependencyName: DependencyType
- DependencyType.make: Code overriding the dependency's initialization code, taking DependencyTypeInputDependencyResolver as a parameter and returning DependencyType (e.g. make's signature could be static func make(_ dependencies: DependencyTypeInputDependencyResolver) -> DependencyType).
Warning - Make sure you don't do anything unsafe with the DependencyResolver parameter passed down in this method, since it won't be caught by the dependency graph validator.
A configuration annotation sets a configuration attribute on the concerned object.
Example:
// weaver: dependencyName.attributeName = aValue
@Weaver(..., attributeName: aValue, ...)
var dependencyName: DependencyType
Configuration Attributes:
- isIsolated: Bool (default: false): any object setting this to true is considered by Weaver as an object which isn't used in the project. An object flagged as isolated can only have isolated dependents. This attribute is useful to develop a feature without all the dependencies set up in the project.
- setter: Bool (default: false): generates a setter (setDependencyName(dependency)) in the dependency container. Note that a dependency using a setter has to be set manually before being accessed through a dependency resolver or it will crash.
- objc: Bool (default: false): generates an ObjC-compliant resolver for a given dependency, allowing it to be accessed from ObjC code.
- escaping: Bool (default: true when applicable): asks Weaver to use @escaping when declaring a closure parameter.
- platforms: [Platform] (default: []): list of platforms for which Weaver is allowed to use the dependency. An empty list means any platform is allowed.
Types using parameter annotations need to take the said parameters as an input when being registered or referenced. This is particularly true when using property wrappers, because the signature of the annotation won't compile if not done correctly.
For example, the following shows how a type taking two parameters at initialization can be annotated:
final class MovieViewController {
@Weaver(.parameter) private var movieID: Int
@Weaver(.parameter) private var movieTitle: String
}
And how that same type can be registered and referenced:
@WeaverP2(.registration)
private var movieViewController: (Int, String) -> MovieViewController
@WeaverP2(.reference)
private var movieViewController: (Int, String) -> MovieViewController
Note that Weaver generates one property wrapper per number of input parameters, so if a type takes one parameter, WeaverP1 shall be used; for two parameters, WeaverP2; and so on.
Weaver can also generate a dependency container stub which can be used for testing. This feature is accessible by adding the option --tests to the command (e.g. weaver swift --tests).
To compile, the stub expects certain type doubles to be implemented.
For example, given the following code:
final class MovieViewController {
@Weaver(.reference) private var movieManager: MovieManaging
}
The generated stub expects MovieManagingDouble to be implemented in order to compile.
Testing MovieViewController can then be written like the following:
final class MovieViewControllerTests: XCTestCase {
func test_view_controller() {
let dependencies = MainDependencyResolverStub()
let viewController = dependencies.buildMovieViewController()
viewController.viewDidLoad()
XCTAssertEqual(dependencies.movieManagerDouble.didRequestMovies, true)
}
}
To generate the boilerplate code, the swift command shall be used.
$ weaver swift --help
Usage:
$ weaver swift
Options:
--project-path - Project's directory.
--config-path - Configuration path.
--main-output-path - Where the swift code gets generated.
--tests-output-path - Where the test helpers gets generated.
--input-path - Paths to input files.
--ignored-path - Paths to ignore.
--cache-path - Where the cache gets stored.
--recursive-off
--tests - Activates the test helpers' generation.
--testable-imports - Modules to imports in the test helpers.
--swiftlint-disable-all - Disables all swiftlint rules.
--platform - Targeted platform.
--included-imports - Included imports.
--excluded-imports - Excluded imports.
weaver swift --project-path $PROJECT_DIR/$PROJECT_NAME --main-output-path Generated
- --project-path: Acts like a base path for other relative paths like config-path, output-path, template-path, input-path and ignored-path. It defaults to the running directory.
- --config-path: Path to a configuration file. By default, Weaver automatically detects .weaver.yaml and .weaver.json located at project-path.
- --main-output-path: Path where the code will be generated. Defaults to project-path.
- --tests-output-path: Path where the test utils code will be generated. Defaults to project-path.
- --input-path: Path to the project's Swift code. Defaults to project-path. Variadic parameter, which means it can be set more than once. By default, Weaver recursively reads any Swift file located under the input-path.
- --ignored-path: Same as input-path but for ignoring files which shouldn't be parsed by Weaver.
- --recursive-off: Deactivates recursivity for input-path and ignored-path.
- --tests: Activates the test helpers' generation.
- --testable-imports: Modules to import in the test helpers. Variadic parameter, which means it can be set more than once.
- --swiftlint-disable-all: Disables all swiftlint rules in generated files.
- --platform: Platform for which the generated code will be compiled (iOS, watchOS, OSX, macOS or tvOS).
- --included-imports: Modules which can be imported in generated files.
- --excluded-imports: Modules which can't be imported in generated files.
Weaver can read a configuration file rather than getting its parameters from the command line. It supports both json and yaml formats.
To configure Weaver with a file, write a file named .weaver.yaml or .weaver.json at the root of your project.
Parameters are named the same, but snake-cased. They also work the same way, with one exception: project_path cannot be defined in a configuration file. Weaver automatically sets its value to the configuration file's location.
For example, the sample project configuration looks like:
main_output_path: Sample/Generated
input_paths:
- Sample
ignored_paths:
- Sample/Generated
In order to avoid parsing the same Swift files over and over again, Weaver has a built-in cache system, which means it won't reprocess files which haven't changed since the last time they were processed.
This functionality is great in a development environment because it makes Weaver's build phase much faster most of the time. However, on a CI it is preferable to let Weaver process the Swift files every time for safety, which is what the clean command is for.
For example, the following always processes all of the swift code:
$ weaver clean
$ weaver swift
Weaver can output a JSON representation of the dependency graph of a project.
$ weaver json --help
Usage:
$ weaver json
Options:
--project-path - Project's directory.
--config-path - Configuration path.
--pretty [default: false]
--input-path - Paths to input files.
--ignored-path - Paths to ignore.
--cache-path - Cache path.
--recursive-off
--platform - Selected platform
For an output example, please check this Gist.
To contribute, create your feature branch (git checkout -b my-new-feature), commit your changes (git commit -am 'Add some feature'), push to the branch (git push origin my-new-feature), and open a pull request.
If you're looking for a step by step tutorial, check out these links.
Author: scribd
Source Code: https://github.com/scribd/Weaver
License: MIT license
Follow this tutorial to see how using GitLab can further enhance collaboration in your OpenStack cluster.
One virtue of GitOps is Infrastructure as Code. It encourages collaboration by using a shared configuration and policy repository. Using GitLab can further enhance collaboration in your OpenStack cluster. GitLab CI can serve as your source control and orchestration hub for CI/CD, and it can even manage Terraform state.
To achieve this, you need a GitLab account, an OpenStack cluster, a host with access to the OpenStack APIs where you can run Terraform, and (later in the article) a machine to host a GitLab runner.
The goal is to achieve collaboration through Terraform, so you need to have a centralized state file. GitLab has a managed state for Terraform. With this feature, you can enable individuals to manage OpenStack collaboratively.
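In Terraform terms, GitLab's managed state is consumed through the generic http backend; a minimal sketch looks like the block below, and the full configuration (including the GitLab state URL and credentials) is set up later in this article:
terraform {
  backend "http" {
    # The GitLab state address, username, and access token are supplied
    # at `terraform init` time with -backend-config flags.
  }
}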
Log in to GitLab, click on the hamburger menu, and click Groups→View all groups.
Create a group by clicking on New group and then on Create group.
Name the group to generate a unique group URL, and invite your team to work with you.
After creating a group, create a project by clicking Create new project, and then Create blank project:
Name your project. GitLab generates a unique project URL for you. This project contains the repository for your Terraform scripts and Terraform state.
The repository needs a personal access token to manage this Terraform state. In your profile, select Edit Profile:
Click Access Token in the side panel to access a menu for creating an access token. Save your token because you can't view it again.
On a computer with direct access to your OpenStack installation, clone the repository and then change to the resulting directory:
$ git clone git@gitlab.com:testgroup2170/testproject.git
$ cd testproject
Create a backend file to configure GitLab as your state backend:
$ cat >> backend.tf << EOF
terraform {
backend "http" {
}
}
EOF
This provider file pulls the provider for OpenStack:
$ cat >> provider.tf << EOF
terraform {
required_version = ">= 0.14.0"
required_providers {
openstack = {
source = "terraform-provider-openstack/openstack"
version = "1.49.0"
}
}
}
provider "openstack" {
user_name = var.OS_USERNAME
tenant_name = var.OS_TENANT
password = var.OS_PASSWORD
auth_url = var.OS_AUTH_URL
region = var.OS_REGION
}
EOF
Because you've declared a variable in the provider, you must declare it in a variable file:
$ cat >> variables.tf << EOF
variable "OS_USERNAME" {
type = string
description = "OpenStack Username"
}
variable "OS_TENANT" {
type = string
description = "OpenStack Tenant/Project Name"
}
variable "OS_PASSWORD" {
type = string
description = "OpenStack Password"
}
variable "OS_AUTH_URL" {
type = string
description = "OpenStack Identitiy/Keystone API for authentication"
}
variable "OS_REGION" {
type = string
description = "OpenStack Region"
}
EOF
Because you're initially working locally, you must set those variables to make it work:
$ cat >> terraform.tfvars << EOF
OS_USERNAME = "admin"
OS_TENANT = "admin"
OS_PASSWORD = "YYYYYYYYYYYYYYYYYYYYYY"
OS_AUTH_URL = "http://X.X.X.X:35357/v3"
OS_REGION = "RegionOne"
EOF
These details are available in your rc file from OpenStack.
Initializing the project is quite different because you need to tell Terraform to use GitLab as your state backend:
PROJECT_ID="<gitlab-project-id>"
TF_USERNAME="<gitlab-username>"
TF_PASSWORD="<gitlab-personal-access-token>"
TF_STATE_NAME="<your-unique-state-name>"
TF_ADDRESS="https://gitlab.com/api/v4/projects/${PROJECT_ID}/terraform/state/${TF_STATE_NAME}"
$ terraform init \
-backend-config=address=${TF_ADDRESS} \
-backend-config=lock_address=${TF_ADDRESS}/lock \
-backend-config=unlock_address=${TF_ADDRESS}/lock \
-backend-config=username=${TF_USERNAME} \
-backend-config=password=${TF_PASSWORD} \
-backend-config=lock_method=POST \
-backend-config=unlock_method=DELETE \
-backend-config=retry_wait_min=5
To view the gitlab-project-id, look in the project details, just above the Project Information tab in the side panel, where it appears under your project name. For me, it's 42580143.
Use your username for gitlab-username. Mine is ajohnsc.
The gitlab-personal-access-token is the token you created earlier in this exercise. In this example, I use wwwwwwwwwwwwwwwwwwwww. You can name your-unique-state-name anything. I used homelab.
Here is my initialization script:
PROJECT_ID="42580143"
TF_USERNAME="ajohnsc"
TF_PASSWORD="wwwwwwwwwwwwwwwwwwwww"
TF_STATE_NAME="homelab"
TF_ADDRESS="https://gitlab.com/api/v4/projects/${PROJECT_ID}/terraform/state/${TF_STATE_NAME}"
To use the file:
$ terraform init \
-backend-config=address=${TF_ADDRESS} \
-backend-config=lock_address=${TF_ADDRESS}/lock \
-backend-config=unlock_address=${TF_ADDRESS}/lock \
-backend-config=username=${TF_USERNAME} \
-backend-config=password=${TF_PASSWORD} \
-backend-config=lock_method=POST \
-backend-config=unlock_method=DELETE \
-backend-config=retry_wait_min=5
The output confirms that Terraform has been initialized with the GitLab-managed state backend.
This sets the size of the VMs for my OpenStack flavors:
$ cat >> flavors.tf << EOF
resource "openstack_compute_flavor_v2" "small-flavor" {
name = "small"
ram = "4096"
vcpus = "1"
disk = "0"
flavor_id = "1"
is_public = "true"
}
resource "openstack_compute_flavor_v2" "medium-flavor" {
name = "medium"
ram = "8192"
vcpus = "2"
disk = "0"
flavor_id = "2"
is_public = "true"
}
resource "openstack_compute_flavor_v2" "large-flavor" {
name = "large"
ram = "16384"
vcpus = "4"
disk = "0"
flavor_id = "3"
is_public = "true"
}
resource "openstack_compute_flavor_v2" "xlarge-flavor" {
name = "xlarge"
ram = "32768"
vcpus = "8"
disk = "0"
flavor_id = "4"
is_public = "true"
}
EOF
The settings for my external network are as follows:
$ cat >> external-network.tf << EOF
resource "openstack_networking_network_v2" "external-network" {
name = "external-network"
admin_state_up = "true"
external = "true"
segments {
network_type = "flat"
physical_network = "physnet1"
}
}
resource "openstack_networking_subnet_v2" "external-subnet" {
name = "external-subnet"
network_id = openstack_networking_network_v2.external-network.id
cidr = "10.0.0.0/8"
gateway_ip = "10.0.0.1"
dns_nameservers = ["10.0.0.254", "10.0.0.253"]
allocation_pool {
start = "10.0.0.2"
end = "10.0.254.254"
}
}
EOF
Router settings look like this:
$ cat >> routers.tf << EOF
resource "openstack_networking_router_v2" "external-router" {
name = "external-router"
admin_state_up = true
external_network_id = openstack_networking_network_v2.external-network.id
}
EOF
Enter the following for images:
$ cat >> images.tf << EOF
resource "openstack_images_image_v2" "cirros" {
name = "cirros"
image_source_url = "https://download.cirros-cloud.net/0.6.1/cirros-0.6.1-x86_64-disk.img"
container_format = "bare"
disk_format = "qcow2"
}
EOF
Here is a Demo tenant:
$ cat >> demo-project-user.tf << EOF
resource "openstack_identity_project_v3" "demo-project" {
name = "Demo"
}
resource "openstack_identity_user_v3" "demo-user" {
name = "demo-user"
default_project_id = openstack_identity_project_v3.demo-project.id
password = "demo"
}
EOF
When complete, you will have this file structure:
.
├── backend.tf
├── demo-project-user.tf
├── external-network.tf
├── flavors.tf
├── images.tf
├── provider.tf
├── routers.tf
├── terraform.tfvars
└── variables.tf
After the files are complete, you can create the plan files with the terraform plan command:
$ terraform plan
Acquiring state lock. This may take a few moments...
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# openstack_compute_flavor_v2.large-flavor will be created
+ resource "openstack_compute_flavor_v2" "large-flavor" {
+ disk = 0
+ extra_specs = (known after apply)
+ flavor_id = "3"
+ id = (known after apply)
+ is_public = true
+ name = "large"
+ ram = 16384
+ region = (known after apply)
+ rx_tx_factor = 1
+ vcpus = 4
}
[...]
Plan: 10 to add, 0 to change, 0 to destroy.
Releasing state lock. This may take a few moments...
After all plan files have been created, apply them with the terraform apply command:
$ terraform apply -auto-approve
Acquiring state lock. This may take a few moments...
[...]
Plan: 10 to add, 0 to change, 0 to destroy.
openstack_compute_flavor_v2.large-flavor: Creating...
openstack_compute_flavor_v2.small-flavor: Creating...
openstack_identity_project_v3.demo-project: Creating...
openstack_networking_network_v2.external-network: Creating...
openstack_compute_flavor_v2.xlarge-flavor: Creating...
openstack_compute_flavor_v2.medium-flavor: Creating...
openstack_images_image_v2.cirros: Creating...
[...]
Releasing state lock. This may take a few moments...
Apply complete! Resources: 10 added, 0 changed, 0 destroyed.
After applying the infrastructure, return to GitLab and navigate to your project. Look in Infrastructure → Terraform to confirm that the state homelab has been created.
Now that you've created a state, try destroying the infrastructure so you can apply the CI pipeline later. Of course, this is purely for moving from Terraform CLI to a Pipeline. If you have an existing infrastructure, you can skip this step.
$ terraform destroy -auto-approve
Acquiring state lock. This may take a few moments...
openstack_identity_project_v3.demo-project: Refreshing state... [id=5f86d4229003404998dfddc5b9f4aeb0]
openstack_networking_network_v2.external-network: Refreshing state... [id=012c10f3-8a51-4892-a688-aa9b7b43f03d]
[...]
Plan: 0 to add, 0 to change, 10 to destroy.
openstack_compute_flavor_v2.small-flavor: Destroying... [id=1]
openstack_compute_flavor_v2.xlarge-flavor: Destroying... [id=4]
openstack_networking_router_v2.external-router: Destroying... [id=73ece9e7-87d7-431d-ad6f-09736a02844d]
openstack_compute_flavor_v2.large-flavor: Destroying... [id=3]
openstack_identity_user_v3.demo-user: Destroying... [id=96b48752e999424e95bc690f577402ce]
[...]
Destroy complete! Resources: 10 destroyed.
You now have a state everyone can use. You can provision using a centralized state. With the proper pipeline, you can automate common tasks.
Your OpenStack cluster isn't public-facing, and the OpenStack API isn't exposed. You must have a GitLab runner to run GitLab pipelines. GitLab runners are services or agents that run and perform tasks on the remote GitLab server.
On a computer on a different network, create a container for a GitLab runner:
$ docker volume create gitlab-runner-config
$ docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v gitlab-runner-config:/etc/gitlab-runner \
gitlab/gitlab-runner:latest
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
880e2ed289d3 gitlab/gitlab-runner:latest "/usr/bin/dumb-init …" 3 seconds ago Up 2 seconds gitlab-runner-test
Now register it with your project in your GitLab project's Settings → CI/CD panel:
Scroll down to Runners → Collapse:
The GitLab runner registration token and URL are required. Disable shared runners on the right side to ensure the pipeline runs only on this runner. Run the gitlab-runner container to register the runner:
$ docker exec -ti gitlab-runner /usr/bin/gitlab-runner register
Runtime platform arch=amd64 os=linux pid=18 revision=6d480948 version=15.7.1
Running in system-mode.
Enter the GitLab instance URL (for example, https://gitlab.com/):
https://gitlab.com/
Enter the registration token:
GR1348941S1bVeb1os44ycqsdupRK
Enter a description for the runner:
[880e2ed289d3]: dockerhost
Enter tags for the runner (comma-separated):
homelab
Enter optional maintenance note for the runner:
WARNING: Support for registration tokens and runner parameters in the 'register' command has been deprecated in GitLab Runner 15.6 and will be replaced with support for authentication tokens. For more information, see https://gitlab.com/gitlab-org/gitlab/-/issues/380872
Registering runner... succeeded runner=GR1348941S1bVeb1o
Enter an executor: docker-ssh, shell, virtualbox, instance, kubernetes, custom, docker, parallels, ssh, docker+machine, docker-ssh+machine:
docker
Enter the default Docker image (for example, ruby:2.7):
ajscanlas/homelab-runner:3.17
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
Configuration (with the authentication token) was saved in "/etc/gitlab-runner/config.toml"
Upon success, your GitLab interface displays your runner as valid.
You can now use that runner to automate provisioning with a CI/CD pipeline in GitLab.
Now you can set up a pipeline. Add a file named .gitlab-ci.yml in your repository to define your CI/CD steps. Ignore the files you don't need, like .terraform directories and sensitive data like variable files.
Here's my .gitignore file:
$ cat .gitignore
*.tfvars
.terraform*
Here are my CI pipeline entries in .gitlab-ci.yml:
$ cat .gitlab-ci.yml
default:
tags:
- homelab
variables:
TF_ROOT: ${CI_PROJECT_DIR}
TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/homelab
cache:
key: homelab
paths:
- ${TF_ROOT}/.terraform*
stages:
- prepare
- validate
- build
- deploy
before_script:
- cd ${TF_ROOT}
tf-init:
stage: prepare
script:
- terraform --version
- terraform init -backend-config=address=${BE_REMOTE_STATE_ADDRESS} -backend-config=lock_address=${BE_REMOTE_STATE_ADDRESS}/lock -backend-config=unlock_address=${BE_REMOTE_STATE_ADDRESS}/lock -backend-config=username=${BE_USERNAME} -backend-config=password=${BE_ACCESS_TOKEN} -backend-config=lock_method=POST -backend-config=unlock_method=DELETE -backend-config=retry_wait_min=5
tf-validate:
stage: validate
dependencies:
- tf-init
variables:
TF_VAR_OS_AUTH_URL: ${OS_AUTH_URL}
TF_VAR_OS_PASSWORD: ${OS_PASSWORD}
TF_VAR_OS_REGION: ${OS_REGION}
TF_VAR_OS_TENANT: ${OS_TENANT}
TF_VAR_OS_USERNAME: ${OS_USERNAME}
script:
- terraform validate
tf-build:
stage: build
dependencies:
- tf-validate
variables:
TF_VAR_OS_AUTH_URL: ${OS_AUTH_URL}
TF_VAR_OS_PASSWORD: ${OS_PASSWORD}
TF_VAR_OS_REGION: ${OS_REGION}
TF_VAR_OS_TENANT: ${OS_TENANT}
TF_VAR_OS_USERNAME: ${OS_USERNAME}
script:
- terraform plan -out "planfile"
artifacts:
paths:
- ${TF_ROOT}/planfile
tf-deploy:
stage: deploy
dependencies:
- tf-build
variables:
TF_VAR_OS_AUTH_URL: ${OS_AUTH_URL}
TF_VAR_OS_PASSWORD: ${OS_PASSWORD}
TF_VAR_OS_REGION: ${OS_REGION}
TF_VAR_OS_TENANT: ${OS_TENANT}
TF_VAR_OS_USERNAME: ${OS_USERNAME}
script:
- terraform apply -auto-approve "planfile"
The process starts by declaring that every step and stage is under the homelab tag, allowing your GitLab runner to run it.
default:
tags:
- homelab
Next, the variables are set on the pipeline. The variables are only present when the pipeline is running:
variables:
TF_ROOT: ${CI_PROJECT_DIR}
TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/homelab
There's a cache that saves specific files and directories upon running from stage to stage:
cache:
key: homelab
paths:
- ${TF_ROOT}/.terraform*
These are the stages that the pipeline follows:
stages:
- prepare
- validate
- build
- deploy
This declares what to do before any stages are run:
before_script:
- cd ${TF_ROOT}
In the prepare stage, the tf-init job initializes the Terraform scripts, gets the provider, and sets its backend to GitLab. Variables that aren't declared yet are added as environment variables later.
tf-init:
stage: prepare
script:
- terraform --version
- terraform init -backend-config=address=${BE_REMOTE_STATE_ADDRESS} -backend-config=lock_address=${BE_REMOTE_STATE_ADDRESS}/lock -backend-config=unlock_address=${BE_REMOTE_STATE_ADDRESS}/lock -backend-config=username=${BE_USERNAME} -backend-config=password=${BE_ACCESS_TOKEN} -backend-config=lock_method=POST -backend-config=unlock_method=DELETE -backend-config=retry_wait_min=5
In this part, the CI job tf-validate and the stage validate run Terraform to validate that the Terraform scripts are free of syntax errors. Variables not yet declared are added as environment variables later.
tf-validate:
stage: validate
dependencies:
- tf-init
variables:
TF_VAR_OS_AUTH_URL: ${OS_AUTH_URL}
TF_VAR_OS_PASSWORD: ${OS_PASSWORD}
TF_VAR_OS_REGION: ${OS_REGION}
TF_VAR_OS_TENANT: ${OS_TENANT}
TF_VAR_OS_USERNAME: ${OS_USERNAME}
script:
- terraform validate
Next, the CI job tf-build with the stage build creates the plan file using terraform plan and temporarily saves it using the artifacts tag.
tf-build:
stage: build
dependencies:
- tf-validate
variables:
TF_VAR_OS_AUTH_URL: ${OS_AUTH_URL}
TF_VAR_OS_PASSWORD: ${OS_PASSWORD}
TF_VAR_OS_REGION: ${OS_REGION}
TF_VAR_OS_TENANT: ${OS_TENANT}
TF_VAR_OS_USERNAME: ${OS_USERNAME}
script:
- terraform plan -out "planfile"
artifacts:
paths:
- ${TF_ROOT}/planfile
In the next section, the CI job tf-deploy with the stage deploy applies the plan file.
tf-deploy:
stage: deploy
dependencies:
- tf-build
variables:
TF_VAR_OS_AUTH_URL: ${OS_AUTH_URL}
TF_VAR_OS_PASSWORD: ${OS_PASSWORD}
TF_VAR_OS_REGION: ${OS_REGION}
TF_VAR_OS_TENANT: ${OS_TENANT}
TF_VAR_OS_USERNAME: ${OS_USERNAME}
script:
- terraform apply -auto-approve "planfile"
The pipeline references several variables, so you must declare them in Settings → CI/CD → Variables → Expand.
Add all the variables required:
BE_ACCESS_TOKEN => GitLab Access Token
BE_REMOTE_STATE_ADDRESS => This was the rendered TF_ADDRESS variable
BE_USERNAME => GitLab username
OS_USERNAME => OpenStack Username
OS_TENANT => OpenStack tenant
OS_PASSWORD => OpenStack User Password
OS_AUTH_URL => Auth URL
OS_REGION => OpenStack Region
So for this example, I used the following:
BE_ACCESS_TOKEN = "wwwwwwwwwwwwwwwwwwwww"
BE_REMOTE_STATE_ADDRESS = https://gitlab.com/api/v4/projects/42580143/terraform/state/homelab
BE_USERNAME = "ajohnsc"
OS_USERNAME = "admin"
OS_TENANT = "admin"
OS_PASSWORD = "YYYYYYYYYYYYYYYYYYYYYY"
OS_AUTH_URL = "http://X.X.X.X:35357/v3"
OS_REGION = "RegionOne"
These values are masked in GitLab for their protection.
The last step is to push the new files to the repository:
$ git add .
$ git commit -m "First commit"
[main (root-commit) e78f701] First commit
10 files changed, 194 insertions(+)
create mode 100644 .gitignore
create mode 100644 .gitlab-ci.yml
create mode 100644 backend.tf
create mode 100644 demo-project-user.tf
create mode 100644 external-network.tf
create mode 100644 flavors.tf
create mode 100644 images.tf
create mode 100644 provider.tf
create mode 100644 routers.tf
create mode 100644 variables.tf
$ git push
Enumerating objects: 12, done.
Counting objects: 100% (12/12), done.
Delta compression using up to 4 threads
Compressing objects: 100% (10/10), done.
Writing objects: 100% (12/12), 2.34 KiB | 479.00 KiB/s, done.
Total 12 (delta 0), reused 0 (delta 0), pack-reused 0
To gitlab.com:testgroup2170/testproject.git
* [new branch] main -> main
View your new pipelines in the CI/CD section of GitLab.
On the OpenStack side, you can see the resources created by Terraform: the networks, the flavors, the images, the Demo project, and the demo user.
Terraform has so much potential, and Terraform and Ansible are great together. In my next article, I'll demonstrate how Ansible can work with OpenStack.
Original article source at: https://opensource.com/
In this video, we will create a simple example of an Event-Driven Architecture app. We will use the Finite State Machine pattern where we will change the state every time we get a new event.
Timestamps:
00:00 Intro
01:36 Terraform
17:58 Github Actions
32:36 Internet Gateway
Source Code: https://github.com/scalablescripts/node-terraform
Subscribe: https://www.youtube.com/@ScalableScripts/featured
In this Terraform tutorial, we will learn how to deploy a function in Google Cloud with Terraform. Previously, we did it through GCP's command-line utility.
Now, we can create and run the same Cloud Function using Terraform.
When we deployed our function using Google's SDK directly, we had to use a command with several flags that could be grouped together in a deploy.sh script:
gcloud functions deploy $FN_NAME \
--entry-point=$FN_ENTRY_POINT \
--runtime=nodejs16 \
--region=us-central1 \
--trigger-http \
--allow-unauthenticated
In this script, we are specifying exactly how we want our cloud function to be. The flags specify the entrypoint, the runtime, the region, the trigger, and so on.
One could say we are describing how our infrastructure should be. Exactly what we could do with infrastructure as code - in this case, using Terraform!
main.tf
The main.tf file is the starting point for Terraform to build and manage your infrastructure.
We can start by adding a provider. A provider is a plugin that lets you use the API operations of a specific cloud provider or service, such as AWS, Google Cloud, Azure etc.
provider "google" {
project = "project_name"
region = "us-central1"
}
But let's think about the following scenario: what if you wanted to create a generic template infrastructure that could be reused for different projects other than project_name?
Here is where the tfvars file comes in: a file in which you can put all your environment variables:
google_project_name = "project_name"
And now you can use this variable in your main.tf (you also need to add a block telling Terraform you've declared a variable somewhere else):
variable "google_project_name" {}
provider "google" {
project = "${var.google_project_name}"
region = "us-central1"
}
Now, let's start to add the infrastructure specific to our project!
Resources
A Terraform resource is a unit of Terraform configuration that represents a real-world infrastructure object, such as an EC2 instance, an S3 bucket, or a virtual network. In our case, we are going to represent a cloud function.
We define these resources in blocks, where we describe the desired state of the resource - including properties such as the type, name, and other configuration options.
Understanding how state works is important because, every time Terraform applies changes to the infrastructure of our projects, it updates resources to match the desired state defined in the Terraform configuration.
What is inside a resource?
Besides the definition previously mentioned, a Terraform resource is - syntactically - a block composed of three parts: the resource type, such as aws_instance or google_compute_instance; a local name used to refer to the resource elsewhere in the configuration; and a body containing the resource's configuration arguments.
Alright. We are getting there.
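For illustration, here is a minimal sketch of those three parts; the resource type google_storage_bucket is real, while the local name and argument values are made up:
# Resource type: "google_storage_bucket", local name: "example_bucket".
resource "google_storage_bucket" "example_bucket" {
  # Body: arguments describing the desired state of this resource.
  name     = "example-bucket-1234"
  location = "us-central1"
}
Elsewhere in the configuration, this resource would be referenced as google_storage_bucket.example_bucket.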
Let's then create the resource block for our Google Cloud Function!
Each resource block has its specific properties. You can find them in the docs of the Terraform provider you are using. For example, here is the docs for the cloud function we'll be creating:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/cloudfunctions_function
We can start by defining a few things, such as the name, description and runtime:
resource "google_cloudfunctions_function" "my_function" {
name = "my_function"
description = "the function we are going to deploy"
runtime = "nodejs16"
}
Note: you may have noticed that we are repeating my_function twice here.
It happens because we have to set a name for the resource - in this case, my_function, which is translated to google_cloudfunctions_function.my_function in Terraform - and we also have to set the value of the name field of the block, which is going to be used by Google - not Terraform - to identify your function.
However, even though we know these basic properties of our function, where is the source code? In the previous tutorial, the Google SDK was able to look into our root directory to find our index.js file. But here, we only have a Terraform file which specifies our desired state, with no mention at all of where to find the source code for our function. Let's fix it.
From the docs, we know we have several ways available to specify in our resource block where to find the source code of our function. Let's do it with a storage bucket.
resource "google_storage_bucket" "source_bucket" {
name = "function-bucket"
location = "us-central1"
}
Now we have a bucket, but we also need a bucket object that stores our source code.
resource "google_storage_bucket_object" "source_code" {
name = "object-name"
bucket = google_storage_bucket.source_bucket.name
source = "path/to/local/file"
}
Note the source field.
According to the docs, we need to use a .zip file to store the source code (as well as other files such as package.json). We can transform our directory into a zip file using a data "archive_file" block:
data "archive_file" "my_function_zip" {
type = "zip"
source_dir = "${path.module}/src"
output_path = "${path.module}/src.zip"
}
path.module is the filesystem path of the module where the expression is placed.
Therefore, now our main.tf looks like this:
variable "google_project_name" {}
provider "google" {
project = "${var.google_project_name}"
region = "us-central1"
}
data "archive_file" "my_function_zip" {
type = "zip"
source_dir = "${path.module}/src"
output_path = "${path.module}/src.zip"
}
resource "google_cloudfunctions_function" "my_function" {
name = "myFunction"
description = "the function we are going to deploy"
runtime = "nodejs16"
trigger_http = true
ingress_settings = "ALLOW_ALL"
source_archive_bucket = google_storage_bucket.function_source_bucket.name
source_archive_object = google_storage_bucket_object.function_source_bucket_object.name
}
resource "google_storage_bucket" "function_source_bucket" {
name = "function-bucket-1234"
location = "us-central1"
}
resource "google_storage_bucket_object" "function_source_bucket_object" {
name = "function-bucket-object"
bucket = google_storage_bucket.function_source_bucket.name
source = data.archive_file.my_function_zip.output_path
}
We can deploy! But... there are still some things missing.
Using the Google SDK we were able to get the URL of our function - since it has an HTTP trigger. It would be good to get this URL right away.
Also, we needed to set IAM policies to let everyone trigger our function. How do we do something similar in Terraform?
We can fix these things by adding two blocks: one for IAM policies and another to display the output - an output block.
In Terraform, an output block is used to define the desired values that should be displayed when Terraform applies changes to infrastructure.
If we run terraform plan right now, we can see some properties that will be known once the infrastructure is created. And https_trigger_url is exactly what we are looking for!
output "function_url_trigger" {
value = google_cloudfunctions_function.my_function.https_trigger_url
}
resource "google_cloudfunctions_function_iam_member" "my_second_fn_iam" {
cloud_function = google_cloudfunctions_function.my_function.name
member = "allUsers"
role = "roles/cloudfunctions.invoker"
}
Now, we can run terraform apply and get, as the output, the URL that triggers our function. And finally, we can trigger it with an HTTP request.
Still feel like you missed something? Take a look at the source code for this tutorial: https://github.com/wrongbyte-lab/tf-gcp-tutorial
Original article sourced at: https://dev.to
Terraform is a declarative language that can act as a blueprint of the infrastructure you're working on.
After having an OpenStack production and home lab for a while, I can definitively say that provisioning a workload and managing it from an Admin and Tenant perspective is important.
Terraform is an open source Infrastructure-as-Code (IaC) software tool used for provisioning networks, servers, cloud platforms, and more. Terraform is a declarative language that can act as a blueprint of the infrastructure you're working on. You can manage it with Git, and it has a strong GitOps use case.
This article covers the basics of managing an OpenStack cluster using Terraform. I recreate the OpenStack Demo project using Terraform.
I use CentOS as a jump host, where I run Terraform. Based on the official documentation, the first step is to add the Hashicorp repository:
$ sudo dnf config-manager \
--add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
Next, install Terraform:
$ sudo dnf install terraform -y
Verify the installation:
$ terraform --version
If you see a version number in return, you have installed Terraform.
In Terraform, you need a provider. A provider is a converter that Terraform calls to convert your .tf files into API calls to the platform you are orchestrating.
There are three types of providers: Official, Partner, and Community.
There is a good Community provider for OpenStack at this link. To use this provider, create a .tf file and call it main.tf.
$ vi main.tf
Add the following content to main.tf:
terraform {
required_version = ">= 0.14.0"
required_providers {
openstack = {
source = "terraform-provider-openstack/openstack"
version = "1.49.0"
}
}
}
provider "openstack" {
user_name = "OS_USERNAME"
tenant_name = "OS_TENANT"
password = "OS_PASSWORD"
auth_url = "OS_AUTH_URL"
region = "OS_REGION"
}
You need to change the OS_USERNAME, OS_TENANT, OS_PASSWORD, OS_AUTH_URL, and OS_REGION variables for it to work.
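As a sketch of an alternative to hard-coding those values, assuming you prefer to pass them in as input variables (the variable names below are made up), the provider block above could be replaced with:
variable "os_username" {}
variable "os_tenant" {}
variable "os_password" {
  sensitive = true
}
variable "os_auth_url" {}
variable "os_region" {}

# Replaces the provider block above; values come from terraform.tfvars
# or TF_VAR_* environment variables instead of being hard-coded.
provider "openstack" {
  user_name   = var.os_username
  tenant_name = var.os_tenant
  password    = var.os_password
  auth_url    = var.os_auth_url
  region      = var.os_region
}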
OpenStack Admin files focus on provisioning external networks, routers, users, images, tenant profiles, and quotas.
This example provisions flavors, a router connected to an external network, a test image, a tenant profile, and a user.
First, create an AdminTF directory for the provisioning resources:
$ mkdir AdminTF
$ cd AdminTF
In the main.tf, add the following:
terraform {
required_version = ">= 0.14.0"
required_providers {
openstack = {
source = "terraform-provider-openstack/openstack"
version = "1.49.0"
}
}
}
provider "openstack" {
user_name = "OS_USERNAME"
tenant_name = "admin"
password = "OS_PASSWORD"
auth_url = "OS_AUTH_URL"
region = "OS_REGION"
}
resource "openstack_compute_flavor_v2" "small-flavor" {
name = "small"
ram = "4096"
vcpus = "1"
disk = "0"
flavor_id = "1"
is_public = "true"
}
resource "openstack_compute_flavor_v2" "medium-flavor" {
name = "medium"
ram = "8192"
vcpus = "2"
disk = "0"
flavor_id = "2"
is_public = "true"
}
resource "openstack_compute_flavor_v2" "large-flavor" {
name = "large"
ram = "16384"
vcpus = "4"
disk = "0"
flavor_id = "3"
is_public = "true"
}
resource "openstack_compute_flavor_v2" "xlarge-flavor" {
name = "xlarge"
ram = "32768"
vcpus = "8"
disk = "0"
flavor_id = "4"
is_public = "true"
}
resource "openstack_networking_network_v2" "external-network" {
name = "external-network"
admin_state_up = "true"
external = "true"
segments {
network_type = "flat"
physical_network = "physnet1"
}
}
resource "openstack_networking_subnet_v2" "external-subnet" {
name = "external-subnet"
network_id = openstack_networking_network_v2.external-network.id
cidr = "10.0.0.0/8"
gateway_ip = "10.0.0.1"
dns_nameservers = ["10.0.0.254", "10.0.0.253"]
allocation_pool {
start = "10.0.0.1"
end = "10.0.254.254"
}
}
resource "openstack_networking_router_v2" "external-router" {
name = "external-router"
admin_state_up = true
external_network_id = openstack_networking_network_v2.external-network.id
}
resource "openstack_images_image_v2" "cirros" {
name = "cirros"
image_source_url = "https://download.cirros-cloud.net/0.6.1/cirros-0.6.1-x86_64-disk.img"
container_format = "bare"
disk_format = "qcow2"
properties = {
key = "value"
}
}
resource "openstack_identity_project_v3" "demo-project" {
name = "Demo"
}
resource "openstack_identity_user_v3" "demo-user" {
name = "demo-user"
default_project_id = openstack_identity_project_v3.demo-project.id
password = "demo"
}
As a Tenant, you usually create VMs. You also create network and security groups for the VMs.
This example uses the user created above by the Admin file.
First, create a TenantTF directory for Tenant-related provisioning:
$ mkdir TenantTF
$ cd TenantTF
In the main.tf, add the following:
terraform {
required_version = ">= 0.14.0"
required_providers {
openstack = {
source = "terraform-provider-openstack/openstack"
version = "1.49.0"
}
}
}
provider "openstack" {
user_name = "demo-user"
tenant_name = "demo"
password = "demo"
auth_url = "OS_AUTH_URL"
region = "OS_REGION"
}
resource "openstack_compute_keypair_v2" "demo-keypair" {
name = "demo-key"
public_key = "ssh-rsa ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ"
}
resource "openstack_networking_network_v2" "demo-network" {
name = "demo-network"
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "demo-subnet" {
network_id = openstack_networking_network_v2.demo-network.id
name = "demo-subnet"
cidr = "192.168.26.0/24"
}
resource "openstack_networking_router_interface_v2" "demo-router-interface" {
router_id = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
subnet_id = openstack_networking_subnet_v2.demo-subnet.id
}
resource "openstack_compute_instance_v2" "demo-instance" {
name = "demo"
image_id = "YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY"
flavor_id = "3"
key_pair = "demo-key"
security_groups = ["default"]
metadata = {
this = "that"
}
network {
name = "demo-network"
}
}
After creating the Terraform files, you need to initialize Terraform.
For Admin:
$ cd AdminTF
$ terraform init
$ terraform fmt
For Tenants:
$ cd TenantTF
$ terraform init
$ terraform fmt
Command explanation:
- terraform init downloads the provider from the registry to use in provisioning this project.
- terraform fmt formats the files for use in repositories.
Next, create a plan for you to see what resources will be created.
For Admin:
$ cd AdminTF
$ terraform validate
$ terraform plan
For Tenants:
$ cd TenantTF
$ terraform validate
$ terraform plan
Command explanation:
- terraform validate validates whether the .tf syntax is correct.
- terraform plan creates a plan file in the cache, where all managed resources can be tracked through creation and destruction.
To deploy the resources, use the terraform apply command. This command applies all resource states in the plan file.
For Admin:
$ cd AdminTF
$ terraform apply
For Tenants:
$ cd TenantTF
$ terraform apply
Previously, I wrote an article on deploying a minimal OpenStack cluster on a Raspberry Pi. You can discover how to have more detailed Terraform and Ansible configurations and implement some CI/CD with GitLab.
Original article source at: https://opensource.com/
This Edureka DevOps Tools Full Course video will help you understand and learn the fundamental concepts of the various tools used in the DevOps lifecycle. You'll learn: What is DevOps? The DevOps lifecycle and DevOps tools, Git, Jenkins, Docker, Kubernetes, Puppet, Ansible, Terraform, Prometheus, Grafana, Selenium, Nagios, Azure DevOps, AWS DevOps, Azure DevOps vs AWS DevOps, DevOps interview questions and answers, and more.
This DevOps training video is ideal for both beginners and professionals who want to master the fundamentals of the top DevOps tools.
#devops #git #jenkins #docker #kubernetes #puppet #ansible #terraform #prometheus #grafana #selenium #nagios #azure #aws
Terraform module, which creates almost all supported AWS Lambda resources as well as taking care of building and packaging of required Lambda dependencies for functions and layers.
This Terraform module is part of the serverless.tf framework, which aims to simplify all operations when working with serverless in Terraform, and it integrates with other serverless.tf modules like HTTP API Gateway (see examples there).
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda1"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../src/lambda-function1"
tags = {
Name = "my-lambda1"
}
}
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "lambda-with-layer"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
publish = true
source_path = "../src/lambda-function1"
store_on_s3 = true
s3_bucket = "my-bucket-id-with-lambda-builds"
layers = [
module.lambda_layer_s3.lambda_layer_arn,
]
environment_variables = {
Serverless = "Terraform"
}
tags = {
Module = "lambda-with-layer"
}
}
module "lambda_layer_s3" {
source = "terraform-aws-modules/lambda/aws"
create_layer = true
layer_name = "lambda-layer-s3"
description = "My amazing lambda layer (deployed from S3)"
compatible_runtimes = ["python3.8"]
source_path = "../src/lambda-layer"
store_on_s3 = true
s3_bucket = "my-bucket-id-with-lambda-builds"
}
module "lambda_function_existing_package_local" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda-existing-package-local"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
create_package = false
local_existing_package = "../existing_package.zip"
}
You may want to manage function code and infrastructure resources (such as IAM permissions, policies, events, etc.) in separate flows (e.g., different repositories, teams, or CI/CD pipelines).
In that case, disable source code tracking to turn off deployments (and rollbacks) through the module by setting ignore_source_code_hash = true and deploying a dummy function.
Once the infrastructure and the dummy function are deployed, you can use an external tool (e.g., the AWS CLI) to update the function's source code, while continuing to use this module via Terraform to manage the infrastructure.
Be aware that changes in the local_existing_package value may trigger a deployment via Terraform.
module "lambda_function_externally_managed_package" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda-externally-managed-package"
description = "My lambda function code is deployed separately"
handler = "index.lambda_handler"
runtime = "python3.8"
create_package = false
local_existing_package = "./lambda_functions/code.zip"
ignore_source_code_hash = true
}
Note that this module does not copy prebuilt packages into an S3 bucket. It can only store packages it builds itself, either locally or in an S3 bucket.
locals {
my_function_source = "../path/to/package.zip"
}
resource "aws_s3_bucket" "builds" {
bucket = "my-builds"
acl = "private"
}
resource "aws_s3_object" "my_function" {
bucket = aws_s3_bucket.builds.id
key = "${filemd5(local.my_function_source)}.zip"
source = local.my_function_source
}
module "lambda_function_existing_package_s3" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda-existing-package-local"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
create_package = false
s3_existing_package = {
bucket = aws_s3_bucket.builds.id
key = aws_s3_object.my_function.id
}
}
module "lambda_function_container_image" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda-existing-package-local"
description = "My awesome lambda function"
create_package = false
image_uri = "132367819851.dkr.ecr.eu-west-1.amazonaws.com/complete-cow:1.0"
package_type = "Image"
}
module "lambda_layer_local" {
source = "terraform-aws-modules/lambda/aws"
create_layer = true
layer_name = "my-layer-local"
description = "My amazing lambda layer (deployed from local)"
compatible_runtimes = ["python3.8"]
source_path = "../fixtures/python3.8-app1"
}
module "lambda_layer_s3" {
source = "terraform-aws-modules/lambda/aws"
create_layer = true
layer_name = "my-layer-s3"
description = "My amazing lambda layer (deployed from S3)"
compatible_runtimes = ["python3.8"]
source_path = "../fixtures/python3.8-app1"
store_on_s3 = true
s3_bucket = "my-bucket-id-with-lambda-builds"
}
Make sure you deploy Lambda@Edge functions into the US East (N. Virginia) region (us-east-1). See Requirements and Restrictions on Lambda Functions.
module "lambda_at_edge" {
source = "terraform-aws-modules/lambda/aws"
lambda_at_edge = true
function_name = "my-lambda-at-edge"
description = "My awesome lambda@edge function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../fixtures/python3.8-app1"
tags = {
Module = "lambda-at-edge"
}
}
module "lambda_function_in_vpc" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda-in-vpc"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../fixtures/python3.8-app1"
vpc_subnet_ids = module.vpc.intra_subnets
vpc_security_group_ids = [module.vpc.default_security_group_id]
attach_network_policy = true
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "my-vpc"
cidr = "10.10.0.0/16"
# Specify at least one of: intra_subnets, private_subnets, or public_subnets
azs = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
intra_subnets = ["10.10.101.0/24", "10.10.102.0/24", "10.10.103.0/24"]
}
There are 6 supported ways to attach IAM policies to the IAM role used by the Lambda Function (a short sketch follows the permissions example below):
policy_json - JSON string or heredoc, when attach_policy_json = true.
policy_jsons - List of JSON strings or heredocs, when attach_policy_jsons = true and number_of_policy_jsons > 0.
policy - ARN of an existing IAM policy, when attach_policy = true.
policies - List of ARNs of existing IAM policies, when attach_policies = true and number_of_policies > 0.
policy_statements - Map of maps to define IAM statements which will be generated as an IAM policy. Requires attach_policy_statements = true. See examples/complete for more information.
assume_role_policy_statements - Map of maps to define IAM statements which will be generated as an IAM policy for assuming the Lambda Function role (trust relationship). See examples/complete for more information.
Lambda Permissions should be specified to allow certain resources to invoke the Lambda Function.
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
# ...omitted for brevity
allowed_triggers = {
APIGatewayAny = {
service = "apigateway"
source_arn = "arn:aws:execute-api:eu-west-1:135367859851:aqnku8akd0/*/*/*"
},
APIGatewayDevPost = {
service = "apigateway"
source_arn = "arn:aws:execute-api:eu-west-1:135367859851:aqnku8akd0/dev/POST/*"
},
OneRule = {
principal = "events.amazonaws.com"
source_arn = "arn:aws:events:eu-west-1:135367859851:rule/RunDaily"
}
}
}
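For the IAM policy options listed above, here is a minimal hedged sketch (the resource ARNs are placeholders, and the policy_statements keys follow the pattern shown in examples/complete):
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  # ... function configuration omitted for brevity

  # Attach an inline JSON policy to the generated IAM role.
  attach_policy_json = true
  policy_json = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject"]
      Resource = "arn:aws:s3:::my-bucket-id-with-lambda-builds/*" # placeholder
    }]
  })

  # Attach generated statements as well.
  attach_policy_statements = true
  policy_statements = {
    sqs = {
      effect    = "Allow"
      actions   = ["sqs:SendMessage"]
      resources = ["arn:aws:sqs:eu-west-1:135367859851:my-queue"] # placeholder
    }
  }
}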
Sometimes you need to create resources conditionally, but Terraform historically did not allow the use of count inside a module block, so the solution this module offers is to specify create arguments.
module "lambda" {
source = "terraform-aws-modules/lambda/aws"
create = false # to disable all resources
create_package = false # to control build package process
create_function = false # to control creation of the Lambda Function and related resources
create_layer = false # to control creation of the Lambda Layer and related resources
create_role = false # to control creation of the IAM role and policies required for Lambda Function
attach_cloudwatch_logs_policy = false
attach_dead_letter_policy = false
attach_network_policy = false
attach_tracing_policy = false
attach_async_event_policy = false
# ... omitted
}
Building and packaging is one of the most complicated parts handled by the module, and normally you don't have to know its internals.
package.py is the Python script that does it. Make sure Python 3.6 or newer is installed. The main functions of the script are to generate a filename for the zip-archive based on the content of the files, verify whether the zip-archive has already been created, and create the zip-archive only when necessary (during apply, not plan).
The hash of a zip-archive created from the same file content is always identical, which prevents unnecessary force-updates of the Lambda resources unless the content changes. If you need different filenames for the same content, you can specify the extra string argument hash_extra.
When calling this module multiple times in one execution to create packages with the same source_path, the zip-archives will be corrupted due to concurrent writes into the same file. There are two solutions: set different values for hash_extra to create different archives, or create the package once outside (using this module) and then pass the local_existing_package argument to create the other Lambda resources.
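For example, a minimal sketch (module names are illustrative) that reuses the same source_path with distinct hash_extra values:
module "lambda_one" {
  source = "terraform-aws-modules/lambda/aws"
  # ... other function arguments omitted
  source_path = "../src/lambda-function1"
  hash_extra  = "lambda-one"
}
module "lambda_two" {
  source = "terraform-aws-modules/lambda/aws"
  # ... other function arguments omitted
  source_path = "../src/lambda-function1"
  hash_extra  = "lambda-two"
}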
Building and packaging has historically been hard to debug (especially with Terraform), so we made an effort to make it easier for the user to see debug info. There are 4 debug levels: DEBUG - to see only what is happening during the planning phase and how the zip file content is filtered when patterns are applied; DEBUG2 - to see more logging output; DEBUG3 - to see all logging values; DUMP_ENV - to see all logging values and environment variables (be careful sharing your env variables as they may contain secrets!).
The user can specify the debug level like this:
export TF_LAMBDA_PACKAGE_LOG_LEVEL=DEBUG2
terraform apply
The user can enable comments in heredoc strings in patterns, which can be helpful in some situations. To do this, set this environment variable:
export TF_LAMBDA_PACKAGE_PATTERN_COMMENTS=true
terraform apply
You can specify source_path in a variety of ways to achieve the desired flexibility when building deployment packages locally or in Docker. You can use absolute or relative paths. If you have placed your Terraform files in subdirectories, note that relative paths are resolved from the directory where terraform plan is run, not from the location of your Terraform file.
Note that, when building locally, files are not copied anywhere from the source directories when making packages; we use fast Python regular expressions to find matching files and directories, which makes packaging very fast and easy to understand.
When source_path is set to a string, the content of that path will be used to create the deployment package as-is:
source_path = "src/function1"
When source_path is set to a list of directories, the content of each will be taken and one archive will be created.
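For example (paths are illustrative):
source_path = [
  "src/function1",
  "src/shared-utils"
]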
This is the most complete way of creating a deployment package from multiple sources with multiple dependencies. This example shows some of the available options (see examples/build-package for more):
source_path = [
"src/main-source",
"src/another-source/index.py",
{
path = "src/function1-dep",
patterns = [
"!.*/.*\\.txt", # Skip all txt files recursively
]
}, {
path = "src/python3.8-app1",
pip_requirements = true,
pip_tmp_dir = "/tmp/dir/location"
prefix_in_zip = "foo/bar1",
}, {
path = "src/python3.8-app2",
pip_requirements = "requirements-large.txt",
patterns = [
"!vendor/colorful-0.5.4.dist-info/RECORD",
"!vendor/colorful-.+.dist-info/.*",
"!vendor/colorful/__pycache__/?.*",
]
}, {
path = "src/nodejs14.x-app1",
npm_requirements = true,
npm_tmp_dir = "/tmp/dir/location"
prefix_in_zip = "foo/bar1",
}, {
path = "src/python3.8-app3",
commands = [
"npm install",
":zip"
],
patterns = [
"!.*/.*\\.txt", # Skip all txt files recursively
"node_modules/.+", # Include all node_modules
],
}, {
path = "src/python3.8-app3",
commands = ["go build"],
patterns = <<END
bin/.*
abc/def/.*
END
}
]
A few notes:
When the runtime is python or nodejs, the build process will automatically build python and nodejs dependencies if a requirements.txt or package.json file is found in the source folder. If you want to customize this behavior, please use the object notation as explained below.
All arguments except path are optional.
patterns - List of Python regexes that filenames should satisfy. The default value is "include everything", which is equal to patterns = [".*"]. This can also be specified as a multiline heredoc string (no comments allowed). Some examples of valid patterns:
!.*/.*\.txt        # Filter all txt files recursively
node_modules/.*    # Include empty dir or with a content if it exists
node_modules/.+    # Include full non empty node_modules dir with its content
node_modules/      # Include node_modules itself without its content
                   # It's also a way to include an empty dir if it exists
node_modules       # Include a file or an existing dir only
!abc/.*            # Filter out everything in an abc folder
abc/def/.*         # Re-include everything in abc/def sub folder
!abc/def/hgk/.*    # Filter out again in abc/def/hgk sub folder
commands - List of commands to run. If specified, this argument overrides pip_requirements and npm_requirements.
:zip [source] [destination] is a special command which zips the content of the current working directory (first argument) and places it inside of path (second argument).
pip_requirements - Controls whether to execute pip install. Set to false to disable this feature, true to run pip install with the requirements.txt found in path, or set it to another filename which you want to use instead.
pip_tmp_dir - Sets the base directory for the temporary directory used by pip installs. Can be useful for Docker-in-Docker builds.
npm_requirements - Controls whether to execute npm install. Set to false to disable this feature, true to run npm install with the package.json found in path, or set it to another filename which you want to use instead.
npm_tmp_dir - Sets the base directory for the temporary directory used by npm installs. Can be useful for Docker-in-Docker builds.
prefix_in_zip - If specified, will be used as a prefix inside the zip-archive. By default, everything installs into the root of the zip-archive.
If your Lambda Function or Layer uses some dependencies, you can build them in Docker and have them included in the deployment package. Here is how you can do it:
build_in_docker = true
docker_file = "src/python3.8-app1/docker/Dockerfile"
docker_build_root = "src/python3.8-app1/docker"
docker_image = "public.ecr.aws/sam/build-python3.8"
runtime = "python3.8" # Setting runtime is required when building package in Docker and Lambda Layer resource.
Using this module, you can install dependencies from private hosts. To do this, you need to forward the SSH agent:
docker_with_ssh_agent = true
Note that by default, the docker_image
used comes from the registry public.ecr.aws/sam/
, and will be based on the runtime
that you specify. In other words, if you specify a runtime of python3.8
and do not specify docker_image
, then the docker_image
will resolve to public.ecr.aws/sam/build-python3.8
. This ensures that by default the runtime
is available in the docker container.
If you override docker_image
, be sure to keep the image in sync with your runtime
. During the plan phase, when using docker, there is no check that the runtime
is available to build the package. That means that if you use an image that does not have the runtime, the plan will still succeed, but then the apply will fail.
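As a small sketch of keeping the two in sync (values mirror the Docker example above):
build_in_docker = true
runtime         = "python3.8"
# Override the default image; it must provide the runtime declared above.
docker_image    = "public.ecr.aws/sam/build-python3.8"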
To add flexibility when building in docker, you can pass any number of additional options that you require (see Docker run reference for available options):
docker_additional_options = [
"-e", "MY_ENV_VAR='My environment variable value'",
"-v", "/local:/docker-vol",
]
To override the docker entrypoint when building in docker, set docker_entrypoint
:
docker_entrypoint = "/entrypoint/entrypoint.sh"
The entrypoint must map to a path within your container, so you need to either build your own image that contains the entrypoint or map it to a file on the host by mounting a volume (see Passing additional Docker options).
By default, this module creates a deployment package and uses it to create or update a Lambda Function or Lambda Layer.
Sometimes, you may want to split the build of the deployment package (e.g., to compile and install dependencies) and the deployment of the package into two separate steps.
When creating the archive locally outside of this module, you need to set create_package = false and then the argument local_existing_package = "existing_package.zip". Alternatively, you may prefer to keep your deployment packages in an S3 bucket and provide a reference to them like this:
create_package = false
s3_existing_package = {
bucket = "my-bucket-with-lambda-builds"
key = "existing_package.zip"
}
This can be implemented in two steps: download the file locally using curl, and pass the path to the deployment package as the local_existing_package argument.
locals {
package_url = "https://raw.githubusercontent.com/terraform-aws-modules/terraform-aws-lambda/master/examples/fixtures/python3.8-zip/existing_package.zip"
downloaded = "downloaded_package_${md5(local.package_url)}.zip"
}
resource "null_resource" "download_package" {
triggers = {
downloaded = local.downloaded
}
provisioner "local-exec" {
command = "curl -L -o ${local.downloaded} ${local.package_url}"
}
}
data "null_data_source" "downloaded_package" {
inputs = {
id = null_resource.download_package.id
filename = local.downloaded
}
}
module "lambda_function_existing_package_from_remote_url" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda-existing-package-local"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
create_package = false
local_existing_package = data.null_data_source.downloaded_package.outputs["filename"]
}
AWS SAM CLI is an open source tool that helps developers initialize, build, test, and deploy serverless applications. Currently, the SAM CLI tool only supports CloudFormation applications, but the SAM CLI team is working on a feature to extend the testing capabilities to support Terraform applications (check this GitHub issue to stay updated about upcoming releases and the features included in each release of the Terraform support feature).
SAM CLI provides two ways of testing: local testing and testing on-cloud (Accelerate).
Using SAM CLI, you can invoke the Lambda functions defined in the Terraform application locally using the sam local invoke command, providing the function's Terraform address or function name, and setting the hook-name to terraform to tell SAM CLI that the underlying project is a Terraform application.
You can execute the sam local invoke command from your Terraform application root directory as follows:
sam local invoke --hook-name terraform module.hello_world_function.aws_lambda_function.this[0]
You can also pass an event to your lambda function, or overwrite its environment variables. Check here for more information.
You can also invoke your lambda function in debugging mode, and step-through your lambda function source code locally in your preferred editor. Check here for more information.
You can use AWS SAM CLI to quickly test your application on your AWS development account. Using SAM Accelerate, you will be able to develop your Lambda functions locally, and once you save your updates, SAM CLI will update your development account with the updated Lambda functions. So you can test in the cloud, and if there is any bug, you can quickly update the code and SAM CLI will take care of pushing it to the cloud. Check here for more information about SAM Accelerate.
You can execute the sam sync command from your Terraform application root directory as follows:
sam sync --hook-name terraform --watch
Typically, the Lambda Function resource updates when the source code changes. If publish = true is specified, a new Lambda Function version will also be created.
A published Lambda Function can be invoked using either a version number or $LATEST. This is the simplest way of deployment, which does not require any additional tool or service.
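As a hedged sketch using only arguments and outputs documented below, publishing versions and exposing the qualified ARN might look like this:
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"
  # ... function configuration omitted
  publish = true
}

# The qualified ARN points at the latest published version of the function.
output "lambda_qualified_arn" {
  value = module.lambda_function.lambda_function_qualified_arn
}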
In order to do controlled deployments (rolling, canary, rollbacks) of Lambda Functions, we need to use Lambda Function aliases.
In simple terms, a Lambda alias is like a pointer to either one version of a Lambda Function (when the deployment is complete), or to two weighted versions of a Lambda Function (during a rolling or canary deployment).
One Lambda Function can be used in multiple aliases. Using aliases gives fine-grained control over which version is deployed when you have multiple environments.
There is an alias module, which simplifies working with aliases (create, manage configurations, updates, etc.). See examples/alias for various use-cases of how aliases can be configured and used.
There is a deploy module, which creates the resources required to do deployments using AWS CodeDeploy. It also creates the deployment and waits for completion. See examples/deploy for a complete end-to-end build/update/deploy process.
Terraform Cloud, Terraform Enterprise, and many other SaaS offerings for running Terraform do not have Python pre-installed on the workers. You will need to provide an alternative Docker image with Python installed to be able to use this module there.
Q1: Why is the deployment package not recreated every time I change something? Or why is the deployment package recreated every time even though the content has not changed?
Answer: There can be several reasons, related to concurrent executions or to the content hash. Sometimes a change happened inside a dependency which is not used in calculating the content hash. Or multiple packages are being created at the same time from the same sources. You can force recreation by setting hash_extra to distinct values.
Q2: How can I force the deployment package to be recreated?
Answer: Delete the existing zip-archive from the builds directory, or make a change in your source code. If there is no zip-archive for the current content hash, it will be recreated during terraform apply.
Q3: What does "null_resource.archive[0] must be replaced" mean?
Answer: This probably means that a zip-archive has been deployed but is currently absent locally, so it has to be recreated locally. When you run into this issue during a CI/CD process (where the workspace is clean) or from multiple workspaces, you can set the environment variable TF_RECREATE_MISSING_LAMBDA_PACKAGE=false, or pass recreate_missing_package = false as a parameter to the module, and run terraform apply.
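A minimal sketch of passing that parameter to the module:
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"
  # ... function configuration omitted
  recreate_missing_package = false
}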
Q4: What does this error mean - "We currently do not support adding policies for $LATEST."?
Answer: When the Lambda function is created with publish = true, the version is automatically incremented and a qualified identifier (version number) becomes available and will be used when setting Lambda permissions.
When publish = false (the default), only the unqualified identifier ($LATEST) is available, which leads to the error.
The solution is to either disable the creation of Lambda permissions for the current version by setting create_current_version_allowed_triggers = false, or to enable publishing of the Lambda function (publish = true).
Examples by the users of this module
No modules.
Name | Description | Type | Default | Required |
---|---|---|---|---|
allowed_triggers | Map of allowed triggers to create Lambda permissions | map(any) | {} | no |
architectures | Instruction set architecture for your Lambda function. Valid values are ["x86_64"] and ["arm64"]. | list(string) | null | no |
artifacts_dir | Directory name where artifacts should be stored | string | "builds" | no |
assume_role_policy_statements | Map of dynamic policy statements for assuming Lambda Function role (trust relationship) | any | {} | no |
attach_async_event_policy | Controls whether async event policy should be added to IAM role for Lambda Function | bool | false | no |
attach_cloudwatch_logs_policy | Controls whether CloudWatch Logs policy should be added to IAM role for Lambda Function | bool | true | no |
attach_dead_letter_policy | Controls whether SNS/SQS dead letter notification policy should be added to IAM role for Lambda Function | bool | false | no |
attach_network_policy | Controls whether VPC/network policy should be added to IAM role for Lambda Function | bool | false | no |
attach_policies | Controls whether list of policies should be added to IAM role for Lambda Function | bool | false | no |
attach_policy | Controls whether policy should be added to IAM role for Lambda Function | bool | false | no |
attach_policy_json | Controls whether policy_json should be added to IAM role for Lambda Function | bool | false | no |
attach_policy_jsons | Controls whether policy_jsons should be added to IAM role for Lambda Function | bool | false | no |
attach_policy_statements | Controls whether policy_statements should be added to IAM role for Lambda Function | bool | false | no |
attach_tracing_policy | Controls whether X-Ray tracing policy should be added to IAM role for Lambda Function | bool | false | no |
authorization_type | The type of authentication that the Lambda Function URL uses. Set to 'AWS_IAM' to restrict access to authenticated IAM users only. Set to 'NONE' to bypass IAM authentication and create a public endpoint. | string | "NONE" | no |
build_in_docker | Whether to build dependencies in Docker | bool | false | no |
cloudwatch_logs_kms_key_id | The ARN of the KMS Key to use when encrypting log data. | string | null | no |
cloudwatch_logs_retention_in_days | Specifies the number of days you want to retain log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | number | null | no |
cloudwatch_logs_tags | A map of tags to assign to the resource. | map(string) | {} | no |
code_signing_config_arn | Amazon Resource Name (ARN) for a Code Signing Configuration | string | null | no |
compatible_architectures | A list of Architectures Lambda layer is compatible with. Currently x86_64 and arm64 can be specified. | list(string) | null | no |
compatible_runtimes | A list of Runtimes this layer is compatible with. Up to 5 runtimes can be specified. | list(string) | [] | no |
cors | CORS settings to be used by the Lambda Function URL | any | {} | no |
create | Controls whether resources should be created | bool | true | no |
create_async_event_config | Controls whether async event configuration for Lambda Function/Alias should be created | bool | false | no |
create_current_version_allowed_triggers | Whether to allow triggers on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources) | bool | true | no |
create_current_version_async_event_config | Whether to allow async event configuration on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources) | bool | true | no |
create_function | Controls whether Lambda Function resource should be created | bool | true | no |
create_lambda_function_url | Controls whether the Lambda Function URL resource should be created | bool | false | no |
create_layer | Controls whether Lambda Layer resource should be created | bool | false | no |
create_package | Controls whether Lambda package should be created | bool | true | no |
create_role | Controls whether IAM role for Lambda Function should be created | bool | true | no |
create_unqualified_alias_allowed_triggers | Whether to allow triggers on unqualified alias pointing to $LATEST version | bool | true | no |
create_unqualified_alias_async_event_config | Whether to allow async event configuration on unqualified alias pointing to $LATEST version | bool | true | no |
create_unqualified_alias_lambda_function_url | Whether to use unqualified alias pointing to $LATEST version in Lambda Function URL | bool | true | no |
dead_letter_target_arn | The ARN of an SNS topic or SQS queue to notify when an invocation fails. | string | null | no |
description | Description of your Lambda Function (or Layer) | string | "" | no |
destination_on_failure | Amazon Resource Name (ARN) of the destination resource for failed asynchronous invocations | string | null | no |
destination_on_success | Amazon Resource Name (ARN) of the destination resource for successful asynchronous invocations | string | null | no |
docker_additional_options | Additional options to pass to the docker run command (e.g. to set environment variables, volumes, etc.) | list(string) | [] | no |
docker_build_root | Root dir where to build in Docker | string | "" | no |
docker_entrypoint | Path to the Docker entrypoint to use | string | null | no |
docker_file | Path to a Dockerfile when building in Docker | string | "" | no |
docker_image | Docker image to use for the build | string | "" | no |
docker_pip_cache | Whether to mount a shared pip cache folder into docker environment or not | any | null | no |
docker_with_ssh_agent | Whether to pass SSH_AUTH_SOCK into docker environment or not | bool | false | no |
environment_variables | A map that defines environment variables for the Lambda Function. | map(string) | {} | no |
ephemeral_storage_size | Amount of ephemeral storage (/tmp) in MB your Lambda Function can use at runtime. Valid value between 512 MB to 10,240 MB (10 GB). | number | 512 | no |
event_source_mapping | Map of event source mapping | any | {} | no |
file_system_arn | The Amazon Resource Name (ARN) of the Amazon EFS Access Point that provides access to the file system. | string | null | no |
file_system_local_mount_path | The path where the function can access the file system, starting with /mnt/. | string | null | no |
function_name | A unique name for your Lambda Function | string | "" | no |
handler | Lambda Function entrypoint in your code | string | "" | no |
hash_extra | The string to add into hashing function. Useful when building same source path for different functions. | string | "" | no |
ignore_source_code_hash | Whether to ignore changes to the function's source code hash. Set to true if you manage infrastructure and code deployments separately. | bool | false | no |
image_config_command | The CMD for the docker image | list(string) | [] | no |
image_config_entry_point | The ENTRYPOINT for the docker image | list(string) | [] | no |
image_config_working_directory | The working directory for the docker image | string | null | no |
image_uri | The ECR image URI containing the function's deployment package. | string | null | no |
kms_key_arn | The ARN of KMS key to use by your Lambda Function | string | null | no |
lambda_at_edge | Set this to true if using Lambda@Edge, to enable publishing, limit the timeout, and allow edgelambda.amazonaws.com to invoke the function | bool | false | no |
lambda_role | IAM role ARN attached to the Lambda Function. This governs both who / what can invoke your Lambda Function, as well as what resources our Lambda Function has access to. See Lambda Permission Model for more details. | string | "" | no |
layer_name | Name of Lambda Layer to create | string | "" | no |
layer_skip_destroy | Whether to retain the old version of a previously deployed Lambda Layer. | bool | false | no |
layers | List of Lambda Layer Version ARNs (maximum of 5) to attach to your Lambda Function. | list(string) | null | no |
license_info | License info for your Lambda Layer. Eg, MIT or full url of a license. | string | "" | no |
local_existing_package | The absolute path to an existing zip-file to use | string | null | no |
maximum_event_age_in_seconds | Maximum age of a request that Lambda sends to a function for processing in seconds. Valid values between 60 and 21600. | number | null | no |
maximum_retry_attempts | Maximum number of times to retry when the function returns an error. Valid values between 0 and 2. Defaults to 2. | number | null | no |
memory_size | Amount of memory in MB your Lambda Function can use at runtime. Valid value between 128 MB to 10,240 MB (10 GB), in 64 MB increments. | number | 128 | no |
number_of_policies | Number of policies to attach to IAM role for Lambda Function | number | 0 | no |
number_of_policy_jsons | Number of policies JSON to attach to IAM role for Lambda Function | number | 0 | no |
package_type | The Lambda deployment package type. Valid options: Zip or Image | string | "Zip" | no |
policies | List of policy statements ARN to attach to Lambda Function role | list(string) | [] | no |
policy | An additional policy document ARN to attach to the Lambda Function role | string | null | no |
policy_json | An additional policy document as JSON to attach to the Lambda Function role | string | null | no |
policy_jsons | List of additional policy documents as JSON to attach to Lambda Function role | list(string) | [] | no |
policy_name | IAM policy name. It overrides the default value, which is the same as role_name | string | null | no |
policy_path | Path of policies that should be added to IAM role for Lambda Function | string | null | no |
policy_statements | Map of dynamic policy statements to attach to Lambda Function role | any | {} | no |
provisioned_concurrent_executions | Amount of capacity to allocate. Set to 1 or greater to enable, or set to 0 to disable provisioned concurrency. | number | -1 | no |
publish | Whether to publish creation/change as new Lambda Function Version. | bool | false | no |
putin_khuylo | Do you agree that Putin doesn't respect Ukrainian sovereignty and territorial integrity? More info: https://en.wikipedia.org/wiki/Putin_khuylo! | bool | true | no |
recreate_missing_package | Whether to recreate missing Lambda package if it is missing locally or not | bool | true | no |
reserved_concurrent_executions | The amount of reserved concurrent executions for this Lambda Function. A value of 0 disables Lambda Function from being triggered and -1 removes any concurrency limitations. Defaults to Unreserved Concurrency Limits -1. | number | -1 | no |
role_description | Description of IAM role to use for Lambda Function | string | null | no |
role_force_detach_policies | Specifies to force detaching any policies the IAM role has before destroying it. | bool | true | no |
role_name | Name of IAM role to use for Lambda Function | string | null | no |
role_path | Path of IAM role to use for Lambda Function | string | null | no |
role_permissions_boundary | The ARN of the policy that is used to set the permissions boundary for the IAM role used by Lambda Function | string | null | no |
role_tags | A map of tags to assign to IAM role | map(string) | {} | no |
runtime | Lambda Function runtime | string | "" | no |
s3_acl | The canned ACL to apply. Valid values are private, public-read, public-read-write, aws-exec-read, authenticated-read, bucket-owner-read, and bucket-owner-full-control. Defaults to private. | string | "private" | no |
s3_bucket | S3 bucket to store artifacts | string | null | no |
s3_existing_package | The S3 bucket object with keys bucket, key, version pointing to an existing zip-file to use | map(string) | null | no |
s3_object_storage_class | Specifies the desired Storage Class for the artifact uploaded to S3. Can be either STANDARD, REDUCED_REDUNDANCY, ONEZONE_IA, INTELLIGENT_TIERING, or STANDARD_IA. | string | "ONEZONE_IA" | no |
s3_object_tags | A map of tags to assign to S3 bucket object. | map(string) | {} | no |
s3_object_tags_only | Set to true to not merge tags with s3_object_tags. Useful to avoid breaching S3 Object 10 tag limit. | bool | false | no |
s3_prefix | Directory name where artifacts should be stored in the S3 bucket. If unset, the path from artifacts_dir is used | string | null | no |
s3_server_side_encryption | Specifies server-side encryption of the object in S3. Valid values are "AES256" and "aws:kms". | string | null | no |
source_path | The absolute path to a local file or directory containing your Lambda source code | any | null | no |
store_on_s3 | Whether to store produced artifacts on S3 or locally. | bool | false | no |
tags | A map of tags to assign to resources. | map(string) | {} | no |
timeout | The amount of time your Lambda Function has to run in seconds. | number | 3 | no |
tracing_mode | Tracing mode of the Lambda Function. Valid value can be either PassThrough or Active. | string | null | no |
trusted_entities | List of additional trusted entities for assuming Lambda Function role (trust relationship) | any | [] | no |
use_existing_cloudwatch_log_group | Whether to use an existing CloudWatch log group or create new | bool | false | no |
vpc_security_group_ids | List of security group ids when Lambda Function should run in the VPC. | list(string) | null | no |
vpc_subnet_ids | List of subnet ids when Lambda Function should run in the VPC. Usually private or intra subnets. | list(string) | null | no |
Name | Description |
---|---|
lambda_cloudwatch_log_group_arn | The ARN of the Cloudwatch Log Group |
lambda_cloudwatch_log_group_name | The name of the Cloudwatch Log Group |
lambda_event_source_mapping_function_arn | The ARN of the Lambda function the event source mapping is sending events to |
lambda_event_source_mapping_state | The state of the event source mapping |
lambda_event_source_mapping_state_transition_reason | The reason the event source mapping is in its current state |
lambda_event_source_mapping_uuid | The UUID of the created event source mapping |
lambda_function_arn | The ARN of the Lambda Function |
lambda_function_arn_static | The static ARN of the Lambda Function. Use this to avoid cycle errors between resources (e.g., Step Functions) |
lambda_function_invoke_arn | The Invoke ARN of the Lambda Function |
lambda_function_kms_key_arn | The ARN for the KMS encryption key of Lambda Function |
lambda_function_last_modified | The date Lambda Function resource was last modified |
lambda_function_name | The name of the Lambda Function |
lambda_function_qualified_arn | The ARN identifying your Lambda Function Version |
lambda_function_signing_job_arn | ARN of the signing job |
lambda_function_signing_profile_version_arn | ARN of the signing profile version |
lambda_function_source_code_hash | Base64-encoded representation of raw SHA-256 sum of the zip file |
lambda_function_source_code_size | The size in bytes of the function .zip file |
lambda_function_url | The URL of the Lambda Function URL |
lambda_function_url_id | The Lambda Function URL generated id |
lambda_function_version | Latest published version of Lambda Function |
lambda_layer_arn | The ARN of the Lambda Layer with version |
lambda_layer_created_date | The date Lambda Layer resource was created |
lambda_layer_layer_arn | The ARN of the Lambda Layer without version |
lambda_layer_source_code_size | The size in bytes of the Lambda Layer .zip file |
lambda_layer_version | The Lambda Layer version |
lambda_role_arn | The ARN of the IAM role created for the Lambda Function |
lambda_role_name | The name of the IAM role created for the Lambda Function |
lambda_role_unique_id | The unique id of the IAM role created for the Lambda Function |
local_filename | The filename of zip archive deployed (if deployment was from local) |
s3_object | The map with S3 object data of zip archive deployed (if deployment was from S3) |
During development that involves modifying Python files, use tox to run unit tests:
tox
This will try to run the unit tests with each supported Python version, reporting errors for Python versions which are not installed locally.
If you only want to test against your main python version:
tox -e py
You can also pass additional positional arguments to pytest, which is used to run the tests, e.g. to make the output verbose:
tox -e py -- -vvv
Author: Terraform-aws-modules
Source Code: https://github.com/terraform-aws-modules/terraform-aws-lambda
License: Apache-2.0 license
1672369020
This is a little Go app which generates a dynamic Ansible inventory from a Terraform state file. It allows one to spawn a bunch of instances with Terraform, then (re-)provision them with Ansible.
The following providers are supported:
It's very simple to add support for new providers. See pull requests with the provider label for examples.
Help Wanted 🙋
This library is stable, but I've been neglecting it somewhat on account of no longer using Ansible at work. Please drop me a line if you'd be interested in helping to maintain this tool.
Installation
On OSX, install it with Homebrew:
brew install terraform-inventory
Alternatively, you can download a release suitable for your platform and unzip it. Make sure the terraform-inventory
binary is executable, and you're ready to go.
If you are using remote state (or if your state file happens to be named terraform.tfstate), cd to the directory containing it and run:
ansible-playbook --inventory-file=/path/to/terraform-inventory deploy/playbook.yml
This will provide the resource names and IP addresses of any instances found in the state file to Ansible, which can then be used as hosts patterns in your playbooks. For example, given the following Terraform config:
resource "digitalocean_droplet" "my_web_server" {
image = "centos-7-0-x64"
name = "web-1"
region = "nyc1"
size = "512mb"
}
The corresponding playbook might look like:
- hosts: my_web_server
tasks:
- yum: name=cowsay
- command: cowsay hello, world!
Note that the instance was identified by its resource name from the Terraform config, not its instance name from the provider. On AWS, resources are also grouped by their tags. For example:
resource "aws_instance" "my_web_server" {
instance_type = "t2.micro"
ami = "ami-96a818fe"
tags = {
Role = "web"
Env = "dev"
}
}
resource "aws_instance" "my_worker" {
instance_type = "t2.micro"
ami = "ami-96a818fe"
tags = {
Role = "worker"
Env = "dev"
}
}
Can be provisioned separately with:
- hosts: role_web
tasks:
- command: cowsay this is a web server!
- hosts: role_worker
tasks:
- command: cowsay this is a worker server!
- hosts: env_dev
tasks:
- command: cowsay this runs on all dev servers!
Ansible doesn't seem to support calling a dynamic inventory script with params, so if you need to specify the location of your state file or terraform directory, set the TF_STATE
environment variable before running ansible-playbook
, like:
TF_STATE=deploy/terraform.tfstate ansible-playbook --inventory-file=/path/to/terraform-inventory deploy/playbook.yml
or
TF_STATE=../terraform ansible-playbook --inventory-file=/path/to/terraform-inventory deploy/playbook.yml
If TF_STATE is a file, it parses the file as JSON; if TF_STATE is a directory, it runs terraform state pull inside the directory, which supports both local and remote Terraform state.
It looks for state config in this order:
TF_STATE: an environment variable pointing to either a state file or a Terraform project
TI_TFSTATE: another environment variable, similar to TF_STATE
terraform.tfstate: it looks for the state file in the current directory
.: lastly, it assumes you are at the root of a Terraform project
Alternately, if you need to do something fancier (like downloading your state file from S3 before running), you might wrap this tool with a shell script and call that instead. Something like:
#!/bin/bash
/path/to/terraform-inventory $@ deploy/terraform.tfstate
Then run Ansible with the script as an inventory:
ansible-playbook --inventory-file=bin/inventory deploy/playbook.yml
This tool returns the public IP of the host by default. If you require the private IP of the instance to run Ansible, set the TF_KEY_NAME
environment variable to private_ip
before running the playbook, like:
TF_KEY_NAME=private_ip ansible-playbook --inventory-file=/path/to/terraform-inventory deploy/playbook.yml
By default, the ip address is the ansible inventory name. The TF_HOSTNAME_KEY_NAME
environment variable allows you to overwrite the source of the ansible inventory name.
TF_HOSTNAME_KEY_NAME=name ansible-playbook --inventory-file=/path/to/terraform-inventory deploy/playbook.yml
It's just a Go app, so the usual:
go get github.com/adammck/terraform-inventory
To test against an example statefile, run:
terraform-inventory --list fixtures/example.tfstate
terraform-inventory --host=52.7.58.202 fixtures/example.tfstate
To update the fixtures, populate fixtures/secrets.tfvars
with your DO and AWS account details, and run fixtures/update
. To run a tiny Ansible playbook against the example resources, run:
TF_STATE=fixtures/example.tfstate ansible-playbook --inventory-file=/path/to/terraform-inventory fixtures/playbook.yml
You almost certainly don't need to do any of this. Use the tests instead.
Development of 14, 16, and 22 was generously sponsored by Transloadit.
Author: Adammck
Source Code: https://github.com/adammck/terraform-inventory
License: MIT license
1672315516
This project utilizes Infrastructure as Code and GitOps to automate provisioning, operating, and updating self-hosted services in my homelab. It can be used as a highly customizable framework to build your own homelab.
What is a homelab?
Homelab is a laboratory at home where you can self-host, experiment with new technologies, practice for certifications, and so on. For more information about homelab in general, see the r/homelab introduction.
Project status: ALPHA
This project is still in the experimental stage, and I don't use anything critical on it. Expect breaking changes that may require a complete redeployment. A proper upgrade path is planned for the stable release. More information can be found in the roadmap below.
Hardware:
PC-MK26ECZDR (Japanese version of the ThinkCentre M700): Intel Core i5-6600T @ 2.70GHz, 16GB RAM, 128GB storage
TL-SG108 switch: 8 ports, 1000Mbps
Some demo videos and screenshots are shown here. They can't capture all the project's features, but they are sufficient to get a concept of it.
Demos:
Deploy with a single command (after updating the configuration files)
PXE boot
Homepage with Ingress discovery powered by Hajimari
Monitoring dashboard powered by Grafana
Git server powered by Gitea
Matrix chat server
Continuous integration with Tekton
Continuous deployment with ArgoCD
Cluster management using Lens
Secret management with Vault
Name | Description |
---|---|
Ansible | Automate bare metal provisioning and configuration |
ArgoCD | GitOps tool built to deploy applications to Kubernetes |
cert-manager | Cloud native certificate management |
Cloudflare | DNS and Tunnel |
Docker | Ephemeral PXE server and convenient tools container |
ExternalDNS | Synchronizes exposed Kubernetes Services and Ingresses with DNS providers |
Fedora Server | Base OS for Kubernetes nodes |
Gitea | Self-hosted Git service |
Grafana | Operational dashboards |
Helm | The package manager for Kubernetes |
K3s | Lightweight distribution of Kubernetes |
Kubernetes | Container-orchestration system, the backbone of this project |
Loki | Log aggregation system |
Longhorn | Cloud native distributed block storage for Kubernetes |
MetalLB | Bare metal load-balancer for Kubernetes |
NGINX | Kubernetes Ingress Controller |
Prometheus | Systems monitoring and alerting toolkit |
Renovate | Automatically update dependencies |
Tekton | Cloud native solution for building CI/CD systems |
Trow | Private container registry |
Vault | Secrets and encryption management system |
ZeroTier | VPN without port forwarding |
See roadmap and open issues for a list of proposed features and known issues.
Any contributions you make are greatly appreciated.
Please see contributing guide for more information.
Copyright © 2020 - 2022 Khue Doan
Distributed under the GPLv3 License. See license page or LICENSE.md
file for more information.
Here is a list of the contributors who have helped to improve this project. Big shout-out to them!
If you feel you're missing from this list, feel free to add yourself in a PR.
Author: Khuedoan
Source Code: https://github.com/khuedoan/homelab
License: GPL-3.0 license
1671371940
In this Terraform article, we will learn how to deploy RStudio (Posit) Workbench to AWS using Terraform. If you're searching for Cloud services, it's safe to assume you know about Amazon Web Services (AWS). The AWS cloud platform is arguably the most popular cloud provider, contending its place among the likes of Azure, Google, and IBM. AWS offers a large set of cloud-based products including databases, computing, developer tools, and enterprise applications. With the right platform and the right tools, you can improve operational efficiency, minimize risk, and quickly scale projects. But not all platforms are made equal and, depending on the needs of the project and the programs used by your team, you might find other platforms a more viable solution. However, in this tutorial, we'll deploy RStudio Workbench to AWS by using Terraform, an infrastructure-as-code tool that can be used to model cloud structure via code.
RStudio Workbench is a powerful tool for creating data science insights. Whether you work with R or Python, Workbench makes developer’s life easier from team collaboration on centralized servers and server access control to authentication and load balancing. You can learn about the many benefits of Workbench over open-source RStudio Server by reading the Workbench documentation (previously RStudio Server Pro).
Continue reading to use Terraform to deploy RStudio Workbench to AWS. And discover how you can close the gap between your current DevOps and what’s possible through digital transformation.
Note: At the time of writing this article, Posit PBC was RStudio PBC. We use RStudio and Posit interchangeably in this text (e.g. RStudio Workbench == Posit Workbench).
Packer is a free and open-source tool for creating golden images for multiple platforms from a single source configuration.
Before you can run packer to build images, you’ll need a few pre-requisites.
The following environment variables should be set before running packer:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
We suggest using aws-vault
for managing AWS profiles and setting these variables automatically.
Store AWS credentials for the appsilon
profile:
$ aws-vault add appsilon
Enter Access Key Id: ABDCDEFDASDASF
Enter Secret Key: %%%
Execute a command (using temporary credentials):
$ aws-vault exec appsilon -- aws s3 ls
bucket_1
bucket_2
Make sure git is installed and then clone the repository with the Packer configuration files for building RStudio Workbench.
$ git clone https://github.com/Appsilon/packer-rstudio-workbench.git
$ cd packer-rstudio-workbench
Fetch Ansible roles defined in requirements.yml
by running the following command:
$ ansible-galaxy install -r requirements.yml --roles-path roles
Next, run the command below to validate AMI:
$ packer validate ami.pkr.hcl
You should see no output if files have no issues.
The table below describes the purpose of the variables used by the ami.pkr.hcl template.
Variable | Purpose |
---|---|
aws_region | controls where the destination AMI will be stored once built. |
r_version | Version of R to install on AMI. |
rstudio_version | Version of RStudio Workbench to install on AMI. |
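In the template these are ordinary HCL input variables; a minimal declaration sketch (the real ami.pkr.hcl may also define defaults and descriptions) looks like this:
variable "aws_region" {
  type = string # AWS region where the destination AMI will be stored
}
variable "r_version" {
  type = string # version of R to install on the AMI
}
variable "rstudio_version" {
  type = string # version of RStudio Workbench to install on the AMI
}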
Some configuration blocks used in the template file and their purpose:
source is used to define the builder Packer will use. In our case, it is the amazon-ebs builder, which is able to create Amazon AMIs backed by EBS volumes for use in EC2. More information: Amazon EBS – Builders | Packer by HashiCorp.
source_ami_filter defines which AMI to use as a base image for our RStudio Workbench image – ubuntu/images/*ubuntu-focal-20.04-amd64-server-*. Beware that the owners filter is set to the official Canonical Group Limited supplier, which is the company behind the Ubuntu base AMIs. This way we can ensure the proper image is being used.
The provisioner stanza is used to install and configure third-party software on the machine image after booting. Provisioners prepare the system for use, so in our case we install ansible and some python dependencies first, and next we execute ansible-playbook to install and configure R and RStudio Workbench on the system. By supplying var.r_version and var.rstudio_version (default values defined in ./example.auto.pkvars.hcl) as extra arguments, we can control which versions of the corresponding components will be installed.
Finally, authenticate with aws-vault and run packer build:
aws-vault exec appsilon -- packer build ami.pkr.hcl
It will take around 15 minutes for the image to be generated. Once the process is completed, you’ll see the image under Services -> EC2 -> AMIs
.
The AMI ID will be output at the end.
https://asciinema.org/a/431363
Terraform is an infrastructure as code (IaC) tool that allows you to build, change, and version infrastructure safely and efficiently. You can run Terraform on several operating systems including macOS, Linux, and Windows. To see the full list explore the Terraform download documentation.
Before you can run Terraform to deploy RStudio Workbench to AWS, you'll need a few prerequisites in place (at minimum, AWS credentials and the RStudio Workbench AMI built with Packer above).
The set of files used to describe infrastructure in Terraform is known as a Terraform configuration. You will write your first configuration to create a single RStudio Workbench instance, but prior to that you will have to create supporting resources such as a VPC and security groups.
Each Terraform configuration must be in its own working directory. Create a directory for your configuration first.
$ mkdir rstudio-workbench-infrastructure
Change into the directory.
$ cd rstudio-workbench-infrastructure
Create a file to define your initial Terraform configuration:
touch terraform.tf
Open terraform.tf
in your text editor, paste in the configuration below, and save the file.
terraform {
required_version = ">= 1.0.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.53.0"
}
}
}
The terraform {}
block contains Terraform settings, including the required providers Terraform will use to provision your infrastructure. For each provider, the source attribute defines an optional hostname, a namespace, and the provider type. Terraform installs providers from the Terraform Registry by default. In this example configuration, the AWS provider’s source is defined as hashicorp/aws
, which is shorthand for registry.terraform.io/hashicorp/aws
(Build Infrastructure | Terraform – HashiCorp Learn). required_version
specifies a minimum required version of Terraform to be installed on your machine.
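Alongside the terraform {} block, you will typically also configure the AWS provider itself. A minimal sketch (the region matches the one used in vpc.tf below, and credentials are assumed to come from your aws-vault profile):
provider "aws" {
  region = "eu-west-1"
}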
Create a file to define your VPC.
touch vpc.tf
Open vpc.tf
in your text editor, paste in the configuration below, and save the file.
locals {
vpc_name = "vpc-rstudio"
vpc_cidr = "10.1.0.0/16"
region = "eu-west-1"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.6.0"
name = local.vpc_name
cidr = local.vpc_cidr
azs = formatlist(format("%s%%s", local.region), ["a", "b"])
private_subnets = ["${cidrsubnet(local.vpc_cidr, 3, 0)}", "${cidrsubnet(local.vpc_cidr, 3, 2)}"]
public_subnets = ["${cidrsubnet(local.vpc_cidr, 4, 2)}", "${cidrsubnet(local.vpc_cidr, 4, 6)}"]
manage_default_security_group = true
default_security_group_ingress = [{}]
default_security_group_egress = [{}]
}
We are using a community Terraform module that creates VPC resources on AWS. We specify some custom parameters, like vpc_name
, vpc_cidr
and region in which the resources shall be created. For more information on how this module works, make sure to read module documentation.
Create a file to define your security groups, which will act as a virtual firewall for your RStudio Workbench instances to control incoming and outgoing traffic.
touch sg.tf
Open sg.tf
in your text editor, paste in the configuration below, and save the file.
module "security_group" {
source = "terraform-aws-modules/security-group/aws"
version = "4.3.0"
name = "rstudio-workbench-sg"
description = "Security group for usage with Rstudio Workbench EC2 instance"
vpc_id = module.vpc.vpc_id
ingress_cidr_blocks = ["0.0.0.0/0"]
ingress_rules = ["http-80-tcp", "https-443-tcp", "ssh-tcp", "all-icmp"]
ingress_with_cidr_blocks = [
{
from_port = 8787
to_port = 8787
protocol = "tcp"
description = "User-service ports (ipv4)"
cidr_blocks = "0.0.0.0/0"
},
]
egress_rules = ["all-all"]
}
We are using a community Terraform module that creates security group resources on AWS. We specify some custom parameters, like vpc_id
from vpc.tf
file where we defined our VPC in the previous step, security group name, ingress, and egress rules.
We allow all incoming connections on ports 80, 443, 22 (SSH), 8787 (default RStudio Workbench port) and also allow for ICMP requests. We allow all outgoing connections so that our instance can reach out to the internet. For more information on how this module works, make sure to read module documentation.
Create a file to define your RStudio Workbench and EC2 instance.
touch ec2.tf
Open ec2.tf
in your text editor, paste in the configuration below, and save the file.
data "aws_ami" "default" {
most_recent = true
owners = ["self"]
filter {
name = "name"
values = [
"RStudioWorkbench*"
]
}
}
resource "aws_key_pair" "auth" {
key_name = "rstudio-workbench-ssh-key"
public_key = file("~/.ssh/id_rsa.pub")
}
module "ec2_instance" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "2.19.0"
instance_count = 1
name = "RStudio Workbench"
ami = data.aws_ami.default.id
instance_type = "m5.4xlarge"
subnet_ids = module.vpc.public_subnets
vpc_security_group_ids = [module.security_group.security_group_id]
associate_public_ip_address = true
key_name = aws_key_pair.auth.id
root_block_device = [
{
delete_on_termination = true
device_name = "/dev/sda1"
encrypted = true
volume_size = 180
volume_type = "gp2"
}
]
}
We use aws_ami
as a data source to get the ID of RStudio Workbench AMI we built using Packer previously. aws_key_pair
provides an EC2 key pair resource. A key pair is used to control login access to EC2 instances. We need to specify a path to a public SSH key on our local workstation so that later we can connect to our EC2. Use Amazon’s guide on generating keys to create an SSH key in case you do not have any yet.
We are also using a community Terraform module that creates an EC2 resource on AWS. We specify some custom parameters, like:
- instance_count – the number of instances to run
- ami – the ID of the RStudio Workbench AMI
- instance_type – the flavor of our server (refer to instance types for available types and pricing)
- subnet_ids – the network subnets to run the server in (created in vpc.tf previously)
- key_name – the SSH key to associate with the instance
- vpc_security_group_ids – a list of security groups to associate with our instance
- associate_public_ip_address – a boolean flag that indicates whether a publicly accessible IP should be assigned to our instance; we set this to true to be able to access our RStudio Workbench over the internet
- root_block_device – a configuration block to extend the default storage for our instance
For more information on how this module works, make sure to read the module documentation.
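To avoid hunting for the instance address in the EC2 console later, you could also expose it as a Terraform output. This is a small optional addition, assuming the list-style outputs of the 2.x ec2-instance module (it returns lists because of instance_count):
output "workbench_public_ip" {
  description = "Public IP of the RStudio Workbench instance"
  value       = element(module.ec2_instance.public_ip, 0)
}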
Optionally, we can create an ALB (Application Load Balancer), a feature of Elastic Load Balancing that lets you configure and route incoming end-user traffic to applications running in the AWS public cloud. If you have a custom domain, we can use the ALB to expose RStudio Workbench behind a human-friendly DNS record, something like https://workbench.appsilon.com. To do so, create a file alb.tf:
touch alb.tf
Open alb.tf
in your text editor, paste in the configuration below and save the file.
data "aws_route53_zone" "default" {
name = "your.domain.com"
}
resource "aws_route53_record" "public" {
zone_id = data.aws_route53_zone.default.zone_id
name = "workbench.your.domain.com"
type = "A"
alias {
name = aws_lb.main.dns_name
zone_id = aws_lb.main.zone_id
evaluate_target_health = false
}
}
module "acm" {
source = "terraform-aws-modules/acm/aws"
version = "~> 3.2.0"
zone_id = data.aws_route53_zone.default.zone_id
domain_name = "workbench.your.domain.com"
wait_for_validation = true
}
resource "aws_lb" "main" {
name = "rstudio-workbench-alb"
subnets = module.vpc.public_subnets
security_groups = [module.security_group.security_group_id]
}
resource "aws_lb_target_group" "rstudio" {
name = "rstudio-workbench-https"
port = 8787
protocol = "HTTP"
vpc_id = module.vpc.vpc_id
target_type = "instance"
health_check {
enabled = true
interval = 30
path = "/"
port = 8787
healthy_threshold = 3
unhealthy_threshold = 3
timeout = 6
protocol = "HTTP"
matcher = "200-399"
}
# Ensure the ALB exists before things start referencing this target group.
depends_on = [aws_lb.main]
}
resource "aws_lb_listener" "http" {
load_balancer_arn = aws_lb.main.id
port = "80"
protocol = "HTTP"
default_action {
type = "redirect"
redirect {
port = "443"
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
}
resource "aws_lb_listener" "https" {
load_balancer_arn = aws_lb.main.id
port = "443"
protocol = "HTTPS"
certificate_arn = module.acm.acm_certificate_arn
default_action {
target_group_arn = aws_lb_target_group.rstudio.id
type = "forward"
}
}
resource "aws_lb_listener_certificate" "main" {
listener_arn = aws_lb_listener.https.arn
certificate_arn = module.acm.acm_certificate_arn
}
resource "aws_lb_target_group_attachment" "main" {
count = 1
target_group_arn = aws_lb_target_group.rstudio.arn
target_id = element(module.ec2_instance.id, count.index)
port = 8787
}
We are using the ACM community Terraform module here. The module creates an ACM certificate and validates it using Route53 DNS. The ACM module relies on aws_route53_zone data retrieved from your account, hence we need to specify the name of an existing hosted zone here. The module documentation contains pertinent information and updates. The rest of the file configures HTTPS access to the RStudio Workbench instance defined in ec2.tf, creates the load balancer, and adds a human-friendly DNS entry. All HTTP traffic is redirected to HTTPS.
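If you deploy the ALB, it can also be handy to surface the final URL as a Terraform output after apply. A small optional sketch, reusing the Route53 record defined in alb.tf above:
output "workbench_url" {
  description = "Human-friendly HTTPS URL for RStudio Workbench"
  value       = "https://${aws_route53_record.public.name}"
}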
When you create a new configuration — or check out an existing configuration from version control — you need to initialize the directory with terraform init
.
Initializing a configuration directory downloads and installs the providers defined in the configuration, which in this case is the aws
provider.
Initialize the directory.
$ terraform init
Initializing modules...
Downloading terraform-aws-modules/acm/aws 3.2.0 for acm...
- acm in .terraform/modules/acm
Downloading terraform-aws-modules/ec2-instance/aws 2.19.0 for ec2_instance...
- ec2_instance in .terraform/modules/ec2_instance
Downloading terraform-aws-modules/security-group/aws 4.3.0 for security_group...
- security_group in .terraform/modules/security_group
Downloading terraform-aws-modules/vpc/aws 3.6.0 for vpc...
- vpc in .terraform/modules/vpc
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching ">= 2.42.0, >= 2.53.0, >= 3.24.0, >= 3.28.0, ~> 3.53.0"...
- Installing hashicorp/aws v3.53.0...
- Installed hashicorp/aws v3.53.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Next, execute terraform validate
to make sure the configuration is valid.
terraform validate
Success! The configuration is valid.
Apply the configuration now with the aws-vault exec appsilon -- terraform apply
command. Terraform will print output similar to what is shown below. We have truncated some of the output to save space.
Beware: the apply command should be executed from within the aws-vault environment so that Terraform can access your AWS account.
Before it applies any changes, Terraform prints out the execution plan which describes the actions Terraform will take in order to change your infrastructure to match the configuration.
Terraform will now pause and wait for your approval before proceeding. If anything in the plan seems incorrect or dangerous, it is safe to abort here with no changes made to your infrastructure.
In this case, the plan is acceptable, so type yes at the confirmation prompt to proceed. Executing the plan will take a few minutes since Terraform waits for the EC2 instance to become available.
You have now created infrastructure using Terraform! Visit the EC2 console and find your new instance up and running. Use the EC2 console to grab the IP address in case you want to connect to it over SSH.
Note: Per the
aws
provider block, your instance was created in theeu-west-1
region. Ensure that your AWS Console is set to this region.
You can also access your RStudio Workbench instance directly over IP or over DNS (if deployed with optional part – Application Load Balancer).
https://asciinema.org/a/431374
The terraform destroy command terminates resources managed by your Terraform project. This command is the inverse of terraform apply in that it terminates all the resources specified in your Terraform state. It does not destroy resources running elsewhere that are not managed by the current Terraform project. Use it with caution to remove the AWS resources you created during this tutorial and keep your AWS bill down.
Beware: like apply, the destroy command should be executed from within the aws-vault environment (for example, aws-vault exec appsilon -- terraform destroy) so that Terraform can access your AWS account.
You can find the complete example of the Terraform code above in Appsilon's GitHub repo: Terraform AWS RStudio Workbench example.
Whether you’re well entrenched in your cloud transformation or are just starting out, the most important thing you can do is keep learning. DevOps through the cloud can improve agility and time to market, but there’s always room for improvement. You might consider decoupling applications from physical resources and designing them to be cloud-native. Or make the most of RStudio Workbench by automating performance testing to validate efficient use of resources. You can also be selective when migrating existing applications to prioritize and make sure the cost is justified.
Original article sourced at: https://appsilon.com
1671198720
Choosing between infrastructure-as-code tools CloudFormation and Terraform can be arduous. It’s helpful to have some advice from someone with practical experience scaling apps using both technologies.
If, like me, you scoured the internet to help you choose between CloudFormation and Terraform as your next infrastructure-as-code (IaC) tool without finding a definitive answer, I shared your pain for a long while. Now, I have significant experience with both tools and I can make an informed decision about which one to use.
For your IaC project on AWS, I recommend choosing CloudFormation, for the reasons that follow.
One difference between CloudFormation and Terraform is how code and instantiations relate to each other within each service.
CloudFormation has the concept of a stack, which is the instantiation of a template. The same template can be instantiated ad infinitum by a given client in a given account, across accounts, or by different clients.
Terraform has no such concept and requires a one-to-one relationship between code and its instantiation. It would be similar to duplicating the source code of a web server for each server you want to run, or duplicating the code every time you need to run an application instead of running the compiled version.
This point is quite trivial in the case of a simple setup, but it quickly becomes a major pain point for medium- to large-scale operations. In Terraform, every time you need to spin up a new stack from existing code, you need to duplicate the code. And copy/pasting script files is a very easy way to sabotage yourself and corrupt resources that you didn’t intend to touch.
Terraform actually doesn’t have a concept of stacks like CloudFormation, which clearly shows that Terraform has been built from the ground up to have a one-to-one match between code and the resources it manages. This was later partially rectified by the concept of environments (which have since been renamed “workspaces”), but the way to use those makes it incredibly easy to deploy to an unwanted environment. This is because you have to run terraform workspace select
prior to deploying, and forgetting this step will deploy to the previously selected workspace, which might or might not be the one you want.
In practice, it is true that this problem is mitigated by the use of Terraform modules, but even in the best case, you would require a significant amount of boilerplate code. In fact, this problem was so acute that people needed to create a wrapper tool around Terraform to address this problem: Terragrunt.
Another important difference between CloudFormation and Terraform is how they each manage state and permissions.
CloudFormation manages stack states for you and doesn’t give you any options. But CloudFormation stack states have been solid in my experience. Also, CloudFormation allows less privileged users to manage stacks without having all the necessary permissions required by the stack itself. This is because CloudFormation can get the permissions from a service role attached to the stack rather than the permissions from the user running the stack operation.
Terraform requires you to provide it with a backend to manage state. The default is a local file, which is totally unsatisfactory once state needs to be shared and protected.
So you need a robust and shared state, which on AWS is usually achieved by using an S3 bucket to store the state file, accompanied by a DynamoDB table to handle concurrency.
This means you need to create an S3 bucket and DynamoDB table manually for each stack you want to instantiate, and also manage permissions manually for these two objects to restrict less privileged users from having access to data they should not have access to. If you have just a couple of stacks, that won’t be too much of a problem, but if you have 20 stacks to manage, that does become very cumbersome.
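For the record, a typical S3 backend with DynamoDB locking looks roughly like the block below. The bucket and table names are placeholders, and both resources have to be created and secured by hand beforehand, which is exactly the per-stack chore described above:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # pre-created S3 bucket (placeholder name)
    key            = "my-stack/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "my-terraform-locks"      # pre-created table used for state locking (placeholder)
    encrypt        = true
  }
}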
By the way, when using Terraform workspaces, it is not possible to have one DynamoDB table per workspace. This means that if you want an IAM user with minimal permissions to perform deployments, that user will be able to fiddle with the locks of all workspaces because DynamoDB permissions are not fine-grained to the item level.
On this point, both CloudFormation and Terraform can be a bit tricky. If you change the logical ID (i.e., the name) of a resource, both will consider that the old resource must be destroyed and a new one created. So it’s generally a bad idea to change the logical ID of resources in either tool, especially for nested stacks in CloudFormation.
As mentioned in the first section, Terraform doesn’t handle basic dependencies. Unfortunately, the Terraform developers aren’t giving the long-standing issue much attention, despite the apparent lack of workarounds.
Given that proper dependency management is absolutely critical to an IaC tool, such issues in Terraform call its suitability into question as soon as business-critical operations are involved, such as deploying to a production environment. CloudFormation gives a much more professional feel, and AWS is always very attentive to making sure that it offers production-grade tools to its clients. In all the years I have been using CloudFormation, I’ve never come across a problem with dependency management.
CloudFormation allows a stack to export some of its output variables, which can then be reused by other stacks. To be honest, this functionality is limited, as you won’t be able to instantiate more than one stack per region. This is because you can’t export two variables with the same name, and exported variables don’t have namespaces.
Terraform offers no such facilities, so you are left with less desirable options. Terraform allows you to import the state of another stack, but that gives you access to all the information in that stack, including the many secrets that are stored in the state. Alternatively, a stack can export some variables in the form of a JSON file stored in an S3 bucket, but again, this option is more cumbersome: You have to decide which S3 bucket to use and give it the appropriate permissions, and write all the plumbing code yourself on both the writer and reader sides.
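For completeness, the remote-state approach mentioned above looks roughly like this; note that it exposes every output (and, indirectly, the whole state file) of the referenced stack. The bucket and key are placeholders:
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"          # placeholder
    key    = "network/terraform.tfstate"   # placeholder
    region = "us-east-1"
  }
}

# Example usage elsewhere in the configuration:
# vpc_id = data.terraform_remote_state.network.outputs.vpc_id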
One advantage of Terraform is that it has data sources. Terraform can thus query resources not managed by Terraform. However, in practice, this has little relevance when you want to write a generic template because you then won’t assume anything about the target account. The equivalent in CloudFormation is to add more template parameters, which thus involves repetition and potential for errors; however, in my experience, this has never been a problem.
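As an illustration, a data source lets Terraform look up something it does not manage, for instance an existing VPC found by tag (the tag value is a placeholder):
data "aws_vpc" "shared" {
  tags = {
    Name = "shared-vpc" # placeholder tag on a VPC created outside Terraform
  }
}

# Attributes of the looked-up VPC can then be referenced,
# e.g. data.aws_vpc.shared.cidr_block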
Back to the issue of Terraform’s dependency management, another example is you get an error when you try to update the settings for a load balancer and get the following:
Error: Error deleting Target Group: ResourceInUse: Target group 'arn:aws:elasticloadbalancing:us-east-1:723207552760:targetgroup/strategy-api-default-us-east-1/14a4277881e84797' is currently in use by a listener or a rule
status code: 400, request id: 833d8475-f702-4e01-aa3a-d6fa0a141905
The expected behavior would be that Terraform detects that the target group is a dependency of some other resource that is not being deleted, and consequently, it should not try to delete it—but neither should it throw an error.
Although Terraform is a command-line tool, it is very clear that it expects a human to run it, as it is very much interactive. It is possible to run it in batch mode (i.e., from a script), but this requires some additional command-line arguments. The fact that Terraform has been developed to be run by humans by default is quite puzzling, given that an IaC tool’s purpose is automation.
Terraform is difficult to debug. Error messages are often very basic and don’t allow you to understand what is going wrong, in which case you will have to run Terraform with TF_LOG=debug
, which produces a huge amount of output to trawl through. Complicating this, Terraform sometimes makes API calls to AWS that fail, but the failure is not a problem with Terraform. In contrast, CloudFormation provides reasonably clear error messages with enough details to allow you to understand where the problem is.
An example Terraform error message:
Error: error reading S3 bucket Public Access Block: NoSuchBucket: The specified bucket does not exist
status code: 404, request id: 19AAE641F0B4AC7F, host id: rZkgloKqxP2/a2F6BYrrkcJthba/FQM/DaZnj8EQq/5FactUctdREq8L3Xb6DgJmyKcpImipv4s=
The message above looks clear, but it does not actually reflect the underlying problem (which in this case was a permissions issue).
This error message also shows how Terraform can sometimes paint itself into a corner. For example, if you create an S3 bucket and an aws_s3_bucket_public_access_block
resource on that bucket, and if for some reason you make some changes in the Terraform code that destroys that bucket—e.g., in the “change implies delete and create” gotcha described above—Terraform will get stuck trying to load the aws_s3_bucket_public_access_block
but continually failing with the above error. The correct behavior from Terraform would be to replace or delete the aws_s3_bucket_public_access_block
as appropriate.
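For context, the two resources involved in that scenario are typically paired as shown below (the bucket name is a placeholder). Once the bucket disappears in a way Terraform did not plan for, the public-access-block resource has nothing left to read and the refresh fails:
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # placeholder, must be globally unique
}

resource "aws_s3_bucket_public_access_block" "example" {
  bucket                  = aws_s3_bucket.example.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}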
Lastly, you can’t use the CloudFormation helper scripts with Terraform. This might be an annoyance, especially if you’re hoping to use cfn-signal, which tells CloudFormation that an EC2 instance has finished initializing itself and is ready to serve requests.
Syntax-wise, Terraform does have a good advantage over CloudFormation: it supports loops. But in my experience, this feature can turn out to be a bit dangerous. Typically, a loop is used to create a number of identical resources; however, when you later update the stack with a different count, you may need to link the old and the new resources (for example, using zipmap() to combine values from two arrays that now have different sizes, because one reflects the old loop size and the other the new one). Such a problem can happen without loops too, but without loops it would be much more evident to the person writing the script. The use of loops in such a case obfuscates the problem.
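A minimal sketch of such a count-based loop, with placeholder values: shrinking worker_count later destroys the highest-indexed instances, which is exactly where the old-versus-new mismatch described above tends to bite.
variable "worker_count" {
  default = 3
}

resource "aws_instance" "worker" {
  count         = var.worker_count
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "worker-${count.index}"
  }
}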
Whether Terraform’s syntax or CloudFormation’s syntax is better is mostly a question of preferences. CloudFormation initially supported only JSON, but JSON templates are very hard to read. Fortunately, CloudFormation also supports YAML, which is much easier to read and allows comments. CloudFormation’s syntax tends to be quite verbose, though.
Terraform’s syntax uses HCL, HashiCorp’s own configuration language, which is JSON-compatible but quite idiosyncratic. Terraform offers more functions than CloudFormation, and they are usually easier to make sense of. So it could be argued that Terraform does have a slight advantage on this point.
Another advantage of Terraform is its readily available set of community-maintained modules, and this does simplify writing templates. One issue might be that such modules might not be secure enough to comply with an organization’s requirements. So for organizations requiring a high level of security, reviewing these modules (as well as further versions as they come) might be a necessity.
Generally speaking, Terraform modules are much more flexible than CloudFormation nested stacks. A CloudFormation nested stack tends to hide everything below it. From the nesting stack, an update operation would show that the nested stack will be updated but doesn’t show in detail what is going to happen inside the nested stack.
A final point, which could be contentious actually, is that CloudFormation attempts to roll back failed deployments. This is quite an interesting feature but can unfortunately be very long (for example, it could take up to three hours for CloudFormation to decide that a deployment to Elastic Container Service has failed). In contrast, in the case of failure, Terraform just stops wherever it was. Whether a rollback feature is a good thing or not is debatable, but I have come to appreciate the fact that a stack is maintained in a working state as much as possible when a longer wait happens to be an acceptable tradeoff.
Terraform does have advantages over CloudFormation. The most important one, in my opinion, is that when applying an update, Terraform shows you all the changes you are about to make, including drilling down into all the modules it is using. By contrast, CloudFormation, when using nested stacks, only shows you that the nested stack needs updating, but doesn’t provide a way to drill down into the details. This can be frustrating, as this type of information is quite important to know before hitting the “go” button.
Both CloudFormation and Terraform support extensions. In CloudFormation, it is possible to manage so-called “custom resources” by using an AWS Lambda function of your own creation as a back end. For Terraform, extensions are much easier to write and form part of the code. So there is an advantage for Terraform in this case.
Terraform can handle many cloud vendors. This puts Terraform in a position of being able to unify a given deployment among multiple cloud platforms. For example, say you have a single workload spread between AWS and Google Cloud Platform (GCP). Normally, the AWS part of the workload would be deployed using CloudFormation, and the GCP part using GCP’s Cloud Deployment Manager. With Terraform, you could instead use a single script to deploy and manage both stacks in their respective cloud platforms. In this way, you only have to deploy one stack instead of two.
There are quite a few non-arguments that continue to circulate around the internet. The biggest one thrown around is that because Terraform is multi-cloud, you can use one tool to deploy all your projects, no matter which cloud platform they target. Technically, this is true, but it’s not the big advantage it may appear to be, especially when managing typical single-cloud projects. The reality is that there is an almost one-to-one correspondence between resources declared in (for example) CloudFormation and the same resources declared in a Terraform script. Since you have to know the details of cloud-specific resources either way, the difference comes down to syntax, which is hardly the biggest pain point in managing deployments.
Some argue that by using Terraform, one can avoid vendor lock-in. This argument doesn’t hold in the sense that by using Terraform, you are locked in by HashiCorp (the creator of Terraform), just in the same way that by using CloudFormation, you are locked in by AWS, and so on for other cloud platforms.
The fact that Terraform modules are easier to use is to me of lesser importance. First of all, I believe that AWS deliberately wants to avoid hosting a single repository for community-based CloudFormation templates because of the perceived responsibility for user-made security holes and breaches of various compliance programs.
At a more personal level, I fully understand the benefits of using libraries in the case of software development, as those libraries can easily run into the tens of thousands of lines of code. In the case of IaC, however, the size of the code is usually much less, and such modules are usually a few dozen lines long. Using copy/paste is actually not that bad an idea in the sense that it avoids problems with maintaining compatibility and delegating your security to unknown people.
Using copy/paste is frowned upon by many developers and DevOps engineers, and there are good reasons behind this. However, my point of view is that using copy/paste for snippets of code allows you to easily tailor it to your needs, and there is no need to make a library out of it and spend a lot of time to make it generic. The pain of maintaining those snippets of code is usually very low, unless your code becomes duplicated in, say, a dozen or more templates. In such a case, appropriating the code and using it as nested stacks makes sense, and the benefits of not repeating yourself are probably greater than the annoyance of not being able to see what’s going to be updated inside the nested stack when you perform an update operation.
With CloudFormation, AWS wants to provide its customers with a rock-solid tool which will work as intended at all times. Terraform’s team does too, of course—but it seems that a crucial aspect of their tooling, dependency management, is unfortunately not a priority.
Terraform might have a place in your project, especially if you have a multi-cloud architecture, in which case Terraform scripts are one way to unify the management of resources across the various cloud vendors you are using. But you could still avoid Terraform’s downsides in this case by only using Terraform to manage stacks already implemented using their respective cloud-specific IaC tools.
The overall feeling in the Terraform vs. CloudFormation comparison is that CloudFormation, although imperfect, is more professional and reliable, and I would definitely recommend it for any project that isn’t specifically multi-cloud.
Original article source at: https://www.toptal.com/
1671112500
In this Kubernetes article we will learn about Deploy Java Microservices on Amazon EKS Using Terraform and Kubernetes. When it comes to infrastructure, public clouds are the most popular choice these days, especially Amazon Web Services (AWS). If you are in one of those lucky or unlucky (depending on how you see it) teams running microservices, then you need a way to orchestrate their deployments. When it comes to orchestrating microservices, Kubernetes is the de-facto choice. Most public cloud providers also provide managed Kubernetes as a service; for example, Google provides Google Kubernetes Engine (GKE), Microsoft provides Azure Kubernetes Service (AKS), and Amazon provides Amazon Elastic Kubernetes Service (EKS).
This doesn’t mean that deploying and managing microservices on the public cloud is easy; each cloud service comes with its own challenges and pain. This is especially true for Amazon EKS, which, in my opinion, is the most difficult Kubernetes service to use, but also one of the most flexible. This is because EKS consists of some clever orchestrations doing a complex dance on top of other AWS services like EC2, EBS, etc.
If you want to run a microservice stack on EKS, you will need to spend some extra time and effort setting it up and managing it. This is where infrastructure as code (IaC) tools like Terraform come in handy.
So here is what you will learn to do today: scaffold a Java microservice stack, define an EKS cluster and its supporting AWS infrastructure with Terraform, provision the cluster, and deploy and secure the applications on it.
At this point, the first question that might pop up in your mind would be, “Why not use CloudFormation?”. It’s a good question; after all, CloudFormation is built by AWS and hence sounds like an excellent solution to manage AWS resources. But anyone who has tried both CloudFormation and Terraform will probably tell you to forget that CloudFormation even exists. I think CloudFormation is far more complex and less developer-friendly than Terraform. You also need to write a lot more boilerplate with CloudFormation in YAML or JSON. Yikes! And most importantly, Terraform is far more powerful and flexible than CloudFormation. It’s cross-platform, which means you can take care of all your infrastructure management needs on any platform with one tool.
You need a microservice stack to deploy to the cluster. I’m using a microservice stack scaffolded for demo purposes using JHipster. You can use another microservice stack if you want. If you prefer using the same application as in this demo, then you can either scaffold it using JHipster JDL or clone the sample repository from GitHub.
Option 1: Scaffold the microservice stack using JHipster
mkdir jhipster-microservice-stack
cd jhipster-microservice-stack
# download the JDL file.
jhipster download https://raw.githubusercontent.com/oktadev/okta-jhipster-k8s-eks-microservices-example/main/apps.jdl
# Update the `dockerRepositoryName` property to use your Docker Repository URI/Name.
# scaffold the apps.
jhipster jdl apps.jdl
Option 2: Clone the sample repository
git clone https://github.com/oktadev/okta-jhipster-k8s-eks-microservices-example
In either case, remember to change the Docker repository name to match your Docker repository.
The JHipster scaffolded sample application has a gateway application, two microservices, and uses JHipster Registry for service discovery and centralized configuration.
Now let us move on to the important part of the tutorial. Creating an EKS cluster in AWS is not as straightforward as in Google Cloud Platform (GCP). You need to also create a lot more resources for everything to work correctly without surprises. You will be using a bunch of Terraform providers to help with this, and you will also use some prebuilt Terraform modules like AWS VPC Terraform module and Amazon EKS Blueprints for Terraform to reduce the amount of boilerplate you need to write.
You will create a VPC with public and private subnets spread across three availability zones, an internet gateway and a NAT gateway, and an EKS cluster with a managed node group running in the private subnets, along with the IAM roles and security groups that the cluster requires.
First, make sure you use a specific version of the providers as different versions might use different attributes and features. Create a versions.tf
file:
mkdir terraform
cd terraform
touch versions.tf
Add the following to the file:
terraform {
required_version = ">= 1.0.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 3.72"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.10"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.4.1"
}
random = {
source = "hashicorp/random"
version = ">= 3.2.0"
}
null = {
source = "hashicorp/null"
version = ">= 3.1"
}
}
}
Next, you need to define variables and configure the providers. Create a config.tf
file:
touch config.tf
Add the following to the file:
# ## To save state in s3. Update to suit your needs
# backend "s3" {
# bucket = "create-an-s3-bucket-and-provide-name-here"
# region = local.region
# key = "eks-cluster-with-new-vpc/terraform.tfstate"
# }
variable "region" {
default = "eu-west-1"
description = "AWS region"
}
resource "random_string" "suffix" {
length = 8
special = false
}
data "aws_availability_zones" "available" {}
locals {
name = "okta-jhipster-eks-${random_string.suffix.result}"
region = var.region
cluster_version = "1.22"
instance_types = ["t2.large"] # can be multiple, comma separated
vpc_cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)
tags = {
Blueprint = local.name
GitHubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
}
}
provider "aws" {
region = local.region
}
# Kubernetes provider
# You should **not** schedule deployments and services in this workspace.
# This keeps workspaces modular (one for provision EKS, another for scheduling
# Kubernetes resources) as per best practices.
provider "kubernetes" {
host = module.eks_blueprints.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks_blueprints.eks_cluster_id]
}
}
provider "helm" {
kubernetes {
host = module.eks_blueprints.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks_blueprints.eks_cluster_id]
}
}
}
You can uncomment the backend section above to save state in S3 instead of on your local filesystem; note that backend configuration must live inside a terraform block, such as the one in versions.tf. This is recommended for a production setup so that everyone on the team works against the same state. This file defines configurable and local variables used across the workspace and configures some of the providers used. The Kubernetes provider is included in this file so the EKS module can complete successfully; otherwise, it throws an error when creating kubernetes_config_map.aws_auth. The Helm provider is used to install Kubernetes add-ons to the cluster.
Next up, you need a VPC, subnets, route tables, and other networking bits. You will use the vpc
module from the terraform-aws-modules
repository. This module is a wrapper around the AWS VPC module. It makes it easier to configure VPCs and all the other required networking resources. Create a vpc.tf
file:
touch vpc.tf
Add the following to the file:
#---------------------------------------------------------------
# VPC, Subnets, Internet gateway, Route tables, etc.
#---------------------------------------------------------------
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 3.0"
name = local.name
cidr = local.vpc_cidr
azs = local.azs
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k)]
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 10)]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
# Manage so we can name
manage_default_network_acl = true
default_network_acl_tags = { Name = "${local.name}-default" }
manage_default_route_table = true
default_route_table_tags = { Name = "${local.name}-default" }
manage_default_security_group = true
default_security_group_tags = { Name = "${local.name}-default" }
public_subnet_tags = {
"kubernetes.io/cluster/${local.name}" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${local.name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
tags = local.tags
}
This will create a new VPC with three public and three private subnets (one pair per availability zone), an internet gateway and a single shared NAT gateway, managed (and therefore nameable) default route table, network ACL, and security group, plus the subnet tags EKS needs to place public and internal load balancers.
Now that you have the networking part done, you can build configurations for the EKS cluster and its add-ons. You will use the eks_blueprints module from terraform-aws-eks-blueprints, which wraps the terraform-aws-modules EKS module and provides additional modules for configuring EKS add-ons.
Create an eks-cluster.tf file:
touch eks-cluster.tf
Add the following to the file:
#---------------------------------------------------------------
# EKS cluster, worker nodes, security groups, IAM roles, K8s add-ons, etc.
#---------------------------------------------------------------
module "eks_blueprints" {
source = "github.com/aws-ia/terraform-aws-eks-blueprints?ref=v4.0.9"
cluster_name = local.name
cluster_version = local.cluster_version
vpc_id = module.vpc.vpc_id
private_subnet_ids = module.vpc.private_subnets
managed_node_groups = {
node = {
node_group_name = "managed-ondemand"
instance_types = local.instance_types
min_size = 2
subnet_ids = module.vpc.private_subnets
}
}
tags = local.tags
}
module "eks_blueprints_kubernetes_addons" {
source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.0.9"
eks_cluster_id = module.eks_blueprints.eks_cluster_id
eks_cluster_endpoint = module.eks_blueprints.eks_cluster_endpoint
eks_oidc_provider = module.eks_blueprints.oidc_provider
eks_cluster_version = module.eks_blueprints.eks_cluster_version
# EKS Managed Add-ons
enable_amazon_eks_vpc_cni = true
enable_amazon_eks_coredns = true
enable_amazon_eks_kube_proxy = true
# K8S Add-ons
enable_aws_load_balancer_controller = true
enable_metrics_server = true
enable_cluster_autoscaler = true
enable_aws_cloudwatch_metrics = false
tags = local.tags
}
# To update local kubeconfig with new cluster details
resource "null_resource" "kubeconfig" {
depends_on = [module.eks_blueprints_kubernetes_addons]
provisioner "local-exec" {
command = "aws eks --region ${local.region} update-kubeconfig --name $AWS_CLUSTER_NAME"
environment = {
AWS_CLUSTER_NAME = local.name
}
}
}
The eks_blueprints module definition creates the EKS control plane, a managed node group (at least two nodes of the configured instance type) in the private subnets, and the IAM roles and security groups the cluster needs.
The eks_blueprints_kubernetes_addons module definition enables the EKS managed add-ons (Amazon VPC CNI, CoreDNS, and kube-proxy) and installs the AWS Load Balancer Controller, Metrics Server, and Cluster Autoscaler on the cluster.
The null_resource
configuration updates your local kubeconfig with the new cluster details. It’s not a required step for provisioning but just a handy hack.
Finally, you can also define some outputs to be captured. Create an outputs.tf file:
touch outputs.tf
Add the following to the file:
output "vpc_private_subnet_cidr" {
description = "VPC private subnet CIDR"
value = module.vpc.private_subnets_cidr_blocks
}
output "vpc_public_subnet_cidr" {
description = "VPC public subnet CIDR"
value = module.vpc.public_subnets_cidr_blocks
}
output "vpc_cidr" {
description = "VPC CIDR"
value = module.vpc.vpc_cidr_block
}
output "eks_cluster_id" {
description = "EKS cluster ID"
value = module.eks_blueprints.eks_cluster_id
}
output "eks_managed_nodegroups" {
description = "EKS managed node groups"
value = module.eks_blueprints.managed_node_groups
}
output "eks_managed_nodegroup_ids" {
description = "EKS managed node group ids"
value = module.eks_blueprints.managed_node_groups_id
}
output "eks_managed_nodegroup_arns" {
description = "EKS managed node group arns"
value = module.eks_blueprints.managed_node_group_arn
}
output "eks_managed_nodegroup_role_name" {
description = "EKS managed node group role name"
value = module.eks_blueprints.managed_node_group_iam_role_names
}
output "eks_managed_nodegroup_status" {
description = "EKS managed node group status"
value = module.eks_blueprints.managed_node_groups_status
}
output "configure_kubectl" {
description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = module.eks_blueprints.configure_kubectl
}
Our Terraform definitions are ready. Now you can provision the cluster. First, initialize Terraform workspace and plan the changes:
# download modules and providers. Initialize state.
terraform init
# see a preview of what will be done
terraform plan
Review the plan and make sure everything is correct. Ensure you have configured your AWS CLI and IAM Authenticator to use the correct AWS account. If not, run the following:
# Visit https://console.aws.amazon.com/iam/home?#/security_credentials for creating access keys
aws configure
Now you can apply the changes:
terraform apply
Confirm by typing yes
when prompted. This will take a while (15-20 minutes), so sit back and have a coffee or contemplate what led you to this point in life. 😉
Once the EKS cluster is ready, you will see the output variables printed to the console.
configure_kubectl = "aws eks --region eu-west-1 update-kubeconfig --name okta-tf-demo"
eks_cluster_id = "okta-tf-demo"
eks_managed_nodegroup_arns = tolist([
"arn:aws:eks:eu-west-1:216713166862:nodegroup/okta-tf-demo/managed-ondemand-20220610125341399700000010/f0c0a6d6-b8e1-cf91-3d21-522552d6bc2e",
])
eks_managed_nodegroup_ids = tolist([
"okta-tf-demo:managed-ondemand-20220610125341399700000010",
])
eks_managed_nodegroup_role_name = tolist([
"okta-tf-demo-managed-ondemand",
])
eks_managed_nodegroup_status = tolist([
"ACTIVE",
])
eks_managed_nodegroups = tolist([
{
"node" = {
"managed_nodegroup_arn" = [
"arn:aws:eks:eu-west-1:216713166862:nodegroup/okta-tf-demo/managed-ondemand-20220610125341399700000010/f0c0a6d6-b8e1-cf91-3d21-522552d6bc2e",
]
"managed_nodegroup_iam_instance_profile_arn" = [
"arn:aws:iam::216713166862:instance-profile/okta-tf-demo-managed-ondemand",
]
"managed_nodegroup_iam_instance_profile_id" = [
"okta-tf-demo-managed-ondemand",
]
"managed_nodegroup_iam_role_arn" = [
"arn:aws:iam::216713166862:role/okta-tf-demo-managed-ondemand",
]
"managed_nodegroup_iam_role_name" = [
"okta-tf-demo-managed-ondemand",
]
"managed_nodegroup_id" = [
"okta-tf-demo:managed-ondemand-20220610125341399700000010",
]
"managed_nodegroup_launch_template_arn" = []
"managed_nodegroup_launch_template_id" = []
"managed_nodegroup_launch_template_latest_version" = []
"managed_nodegroup_status" = [
"ACTIVE",
]
}
},
])
region = "eu-west-1"
vpc_cidr = "10.0.0.0/16"
vpc_private_subnet_cidr = [
"10.0.10.0/24",
"10.0.11.0/24",
"10.0.12.0/24",
]
vpc_public_subnet_cidr = [
"10.0.0.0/24",
"10.0.1.0/24",
"10.0.2.0/24",
]
You should see the cluster details if you run kdash
or kubectl get nodes
commands.
Note: The EKS cluster defined here will not come under AWS free tier; hence, running this will cost money, so delete the cluster as soon as you finish the tutorial to keep the cost within a few dollars.
You can proceed to deploy the sample application. You could skip this step if you used a sample that does not use Okta or OIDC for authentication.
First, navigate to the store application folder.
Before you begin, you’ll need a free Okta developer account. Install the Okta CLI and run okta register
to sign up for a new account. If you already have an account, run okta login
. Then, run okta apps create jhipster
. Select the default app name, or change it as you see fit. Accept the default Redirect URI values provided for you.
What does the Okta CLI do?
The Okta CLI streamlines configuring a JHipster app and does several things for you:
- Creates an OIDC app with the correct redirect URIs:
  - login: http://localhost:8080/login/oauth2/code/oidc and http://localhost:8761/login/oauth2/code/oidc
  - logout: http://localhost:8080 and http://localhost:8761
- Creates the ROLE_ADMIN and ROLE_USER groups that JHipster expects
- Adds your current user to the ROLE_ADMIN and ROLE_USER groups
- Creates a groups claim in your default authorization server and adds the user’s groups to it
NOTE: The http://localhost:8761* redirect URIs are for the JHipster Registry, which is often used when creating microservices with JHipster. The Okta CLI adds these by default.
You will see output like the following when it’s finished:
Okta application configuration has been written to: /path/to/app/.okta.env
Run cat .okta.env
(or type .okta.env
on Windows) to see the issuer and credentials for your app. It will look like this (except the placeholder values will be populated):
export SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_OIDC_ISSUER_URI="https://{yourOktaDomain}/oauth2/default"
export SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_ID="{clientId}"
export SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_SECRET="{clientSecret}"
NOTE: You can also use the Okta Admin Console to create your app. See Create a JHipster App on Okta for more information.
Note: Make sure to add the newly created
.okta.env
file to your.gitignore
file so that you don’t accidentally expose your credentials to the public.
Update kubernetes/registry-k8s/application-configmap.yml
with the OIDC configuration from the .okta.env
file. The Spring Cloud Config server reads from this file and shares the values with the gateway and microservices.
data:
application.yml: |-
...
spring:
security:
oauth2:
client:
provider:
oidc:
issuer-uri: https://<your-okta-domain>/oauth2/default
registration:
oidc:
client-id: <client-id>
client-secret: <client-secret>
Next, configure the JHipster Registry to use OIDC for authentication. Modify kubernetes/registry-k8s/jhipster-registry.yml
to enable the oauth2
profile, which is preconfigured in the JHipster Registry application.
- name: SPRING_PROFILES_ACTIVE
value: prod,k8s,oauth2
The application is now ready.
As you may have noticed, you are setting secrets in plain text in the application-configmap.yml file, which is not ideal and not a security best practice. For this specific JHipster application, you can use the encrypt functionality provided by the JHipster Registry to encrypt the secrets. See Encrypt Your Secrets with Spring Cloud Config to learn how to do this. But that would still rely on a base64-encoded encryption key added as a Kubernetes Secret, which can be decoded. A better approach is to use AWS Secrets Manager, an external service like HashiCorp Vault, or Sealed Secrets. To learn more about these methods, see Encrypt Your Kubernetes Secrets.
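As one rough sketch of the Secrets Manager route (not part of this tutorial's code; the names and variables below are placeholders), you could manage the secret from the same Terraform workspace and deliver it to the cluster through a separate mechanism such as the External Secrets Operator:
resource "aws_secretsmanager_secret" "oidc_client" {
  name = "jhipster/oidc-client" # placeholder secret name
}

resource "aws_secretsmanager_secret_version" "oidc_client" {
  secret_id = aws_secretsmanager_secret.oidc_client.id
  secret_string = jsonencode({
    client_id     = var.oidc_client_id     # placeholder variables, e.g. supplied via TF_VAR_*
    client_secret = var.oidc_client_secret
  })
}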
You are ready to deploy to our shiny new EKS cluster, but first, you need to build and push the Docker images to a container registry. You can use Amazon Elastic Container Registry (ECR) or any other container registry.
You need to build Docker images for each app. This is specific to the JHipster application used in this tutorial which uses Jib to build the images. Make sure you are logged into Docker using docker login
. Navigate to each app folder (store, invoice, product) and run the following command:
./gradlew bootJar -Pprod jib -Djib.to.image=<docker-repo-uri-or-name>/<image-name>
Image names should be store
, invoice
, and product
.
Start the deployment using the handy script provided by JHipster. You could also manually apply deployments using kubectl apply -f <file>
commands.
cd kubernetes
./kubectl-apply.sh -f
You can also run the following command to see the status of the deployments:
kubectl get pods -n jhipster
View the registry using port-forwarding as follows, and you will be able to access the application at http://localhost:8761
.
kubectl port-forward svc/jhipster-registry -n jhipster 8761
You can access the gateway application using port-forwarding as follows, and you will be able to access the application at http://localhost:8080
.
kubectl port-forward svc/store -n jhipster 8080
Or, you can access the application via the exposed load balancer. Find the external IP of the store
service by navigating to the service tab in KDash or by running the following:
kubectl get svc store -n jhipster
Navigate to the Okta Admin Console and go to Applications > Applications from left-hand navigation. Find the application you created earlier with okta apps create jhipster
and add the external IP from kubectl get svc
command to the Sign-in redirect URIs and Sign-out redirect URIs. Make sure to use the same paths as the current localhost
entries.
Now you should be able to visit the external IP of the store
service on port 8080 and see the application, and you should be able to log in using your Okta credentials.
Once you are done with the tutorial, you can delete the cluster and all the resources created using Terraform by running the following commands:
cd terraform
# The commands below might take a while to finish.
terraform destroy -target="module.eks_blueprints_kubernetes_addons" -auto-approve
# If deleting VPC fails, then manually delete the load balancers and security groups
# for the load balancer associated with the VPC from AWS EC2 console and try again.
terraform destroy -target="module.eks_blueprints" -auto-approve
terraform destroy -target="module.vpc" -auto-approve
# cleanup anything left over.
terraform destroy -auto-approve
Original article sourced at: https://developer.okta.com
1670649420
In this Azure tutorial, we will learn how to use Terraform to create and manage a highly available (HA) AKS Kubernetes cluster in Azure, with Azure AD integration and Calico network policies enabled.
Azure Kubernetes Service (AKS) is a managed Kubernetes offering in Azure which lets you quickly deploy a production-ready Kubernetes cluster. It allows customers to focus on application development and deployment rather than the nitty-gritty of Kubernetes cluster management. The cluster control plane is deployed and managed by Microsoft, while the nodes and node pools where the applications are deployed are handled by the customer.
The AKS cluster deployment can be fully automated using Terraform. Terraform enables you to safely and predictably create, change, and improve infrastructure. It also supports advanced AKS configurations, such as availability zones, Azure AD integration, and network policies for Kubernetes.
Let’s take a look at the key AKS features we’ll be covering in this article.
Ensuring high availability of deployments is a must for enterprise workloads. Azure availability zones protect resources from data center-level failures by distributing them across one or more data centers in an Azure region.
AKS clusters can also be deployed in availability zones, in which the nodes are deployed across different zones in a region. In case of a data center failure, the workloads deployed in the cluster would continue to run from nodes in a different zone, thereby protecting them from such incidents.
Overview of availability zones for AKS clusters
With identity considered the new security perimeter, customers are now opting to use Azure AD for authentication and authorization of cloud-native deployments. AKS clusters can be integrated with Azure Active Directory so that users can be granted access to namespaces in the cluster or cluster-level resources using their existing Azure AD credentials. This eliminates the need for multiple credentials when deploying and managing workloads in an AKS cluster.
This is of even greater benefit in hybrid cloud deployments, in which on-premises AD credentials are synced to Azure AD. It delivers a consistent, unified experience for authentication and authorization. Figure 1 below shows this high-level AKS authentication flow when integrated with Azure Active Directory.
Figure 1: High-level AKS authentication flow integrated with Azure AD
By default, all pods in an AKS cluster can communicate with each other without any restrictions. However, in production, customers would want to restrict this traffic for security reasons. This can be achieved by implementing network policies in a Kubernetes cluster. Network policies can be used to define a set of rules that allow or deny traffic between pods based on matching labels.
AKS supports two types of network implementations: Kubenet (basic networking) and Azure CNI (advanced networking). Customers can also choose between two types of network policies: Azure (native) or Calico network policies (open source). While Azure network policies are supported only in Azure CNI, Calico is supported in both Kubenet- and Azure CNI-based network implementations.
Following are the prerequisites for the deployment of the AKS cluster:
Azure subscription access: It is recommended that users with contributor rights run the Terraform scripts. During deployment, an additional resource group is created for the AKS nodes. Restricted permissions may lead to deployment failures.
Azure AD server and client application: OpenID Connect is used to integrate Azure Active Directory with the AKS cluster. Two Azure AD applications are required to enable this: a server application and a client application. The server application serves as the endpoint for identity requests, while the client application is used for authentication when users try to access the AKS cluster via the kubectl command. Microsoft offers a step-by-step guide for creating these Azure AD applications.
Terraform usage from Cloud Shell: Azure Cloud Shell has Terraform installed by default in the bash environment. You can use your favorite text editor like vim or use the code editor in Azure Cloud Shell to write the Terraform templates. Refer to Microsoft’s guide to get started with Terraform in Azure Cloud Shell.
To create the templates, Terraform uses HashiCorp Configuration Language (HCL), as it is designed to be both machine friendly and human readable. For a more in-depth understanding of Terraform syntax, refer to the Terraform documentation. The values that change across deployments can be defined as variables and are either provided through a variables file or during runtime when the Terraform templates are applied.
In this section, we’ll describe the relevant modules of the Terraform template to be used to create the cluster.
Note: The Terraform template as well as the variable and output files for this deployment are all available in the GitHub repository.
The following block of Terraform code should be used to create the Azure VNet and subnet, which are required for the Azure CNI network implementation:
resource "azurerm_virtual_network" "demo" {
name = "${var.prefix}-network"
location = azurerm_resource_group.demo.location
resource_group_name = azurerm_resource_group.demo.name
address_space = ["10.1.0.0/16"]
}
resource "azurerm_subnet" "demo" {
name = "${var.prefix}-akssubnet"
virtual_network_name = azurerm_virtual_network.demo.name
resource_group_name = azurerm_resource_group.demo.name
address_prefixes = ["10.1.0.0/22"]
}
var.prefix: A prefix will be defined in the Terraform variable files which is used to differentiate the deployment.
demo: This is the local name which is used by Terraform to reference the defined resources (e.g. Azure VNet and subnet). It can be renamed to suit your use case.
address_space and address_prefixes: This refers to the address space for the VNet and subnet. You can replace the values with your preferred private IP blocks.
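For completeness, the pieces this snippet references but does not show, the prefix variable and the resource group, could look roughly like this (the default value and location are illustrative; the actual definitions live in the repository's variable and template files):
variable "prefix" {
  description = "Prefix used to name and differentiate the deployed resources"
  default     = "cs"
}

resource "azurerm_resource_group" "demo" {
  name     = "${var.prefix}-rg"
  location = "West Europe"
}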
To enable the Azure AD integration we need to provide the server application, client application, and Azure AD tenant details. The following code block should be used in the AKS cluster definition to enable RBAC for the AKS cluster and to use Azure AD for RBAC authentication.
role_based_access_control {
azure_active_directory {
client_app_id = var.client_app_id
server_app_id = var.server_app_id
server_app_secret = var.server_app_secret
tenant_id = var.tenant_id
}
enabled = true
}
var.client_app_id: This variable refers to the client app ID of the Azure AD client application which was mentioned in the prerequisites section.
var.server_app_id: This variable refers to the server app ID of the Azure AD server application which was mentioned in the prerequisites section.
var.server_app_secret: This variable refers to the secret created for the Azure AD server application.
var.tenant_id: This variable refers to the Azure AD tenant ID associated with the subscription where the cluster will be deployed. This value can be obtained from the Azure portal or through the Azure CLI.
The following Terraform code will be used in the AKS cluster definition to enable Calico network policies. Note that this can be configured only during cluster deployment and any changes will require a recreation of the cluster.
network_profile {
network_plugin = "azure"
load_balancer_sku = "standard"
network_policy = "calico"
}
network_plugin: The value should be set to azure
to use CNI networking.
load_balancer_sku: The value should be set to standard
, as we will be using virtual machine scale sets.
network_policy: The value should be set to calico
since we’ll be using Calico network policies.
The following code will be used to configure the node pools and availability zone.
default_node_pool {
name = "default"
node_count = 2
vm_size = "Standard_D2_v2"
type = "VirtualMachineScaleSets"
availability_zones = ["1", "2"]
enable_auto_scaling = true
min_count = 2
max_count = 4
# Required for advanced networking
vnet_subnet_id = azurerm_subnet.demo.id
}
node_count: This refers to the initial amount of nodes to be deployed in the node pool.
vm_size: Standard_D2_v2
is used in this sample; it can be replaced with your preferred SKU.
type: This should be set to VirtualMachineScaleSets
so that the VMs can be distributed across availability zones.
availability_zones: Lists the available zones to be used.
enable_auto_scaling: This should be set to true
to enable autoscaling.
The variables min_count
and max_count
should be set to define the minimum and maximum node count within the node pool. The value here should be between 1 and 100.
Download the Terraform files from the GitHub repository to your Cloud Shell session and edit the configuration parameters in accordance with your AKS cluster deployment requirements. The guidance provided in the previous section can be used to update these values.
1. Clone the repository:
git clone https://github.com/coder-society/terraform-aks-azure.git
Cloning into 'terraform-aks-azure'...
remote: Enumerating objects: 12, done.
remote: Counting objects: 100% (12/12), done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 12 (delta 1), reused 12 (delta 1), pack-reused 0
Unpacking objects: 100% (12/12), done.
Checking connectivity... done.
2. Go into the /terraform
directory and run the terraform init
command to initialize Terraform:
terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/azurerm versions matching "~> 2.0"...
- Installing hashicorp/azurerm v2.28.0...
- Installed hashicorp/azurerm v2.28.0 (signed by HashiCorp)
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
3. Export the Terraform variables to be used during runtime, replacing the placeholders with environment-specific values. You can also define the values in the variables file.
export TF_VAR_prefix=<Environment prefix>
export TF_VAR_client_app_id=<The client app ID of the AKS client application>
export TF_VAR_server_app_id=<The server app ID of the AKS server application>
export TF_VAR_server_app_secret=<The secret created for AKS server application>
export TF_VAR_tenant_id=<The Azure AD tenant id>
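If you prefer the variables-file approach mentioned above, a terraform.tfvars along these lines would achieve the same result; all values below are placeholders to be replaced with your own.
prefix            = "cs"
client_app_id     = "<client app ID of the AKS client application>"
server_app_id     = "<server app ID of the AKS server application>"
server_app_secret = "<secret created for the AKS server application>"
tenant_id         = "<Azure AD tenant ID>"
Keep in mind that a variables file containing the server app secret should not be committed to version control.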
4. Create the Terraform plan by executing terraform plan -out out.plan.
terraform plan -out out.plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# azurerm_kubernetes_cluster.demo will be created
+ resource "azurerm_kubernetes_cluster" "demo" {
+ dns_prefix = "cs-aks"
+ fqdn = (known after apply)
+ id = (known after apply)
+ kube_admin_config = (known after apply)
+ kube_admin_config_raw = (sensitive value)
+ kube_config = (known after apply)
+ kube_config_raw = (sensitive value)
+ kubelet_identity = (known after apply)
+ kubernetes_version = (known after apply)
+ location = "westeurope"
+ name = "cs-aks"
+ node_resource_group = (known after apply)
+ private_cluster_enabled = (known after apply)
+ private_fqdn = (known after apply)
+ private_link_enabled = (known after apply)
+ resource_group_name = "cs-rg"
+ sku_tier = "Free"
+ tags = {
+ "Environment" = "Development"
}
+ addon_profile {
+ aci_connector_linux {
+ enabled = (known after apply)
+ subnet_name = (known after apply)
}
+ azure_policy {
+ enabled = (known after apply)
}
+ http_application_routing {
+ enabled = (known after apply)
+ http_application_routing_zone_name = (known after apply)
}
+ kube_dashboard {
+ enabled = (known after apply)
}
+ oms_agent {
+ enabled = (known after apply)
+ log_analytics_workspace_id = (known after apply)
+ oms_agent_identity = (known after apply)
}
}
+ auto_scaler_profile {
+ balance_similar_node_groups = (known after apply)
+ max_graceful_termination_sec = (known after apply)
+ scale_down_delay_after_add = (known after apply)
+ scale_down_delay_after_delete = (known after apply)
+ scale_down_delay_after_failure = (known after apply)
+ scale_down_unneeded = (known after apply)
+ scale_down_unready = (known after apply)
+ scale_down_utilization_threshold = (known after apply)
+ scan_interval = (known after apply)
}
+ default_node_pool {
+ availability_zones = [
+ "1",
+ "2",
]
+ enable_auto_scaling = true
+ max_count = 4
+ max_pods = (known after apply)
+ min_count = 2
+ name = "default"
+ node_count = 2
+ orchestrator_version = (known after apply)
+ os_disk_size_gb = (known after apply)
+ type = "VirtualMachineScaleSets"
+ vm_size = "Standard_D2_v2"
+ vnet_subnet_id = (known after apply)
}
+ identity {
+ principal_id = (known after apply)
+ tenant_id = (known after apply)
+ type = "SystemAssigned"
}
+ network_profile {
+ dns_service_ip = (known after apply)
+ docker_bridge_cidr = (known after apply)
+ load_balancer_sku = "standard"
+ network_plugin = "azure"
+ network_policy = "calico"
+ outbound_type = "loadBalancer"
+ pod_cidr = (known after apply)
+ service_cidr = (known after apply)
+ load_balancer_profile {
+ effective_outbound_ips = (known after apply)
+ idle_timeout_in_minutes = (known after apply)
+ managed_outbound_ip_count = (known after apply)
+ outbound_ip_address_ids = (known after apply)
+ outbound_ip_prefix_ids = (known after apply)
+ outbound_ports_allocated = (known after apply)
}
}
+ role_based_access_control {
+ enabled = true
+ azure_active_directory {
+ client_app_id = "f9bf8772-aaba-4773-a815-784b31f9ab8b"
+ server_app_id = "fa7775b3-ea31-4e99-92f5-8ed0bac3e6a8"
+ server_app_secret = (sensitive value)
+ tenant_id = "8f55a88a-7752-4e10-9bbb-e847ae93911d"
}
}
+ windows_profile {
+ admin_password = (sensitive value)
+ admin_username = (known after apply)
}
}
# azurerm_resource_group.demo will be created
+ resource "azurerm_resource_group" "demo" {
+ id = (known after apply)
+ location = "westeurope"
+ name = "cs-rg"
}
# azurerm_subnet.demo will be created
+ resource "azurerm_subnet" "demo" {
+ address_prefix = (known after apply)
+ address_prefixes = [
+ "10.1.0.0/22",
]
+ enforce_private_link_endpoint_network_policies = false
+ enforce_private_link_service_network_policies = false
+ id = (known after apply)
+ name = "cs-subnet"
+ resource_group_name = "cs-rg"
+ virtual_network_name = "cs-network"
}
# azurerm_virtual_network.demo will be created
+ resource "azurerm_virtual_network" "demo" {
+ address_space = [
+ "10.1.0.0/16",
]
+ guid = (known after apply)
+ id = (known after apply)
+ location = "westeurope"
+ name = "cs-network"
+ resource_group_name = "cs-rg"
+ subnet = (known after apply)
}
Plan: 4 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: out.plan
To perform exactly these actions, run the following command to apply:
terraform apply "out.plan"
5. Use the terraform apply out.plan command to apply the plan.
Once successfully deployed, the details of the cluster, network, etc. will be shown in the command line:
terraform apply out.plan
azurerm_resource_group.demo: Creating...
azurerm_resource_group.demo: Creation complete after 1s [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg]
azurerm_virtual_network.demo: Creating...
azurerm_virtual_network.demo: Still creating... [10s elapsed]
azurerm_virtual_network.demo: Creation complete after 15s [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg/providers/Microsoft.Network/virtualNetworks/cs-network]
azurerm_subnet.demo: Creating...
azurerm_subnet.demo: Creation complete after 2s [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg/providers/Microsoft.Network/virtualNetworks/cs-network/subnets/cs-subnet]
azurerm_kubernetes_cluster.demo: Creating...
azurerm_kubernetes_cluster.demo: Still creating... [10s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [20s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [30s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [40s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [50s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [1m0s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [1m10s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [1m20s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [1m30s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [1m40s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [1m50s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [2m0s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [2m10s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [2m20s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [2m30s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [2m40s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [2m50s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [3m0s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [3m10s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [3m20s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [3m30s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [3m40s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [3m50s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [4m0s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [4m10s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [4m20s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [4m30s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [4m40s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [4m50s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [5m0s elapsed]
azurerm_kubernetes_cluster.demo: Still creating... [5m10s elapsed]
azurerm_kubernetes_cluster.demo: Creation complete after 5m16s [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourcegroups/cs-rg/providers/Microsoft.ContainerService/managedClusters/cs-aks]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
Outputs:
client_certificate =
kube_config = apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUV5akNDQXJLZ0F3SUJBZ0lSQU01eXFucWJNNmoxekFhbk8vczdIeVV3RFFZSktvWklodmNOQVFFTEJRQXcKRFRFTE1Ba0dBMVVFQXhNQ1kyRXdJQmNOTWpBd09USXlNakEwTWpJeFdoZ1BNakExTURBNU1qSXlNRFV5TWpGYQpNQTB4Q3pBSkJnTlZCQU1UQW1OaE1JSUNJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBZzhBTUlJQ0NnS0NBZ0VBCnc5WlYwd0hORkJiaURUcnZMQ1hJU2JoTHdScFZ1WVowMVJ5QzU0QWo1WUJjNzZyRXZIQVZlRXhWYlRYc0wrdDEKMVJPOG85bVY2UG01UXprUzdPb2JvNEV6ampTY2RBcjUvazJ4eC9BLzk0ZFl2Q20wWHZQaktXSWxiS08yaDBuNQp4VW03OFNsUks4RFVReUtGWmxTMEdXQ0hKbEhnNnpVZU8xMVB5MTQvaXBic1JYdTNFYVVLeWFJczgrekxSK2NNCm5tSU5wcHFPNUVNSzZKZVZKM0V0SWFDaFBNbHJiMVEzNHlZbWNKeVkyQmZaVkVvV2pRL1BMNFNFbGw0T0dxQjgKK3Rlc3I2TDBvOW5LT25LVlB0ZCtvSXllWnQ1QzBiMnJScnFDU1IyR09VZmsvMTV3emFiNTJ6M0JjempuV0VNOApnWWszZlBDU3JvNGE5a0xQVS9Udnk1UnZaUjJsc09mUWk3eGZpNm91dzJQeEkxc1ZPcmJnRWdza2o5Qmc4WnJYCk5GZjJpWlptSFM0djNDM1I4Q25NaHNRSVdiSmlDalhDclZCak1QbzVNS0xzNEF5U1M2dU1MelFtSjhNQWVZQlEKSHJrWEhZa21OeHlGMkhqSVlTcTdjZWFOVHJ1dTh2SFlOT2c3MGM5aGEvakZ0MXczWVl4N3NwbGRSRGpmZHZiQgpaeEtwbWNkUzY3RVNYT0dtTEhwZis1TTZXMVI3UWQwYk1SOGRQdFZJb1NmU2RZSFFLM0FDdUxrd1ZxOWpGMXlnCiswcklWMC9rN0F6bzNnUlVxeFBIV0twcHN2bFhaZCtsK0VqcTRLMnFMRXd4MlFOMDJHL1dGVUhFdGJEUXAvZWYKZGxod3Z0OHp1VklIbXE0ejlsMGZSOU9QaGN6UFpXR0dyWnUrTlQ2cm5RTUNBd0VBQWFNak1DRXdEZ1lEVlIwUApBUUgvQkFRREFnS2tNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnSUJBQjQ0CmRzbExhRzFrSzFHaFZiR1VEbWtWNndsTHJ1a1F3SHc2Vkt2azJ0T0UwMU1Hbng5SkJwVFFIL2FMdWYxUTN0WVAKaVhWSnp1bGsxOWg2MjUrcEs0ekpQcDZSWkhMV3htK2M0a1pLejlwV3F3RERuNmZqTVlKOEdMNVZFOUwvQUdSRgpscEI5ZTZNOVNGY20ra2lMVVlLazg3VG1YaGd4dzFXYXhtaDEwd01DNlZPOGNuZlVRQkJJQkVMVXhNZy9DRE9SCjZjSGFTdU5ETlg0UG80eFF3NnV4c0d1R2xDbHltOUt4Z2pIYjQ5SWp2NnN5clRIcGkrVkxmQ3d4TmdLZUwxZ1cKNURVR3ZoMmpoTVJqNDhyV3FoY3JjUHI4aWtoeVlwOXMwcFJYQVlUTk52SlFzaDhmYllOcTIzbDZwVW16dUV0UwpQS2J2WUJDVmxxa29ualZiRkxIeXJuRWNFbllqdXhwYy94bWYvT09keHhUZzlPaEtFQTRLRTQySFJ2cW1SZER5CkFldVhIcUxvUm54TXF1Z0JxL0tTclM2S0tjQW11eVJWdkhJL21MUlhmY1k1VThCWDBXcUF0N1lrWm54d1JnRkQKQndRcnEvdDJrUkMySSsxR1pUd2d1Y3hyc0VrYlVoVG5DaStVbjNDRXpTbmg5anBtdDBDcklYaDYzeC9LY014egpGM0ZXNWlnZDR4MHNxYk5oK3B4K1VSVUlsUmZlMUxDRWg3dGlraVRHRGtGT05EQXBSQUMycnUrM0I5TlpsR0hZCm9jWS9tcTlpdUtXTUpobjFHeXJzWGZLZXQrakliZzhUNzZzaEora0E4djU3VmdBdlRRSEh1YTg2SHl6d1d2Z0QKQ2ZaZFhpeURvZGlWRXhPNGlGaG00T1dhZld5U0ltSUsrOCs1Z2daZwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://cs-aks-f9e8be99.hcp.westeurope.azmk8s.io:443
name: cs-aks
contexts:
- context:
cluster: cs-aks
user: clusterUser_cs-rg_cs-aks
name: cs-aks
current-context: cs-aks
kind: Config
preferences: {}
users:
- name: clusterUser_cs-rg_cs-aks
user:
auth-provider:
config:
apiserver-id: fa7775b3-ea31-4e99-92f5-8ed0bac3e6a8
client-id: f9bf8772-aaba-4773-a815-784b31f9ab8b
config-mode: "1"
environment: AzurePublicCloud
tenant-id: 8f55a88a-7752-4e10-9bbb-e847ae93911d
name: azure
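The client_certificate and kube_config values shown above are produced by output blocks in the configuration, presumably something along these lines (the actual outputs.tf may differ):
output "client_certificate" {
  value = azurerm_kubernetes_cluster.demo.kube_config.0.client_certificate
}

output "kube_config" {
  value = azurerm_kubernetes_cluster.demo.kube_config_raw
}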
6. Browse to the resource group in the Azure portal to view the cluster and the network which were created by the deployment:
7. Retrieve the admin kubeconfig using the Azure CLI:
az aks get-credentials --resource-group $prefix-rg --name $prefix-aks --admin --overwrite-existing
8. Run the following command to list the nodes and availability zone configuration:
kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"
Name: aks-default-36042037-vmss000000
failure-domain.beta.kubernetes.io/zone=westeurope-1
Name: aks-default-36042037-vmss000001
failure-domain.beta.kubernetes.io/zone=westeurope-2
failure-domain.beta.kubernetes.io/zone is a label on Kubernetes nodes that indicates the availability zone in which each node is deployed. The output shows that the nodes are deployed across two availability zones in West Europe.
1. Create an Azure AD group that will be used for the dev team:
GROUP_ID=$(az ad group create --display-name dev --mail-nickname dev --query objectId -o tsv)
2. Retrieve the resource ID of the AKS cluster
AKS_ID=$(az aks show \
--resource-group $prefix-rg \
--name $prefix-aks \
--query id -o tsv)
3. Create an Azure role assignment so that any member of the dev group can use kubectl to interact with the Kubernetes cluster (a Terraform-based alternative is sketched after these steps).
az role assignment create \
--assignee $GROUP_ID \
--role "Azure Kubernetes Service Cluster User Role" \
--scope $AKS_ID
4. Add yourself to the dev AD group.
USER_ID=$(az ad signed-in-user show --query objectId -o tsv)
az ad group member add --group dev --member-id $USER_ID
5. With the admin kubeconfig, create a development and a production Kubernetes namespace:
kubectl create namespace development
kubectl create namespace production
6. Replace the groupObjectId placeholder with the object ID of the previously created group and apply the rolebinding.yaml file.
sed -i '' "s/groupObjectId/$GROUP_ID/g" rolebinding.yaml
kubectl apply -f rolebinding.yaml
7. Run the following command to get the cluster credentials before testing Azure AD integration.
az aks get-credentials --resource-group $prefix-rg --name $prefix-aks --overwrite-existing
8. Run the following kubectl command to see the Azure AD integration in action:
kubectl get pods --namespace development
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code DSFV9H98W to authenticate.
No resources found in development namespace.
Enter the code in the device login page followed by your Azure AD login credentials:
Note that only users in the dev group will be able to log in through this process.
9. Try to access resources in the production namespace:
kubectl get pods --namespace production
Error from server (Forbidden): pods is forbidden: User "kentaro@codersociety.com" cannot list resource "pods" in API group "" in the namespace "production"
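As a side note, the role assignment created with the Azure CLI in step 3 could also be kept in Terraform. A minimal sketch, assuming the dev group's object ID is supplied through a hypothetical dev_group_object_id variable:
variable "dev_group_object_id" {
  description = "Object ID of the dev Azure AD group (hypothetical input for this sketch)"
}

resource "azurerm_role_assignment" "aks_cluster_user" {
  scope                = azurerm_kubernetes_cluster.demo.id
  role_definition_name = "Azure Kubernetes Service Cluster User Role"
  principal_id         = var.dev_group_object_id
}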
1. Deploy the httpbin service to the development namespace using k8s/httpbin.yaml:
kubectl apply -f httpbin.yaml --namespace development
2. Create a network policy which restricts all inbound access to the deployment using k8s/networkpolicy.yaml. Only pods with the label app: webapp are allowed network access (a Terraform-based alternative to this policy is sketched after these steps).
kubectl apply -f networkpolicy.yaml --namespace development
3. Create a new pod and test access to the httpbin service. From the command prompt of the pod, try to access the httpbin service over port 8000. The access will time out. You can type “exit” to exit and delete the pod after testing.
kubectl run --rm -it --image=alpine frontend --namespace development
If you don't see a command prompt, try pressing enter.
/ # wget --timeout=2 http://httpbin:8000
Connecting to httpbin:8000 (10.0.233.179:8000)
wget: download timed out
/ # exit
4. Create a new test pod, but this time with labels matching the ingress rules. Then run the wget command to check access to the httpbin service over port 8000.
kubectl run --rm -it --image=alpine frontend --labels app=webapp --namespace development
If you don't see a command prompt, try pressing enter.
/ # wget --timeout=2 http://httpbin:8000
Connecting to httpbin:8000 (10.0.233.179:8000)
saving to 'index.html'
index.html 100% |************************************************************************************| 9593 0:00:00 ETA
'index.html' saved
You can see that it's now possible to retrieve the index.html which shows that the pod can access the httpbin service, since the pod labels match the ingress policy.
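The ingress policy applied in step 2 could likewise be managed with Terraform's kubernetes provider instead of kubectl. A rough sketch is shown below; the resource name and the app: httpbin label on the target pods are assumptions, since the actual policy lives in the repository's networkpolicy.yaml.
resource "kubernetes_network_policy" "httpbin_ingress" {
  metadata {
    name      = "httpbin-allow-webapp"
    namespace = "development"
  }

  spec {
    # Target the httpbin pods (label assumed for this sketch)
    pod_selector {
      match_labels = {
        app = "httpbin"
      }
    }

    # Only allow traffic from pods labelled app: webapp
    ingress {
      from {
        pod_selector {
          match_labels = {
            app = "webapp"
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}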
Go into the terraform directory and run terraform destroy. You will be asked whether you really want to delete the resources; confirm by entering yes.
terraform destroy
azurerm_resource_group.demo: Refreshing state... [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg]
azurerm_virtual_network.demo: Refreshing state... [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg/providers/Microsoft.Network/virtualNetworks/cs-network]
azurerm_subnet.demo: Refreshing state... [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg/providers/Microsoft.Network/virtualNetworks/cs-network/subnets/cs-subnet]
azurerm_kubernetes_cluster.demo: Refreshing state... [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourcegroups/cs-rg/providers/Microsoft.ContainerService/managedClusters/cs-aks]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
# azurerm_kubernetes_cluster.demo will be destroyed
- resource "azurerm_kubernetes_cluster" "demo" {
- api_server_authorized_ip_ranges = [] -> null
- dns_prefix = "cs-aks" -> null
- enable_pod_security_policy = false -> null
- fqdn = "cs-aks-f9e8be99.hcp.westeurope.azmk8s.io" -> null
- id = "/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourcegroups/cs-rg/providers/Microsoft.ContainerService/managedClusters/cs-aks" -> null
- kube_admin_config = [
- {
- client_certificate = "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUUvVENDQXVXZ0F3SUJBZ0lSQUxFazBXdFZWb1dFS0Nra21aeGFaRkl3RFFZSktvWklodmNOQVFFTEJRQXcKRFRFTE1Ba0dBMVVFQXhNQ1kyRXdIaGNOTWpBd09USXlNakEwTWpJeFdoY05Nakl3T1RJeU1qQTFNakl4V2pBdwpNUmN3RlFZRFZRUUtFdzV6ZVhOMFpXMDZiV0Z6ZEdWeWN6RVZNQk1HQTFVRUF4TU1iV0Z6ZEdWeVkyeHBaVzUwCk1JSUNJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBZzhBTUlJQ0NnS0NBZ0VBd0xnVWRpZjJ2ZVFraUdXaDVNbS8KUUdNSzJ2MFMxcDFJKzBQTmRVWVNZSko0dWVGNFpQVUZFcEJyMm9WR2txU29QNUIrNHRlY2RTVkgrL1FvaWI2RQpQYVJwTUNrYnhBSFZPZ1RTcGdJWkliQlp3WjRGamJHbXRtS0lSV1RyR25lcUZSOFFMUHlGdG5TODlNVktUdEU2CjZyOWc0ODRJVTJaM3Q1Wlc4UTdHdFBnU2p4VWQrYWtkTHJZMVUyNzU3TEQyZXBsWlA4UVU3bTRJQ3pXWDFQWWIKMTFTQjJyQjhMc1hpYWRQS2gyQW1tV2t2Y2JkVzFrQW5zWnJ3OHQ2elZIbytlUk5OWWpLdHNXczJ4TXFvdVduVQpJR0UwcjRCaDhXbTFDanluSnNGTXk5S056c1FGV3IzM0hieWU1b00zQU1YN0VaQ1JxRlpLWjhaa2NWbTFaaXdTCi9hNjlJYkVTbmYrbGszbkh4QzJFQjdoVTlQc1FvYkFPUU91MUprbWZMaGsxYTF4N1B2Y0lXbm0rTnAzdko1dlQKMk9mcW1uLzJ3VGFwMkUwSlVpWHFjV3h6YVN6bEpBbXJVdkt3TXZZcWtHVmdRdHk4OGZUM0J4NmFVWUxwQXFVRQpXZG1kWGhFN1BaWXlnT1pFWHIvUVJkSW5BcWZLNmFiWEduc3h2QVFPYVFMWTlBRHk3NkNWem9CamhpdHh5bjFzCm4rU3VQK3l4Y3I3Tmp2VUtHK2g2UzlzMm56eDd5Wm9rUENMSXF4Sm5xdTU4UzhkM1lPR0cvTmVTTll2aGhmNkMKVjFWdEdHaWFsTGFqUGNCd0h1cTFuR0U1WEkvaXlWQk5pdGtmMWk5alMrNnFvU2VsbUJyMUV3YmI1OWlvekUxRApXRnloQWZWNWQ3MEx4QnBheDYrc1M5OENBd0VBQWFNMU1ETXdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CTUdBMVVkCkpRUU1NQW9HQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnSUIKQUs4UWJLUFZyL3QwVlRncS8vSG12UmplKzlYVUNUaUgxRXpDTkFRTVVkcFpjcXJWYXhIMlF3ZVM2SVkrRGU3ZApBUUhYMTM1M0JUc3I0L0VNazVaUTJIaUdjMFZCRzRKSE1NYmNkcjRWb0EwdjhiUmxJSFZRQ2E1QWZhOUFRQTFYCjgvT0pFMUVLeWtFU21jQThkQnA0YTh5cGcwbkZFQzNPQlFlcWx1MjFFK2swU3NKT1VScHU3WE4wUVVWV2NnSFcKNFNOWWtzV2JmRkN6ekpCWmthTmdRUnlhZDJVYWNTQ0REM1ZiNWVHYTljTmpYMzgvbkdZUFhQNlQzbzZFQkJnMApxM0ZZaW9TN0lPZ0xuVSt3cld5b2hXeGNyM2ZUK0J5MW5UOG9oeVVFNDVONm4wMldwclVlLzJGUU9ERjZUOWcvCkkxemhWOVlJbW5wcDMvY1BrZldKYjFFK0hTMU04V284dUdCa25xaVpJVzFaM1NJVFVReVlqWUJkY2grNnVSTWgKMEdxakRHNXViZU1sU0pONkNSUHBoMVpzOERLSjN2MjFUdkYwTjJaL3UyTHU2TGdkaWZLWUZvbStmME0vVUJFUQpRNjVsVHhNeUs5MXZzNDRaMWQ3ODNxcG5ab2RaUWo5VTBqWGVtWnZyMFRtWlh2UHhSdHByTWpXaVNDZVZWNjdSCjFldGQ3NWJiMmFldUF1V2VmYVZscmorc0dRUU1IN1JuUUh1WXhOaktNKzRxU2Z3eHhyeXQ0Q0VUcThFT1grRlcKOFllTEsxTlErOXRaTXZTQ1NwdmRZUnV2NlUvdHVDUnZZTUVLMnMwN1NtdjRDZWFqU25hbW53S0JZZUZld0dNNQpIL0VkSVRwekRQQjVoQkFWeEVlb0czU3FENHo4anpQS1daVWpXY3pTbDZTbwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
- client_key = "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS1FJQkFBS0NBZ0VBd0xnVWRpZjJ2ZVFraUdXaDVNbS9RR01LMnYwUzFwMUkrMFBOZFVZU1lKSjR1ZUY0ClpQVUZFcEJyMm9WR2txU29QNUIrNHRlY2RTVkgrL1FvaWI2RVBhUnBNQ2tieEFIVk9nVFNwZ0laSWJCWndaNEYKamJHbXRtS0lSV1RyR25lcUZSOFFMUHlGdG5TODlNVktUdEU2NnI5ZzQ4NElVMlozdDVaVzhRN0d0UGdTanhVZAorYWtkTHJZMVUyNzU3TEQyZXBsWlA4UVU3bTRJQ3pXWDFQWWIxMVNCMnJCOExzWGlhZFBLaDJBbW1Xa3ZjYmRXCjFrQW5zWnJ3OHQ2elZIbytlUk5OWWpLdHNXczJ4TXFvdVduVUlHRTByNEJoOFdtMUNqeW5Kc0ZNeTlLTnpzUUYKV3IzM0hieWU1b00zQU1YN0VaQ1JxRlpLWjhaa2NWbTFaaXdTL2E2OUliRVNuZitsazNuSHhDMkVCN2hVOVBzUQpvYkFPUU91MUprbWZMaGsxYTF4N1B2Y0lXbm0rTnAzdko1dlQyT2ZxbW4vMndUYXAyRTBKVWlYcWNXeHphU3psCkpBbXJVdkt3TXZZcWtHVmdRdHk4OGZUM0J4NmFVWUxwQXFVRVdkbWRYaEU3UFpZeWdPWkVYci9RUmRJbkFxZksKNmFiWEduc3h2QVFPYVFMWTlBRHk3NkNWem9CamhpdHh5bjFzbitTdVAreXhjcjdOanZVS0craDZTOXMybnp4Nwp5Wm9rUENMSXF4Sm5xdTU4UzhkM1lPR0cvTmVTTll2aGhmNkNWMVZ0R0dpYWxMYWpQY0J3SHVxMW5HRTVYSS9pCnlWQk5pdGtmMWk5alMrNnFvU2VsbUJyMUV3YmI1OWlvekUxRFdGeWhBZlY1ZDcwTHhCcGF4NitzUzk4Q0F3RUEKQVFLQ0FnQnZBaG1YTGRIczg1c3ZqZ3RBOUF6Y0U3RFBEM05vZDlUd0Z0QWtPeWFleGdBUVloV3RZWTE0Y2dRTwpMVExIaVZ6NHNFekdjWmZIeXArNk81dVdMRTJVREQ0aTVhcitybWVhTWVqOGdyempNT2VpcFZsaGt2RUtvWnNKCkRlWjJxbk1vRTJxSDN6Vk9NZFFkMGY3Smc2L0NSRmFWSWJxZC82bjU3L2xJaFZCa01YalBQa1N6Nkh2TXlsdlIKSVYySXZ5NWExRFlhaXVIYnJUbW82MGYzL1lOdjkxZU5GcGVSZ1o2M2dxMW9hVFFTcmdvTUlLVSthRm4wN2VEWQpwUHI3TUNjSUt0d3FNakxtdlhFZ3pmTitTYjFNb1hGdG5pL01sUzBaSm5MSjJoSllYWUlkbGIvWDB4Q2k2bUZGCk9sUFdlRFAwbkNlcXBYbmFhT2EyZkF3SFBGLzdEQzlRUkRDNjNTZ2w3YVkxSDMzVyt0Q2JNYjQzMTZLMEJWbVoKaUpNZDduM3FNU0hQc0FPeVl4UDNrazM1RHBCdlBvc1RkbG13cGxHK0psdHY1eDAxdWU4L3lGRWtRTExITkNiVgpiaTBndTlndExpRGtwZ0M4RFo4eWljNTRzTS9oaHNFUTExY0FQT2xNRlJDTzJGZ2cyNFRrVWthZkxHb210TWk4CktDWnJpUWM2VjVVZWdITnVpeWJKN0lUeVhuNGJvU2piTExHZTNGZVlQUWQ5S2VBam03T0hsVXVTemVBZDZrY3cKaG9zV2ViWlB3ZzlNaEVuUllicmxWWEZyREJEWE9lVW12elRGOGc1Q09xK243M3JLSTNlZzRTNTk1eWpYRm1RTwpTRTd3Nk52bnlEQ1RvbDU1THY4MytkaUFNbzlqR0haSzdyRndGcGZMN0J5dG1qVGJzUUtDQVFFQTUzUlhJR1JBCnd6WTR2UWR2WjdnVkc4Z2tqWTNhNTV6UEdsR2dabGtqdWxVVUZyMmJtWWZXMlFudTVsQ2t2RURjQThxSUdOdXcKck1QNHFxSldNM3pZTVQvS05LajhnOHR1ZDhOQVNaK3FCQ0p1WlZBYlJaNGlaQXZYVmVZRTZhYXU5MjB6NVoxZQpVdTZFVmQxeC9XNit2RFp5ZU5XbFIvZmc1MG44K2tiTExteG1Kc2haeEZxRHI1eTNGeEdXK1ZqS2grTjA5Rm5OClhLWkx2KzdXMU1heG1sNkNrbzVtUVZvbW9GU28zTlVNeTRXRU9GREllTlN6bkZ4ZE9WKzVGc2k5UzFOSnZjRWsKZ2tiMUtVL0x5ZHJ1ditmNVZqOTcvOG9Rcm5tWFE5NGhQbW1veTJTckZ3Z0ZiWmE3M0l0S1N5N2I5bjdIa3RJRQpFZ3J2ZlpUcml2L1ZNd0tDQVFFQTFTZ2c4MktmS29uanc4b2VTdDZVZDRhdkJSdUdYREdlcndYZGRsdm1oc2VMCkUxNDdKaVJ5Um9rYTRGMkNMWkI2NGtySURnUkZxaDdxZjJLNVFTQk1QWXFhRTQ3WDlzZGlqZ0lpMUR6SThSVm4KejlyWGxtdzcycjhWN084Q2RPY1ZySEtuSXVXVkJSd2h5clJsbGM1L1h1eG1oaGZwRmN5d3ZEUlgyKzdsbUV0aQpMSlFKVm56NGY0MkdnbGFHWEt5SmwyaGtRdDE4NWZ0WGhUengrell3bGZHWi9qbjlvUE9Qa3JpZU1yb2xqbzJoClBjNktmd2hEZTlKNTA4R3FNNEJ6MEczQjQ0RkNDMWhIMHd4SnY2aE81RFkzL3FGQ25hQ1U0SStlUjBQVzZXY0MKZ1RsRGxMeUtwd0czZzBzcENySCtrREpLYVBqUGVhS1FsTmtianJTV3BRS0NBUUVBbHV3SHUvbGpPV2RyeStiRApRQkNLd3hqWXJPem81c29iU1lBY1pXQ09xWHU4bzY5emZNTlUxeVZoQUJGcHVjOVpKNmV5NHZLdDI1blYxZjRRCjAzWCt5dTViZmNjTEVTMWZsUHhlT1NQQml2eWdtN09HZFBqT1dBcFltWXhwZTZuU3dVZ1Y1UTJlYWRsWnRWdTIKYnBqK0NtQStlSWhuUSt4Z1hMQ2tJdFp5dW95NGQyV0JFMFlxUkNLZVNJNlJzWG15WnJWc2w4RE81akVSaDgvSAppZXNkK0JqVWI1Z25HVW9ka2NKaWNjMENrTnM1QWplNjRQOWhOdjRMVTlRVkxzUXFtcWx1bGlzUkVWb1BscWFQCnJjbnlrSFJFNDNaMTlxN2QvY2NQV1pQSWZaZ01Gc1JIdzdiWlEwSmNzVXlxWHlmcENteFUybW5UZWFoanpiR0QKZlptZ2ZRS0NBUUFIc3RaVjFBY0pvMGROcC93bUdobmtvMEdvL3BDQXZlNE1SanIwYm1kS0VPVHVBeVpCdjJrOQpNUEIra0FJR29VUSs3aEtCcHhmWkNCclNGUCs1NFcrL2ZVVUpWY3hwQmxTQjZvUFZoSWlCWkpPR1IxSW9CYXEzCndOVUs1S3NEQytHVmcrS1RlUlZEeFB
0WGRlS0JZWjdxRDhHNE1CN2tBYXVVY0pPSHh2NFYzUXNqcndrVFRab3cKQ1MyRmdaaUN1bHlSMGx4a3FkazcrVEwxQmZsN2FENmkrOEhqRTdjY1hBK2diZmlRdm5aaXlxeTdMYjJFendpWQo3VVluSnNSOTdiTEJJV1d5VU5YUTBSUnZBKytaODNzOTlOTmE1L29lOVZETE40U3c4RHRQM0wrVGFUME9ueXltCjBZSU9ST1dybERnc2Z4Uis3QldhUUF2V3hHeWhYOVpkQW9JQkFRQ3E1ZmdIZ3VsdU5wSU5Pc045cFRQL2JTYXgKRjVaR1FRQXEyZTNnSW42SEUzRkYzWnlmVGJESkRPbHNIMktoQzlVZ1daT01YUVU1d1N0RVlDZ25ZZkw2azBVQgpnZi95aVh6NXd6bW1CREkzNU1hY25McW1XeVJKS3N2aWpEY0VjTmZ2V1JORlVKK2RvVEFoQ1VnY2dRbTRzWm9PClloMHJKay9CenFvQXBETzhiOGpFMDI2T2dRd0tVUlBlekhUcjdLb0pyckQyWTlDQ0RFd29UdjFQUDI5dDNJcWUKbVdYMjJWcTJqWGdmSDJLcVBVdWk1VGtQZlphM29zR29RaysrQlFFaGxTQXhMKzdSRnFkbjduYkRVQzYvUmM4MgowdlFKS3BiVnlRVDRnbUtEd1BhazNGZGdvQkFHQ3J3NTh5ekRacEFTQXVrZzRpbzVNRHJGbERNR2dSZ1IKLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K"
- cluster_ca_certificate = "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUV5akNDQXJLZ0F3SUJBZ0lSQU01eXFucWJNNmoxekFhbk8vczdIeVV3RFFZSktvWklodmNOQVFFTEJRQXcKRFRFTE1Ba0dBMVVFQXhNQ1kyRXdJQmNOTWpBd09USXlNakEwTWpJeFdoZ1BNakExTURBNU1qSXlNRFV5TWpGYQpNQTB4Q3pBSkJnTlZCQU1UQW1OaE1JSUNJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBZzhBTUlJQ0NnS0NBZ0VBCnc5WlYwd0hORkJiaURUcnZMQ1hJU2JoTHdScFZ1WVowMVJ5QzU0QWo1WUJjNzZyRXZIQVZlRXhWYlRYc0wrdDEKMVJPOG85bVY2UG01UXprUzdPb2JvNEV6ampTY2RBcjUvazJ4eC9BLzk0ZFl2Q20wWHZQaktXSWxiS08yaDBuNQp4VW03OFNsUks4RFVReUtGWmxTMEdXQ0hKbEhnNnpVZU8xMVB5MTQvaXBic1JYdTNFYVVLeWFJczgrekxSK2NNCm5tSU5wcHFPNUVNSzZKZVZKM0V0SWFDaFBNbHJiMVEzNHlZbWNKeVkyQmZaVkVvV2pRL1BMNFNFbGw0T0dxQjgKK3Rlc3I2TDBvOW5LT25LVlB0ZCtvSXllWnQ1QzBiMnJScnFDU1IyR09VZmsvMTV3emFiNTJ6M0JjempuV0VNOApnWWszZlBDU3JvNGE5a0xQVS9Udnk1UnZaUjJsc09mUWk3eGZpNm91dzJQeEkxc1ZPcmJnRWdza2o5Qmc4WnJYCk5GZjJpWlptSFM0djNDM1I4Q25NaHNRSVdiSmlDalhDclZCak1QbzVNS0xzNEF5U1M2dU1MelFtSjhNQWVZQlEKSHJrWEhZa21OeHlGMkhqSVlTcTdjZWFOVHJ1dTh2SFlOT2c3MGM5aGEvakZ0MXczWVl4N3NwbGRSRGpmZHZiQgpaeEtwbWNkUzY3RVNYT0dtTEhwZis1TTZXMVI3UWQwYk1SOGRQdFZJb1NmU2RZSFFLM0FDdUxrd1ZxOWpGMXlnCiswcklWMC9rN0F6bzNnUlVxeFBIV0twcHN2bFhaZCtsK0VqcTRLMnFMRXd4MlFOMDJHL1dGVUhFdGJEUXAvZWYKZGxod3Z0OHp1VklIbXE0ejlsMGZSOU9QaGN6UFpXR0dyWnUrTlQ2cm5RTUNBd0VBQWFNak1DRXdEZ1lEVlIwUApBUUgvQkFRREFnS2tNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnSUJBQjQ0CmRzbExhRzFrSzFHaFZiR1VEbWtWNndsTHJ1a1F3SHc2Vkt2azJ0T0UwMU1Hbng5SkJwVFFIL2FMdWYxUTN0WVAKaVhWSnp1bGsxOWg2MjUrcEs0ekpQcDZSWkhMV3htK2M0a1pLejlwV3F3RERuNmZqTVlKOEdMNVZFOUwvQUdSRgpscEI5ZTZNOVNGY20ra2lMVVlLazg3VG1YaGd4dzFXYXhtaDEwd01DNlZPOGNuZlVRQkJJQkVMVXhNZy9DRE9SCjZjSGFTdU5ETlg0UG80eFF3NnV4c0d1R2xDbHltOUt4Z2pIYjQ5SWp2NnN5clRIcGkrVkxmQ3d4TmdLZUwxZ1cKNURVR3ZoMmpoTVJqNDhyV3FoY3JjUHI4aWtoeVlwOXMwcFJYQVlUTk52SlFzaDhmYllOcTIzbDZwVW16dUV0UwpQS2J2WUJDVmxxa29ualZiRkxIeXJuRWNFbllqdXhwYy94bWYvT09keHhUZzlPaEtFQTRLRTQySFJ2cW1SZER5CkFldVhIcUxvUm54TXF1Z0JxL0tTclM2S0tjQW11eVJWdkhJL21MUlhmY1k1VThCWDBXcUF0N1lrWm54d1JnRkQKQndRcnEvdDJrUkMySSsxR1pUd2d1Y3hyc0VrYlVoVG5DaStVbjNDRXpTbmg5anBtdDBDcklYaDYzeC9LY014egpGM0ZXNWlnZDR4MHNxYk5oK3B4K1VSVUlsUmZlMUxDRWg3dGlraVRHRGtGT05EQXBSQUMycnUrM0I5TlpsR0hZCm9jWS9tcTlpdUtXTUpobjFHeXJzWGZLZXQrakliZzhUNzZzaEora0E4djU3VmdBdlRRSEh1YTg2SHl6d1d2Z0QKQ2ZaZFhpeURvZGlWRXhPNGlGaG00T1dhZld5U0ltSUsrOCs1Z2daZwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
- host = "https://cs-aks-f9e8be99.hcp.westeurope.azmk8s.io:443"
- password = "15f169a920129ead802a0de7c5be9500abf964051850b652acf411ab96e587c4e9a9255b155dc56225245f84bcacfab5682d74b60bb097716fca8a14431e8c5e"
- username = "clusterAdmin_cs-rg_cs-aks"
},
] -> null
- kube_admin_config_raw = (sensitive value)
- kube_config = [
- {
- client_certificate = ""
- client_key = ""
- cluster_ca_certificate = "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUV5akNDQXJLZ0F3SUJBZ0lSQU01eXFucWJNNmoxekFhbk8vczdIeVV3RFFZSktvWklodmNOQVFFTEJRQXcKRFRFTE1Ba0dBMVVFQXhNQ1kyRXdJQmNOTWpBd09USXlNakEwTWpJeFdoZ1BNakExTURBNU1qSXlNRFV5TWpGYQpNQTB4Q3pBSkJnTlZCQU1UQW1OaE1JSUNJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBZzhBTUlJQ0NnS0NBZ0VBCnc5WlYwd0hORkJiaURUcnZMQ1hJU2JoTHdScFZ1WVowMVJ5QzU0QWo1WUJjNzZyRXZIQVZlRXhWYlRYc0wrdDEKMVJPOG85bVY2UG01UXprUzdPb2JvNEV6ampTY2RBcjUvazJ4eC9BLzk0ZFl2Q20wWHZQaktXSWxiS08yaDBuNQp4VW03OFNsUks4RFVReUtGWmxTMEdXQ0hKbEhnNnpVZU8xMVB5MTQvaXBic1JYdTNFYVVLeWFJczgrekxSK2NNCm5tSU5wcHFPNUVNSzZKZVZKM0V0SWFDaFBNbHJiMVEzNHlZbWNKeVkyQmZaVkVvV2pRL1BMNFNFbGw0T0dxQjgKK3Rlc3I2TDBvOW5LT25LVlB0ZCtvSXllWnQ1QzBiMnJScnFDU1IyR09VZmsvMTV3emFiNTJ6M0JjempuV0VNOApnWWszZlBDU3JvNGE5a0xQVS9Udnk1UnZaUjJsc09mUWk3eGZpNm91dzJQeEkxc1ZPcmJnRWdza2o5Qmc4WnJYCk5GZjJpWlptSFM0djNDM1I4Q25NaHNRSVdiSmlDalhDclZCak1QbzVNS0xzNEF5U1M2dU1MelFtSjhNQWVZQlEKSHJrWEhZa21OeHlGMkhqSVlTcTdjZWFOVHJ1dTh2SFlOT2c3MGM5aGEvakZ0MXczWVl4N3NwbGRSRGpmZHZiQgpaeEtwbWNkUzY3RVNYT0dtTEhwZis1TTZXMVI3UWQwYk1SOGRQdFZJb1NmU2RZSFFLM0FDdUxrd1ZxOWpGMXlnCiswcklWMC9rN0F6bzNnUlVxeFBIV0twcHN2bFhaZCtsK0VqcTRLMnFMRXd4MlFOMDJHL1dGVUhFdGJEUXAvZWYKZGxod3Z0OHp1VklIbXE0ejlsMGZSOU9QaGN6UFpXR0dyWnUrTlQ2cm5RTUNBd0VBQWFNak1DRXdEZ1lEVlIwUApBUUgvQkFRREFnS2tNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnSUJBQjQ0CmRzbExhRzFrSzFHaFZiR1VEbWtWNndsTHJ1a1F3SHc2Vkt2azJ0T0UwMU1Hbng5SkJwVFFIL2FMdWYxUTN0WVAKaVhWSnp1bGsxOWg2MjUrcEs0ekpQcDZSWkhMV3htK2M0a1pLejlwV3F3RERuNmZqTVlKOEdMNVZFOUwvQUdSRgpscEI5ZTZNOVNGY20ra2lMVVlLazg3VG1YaGd4dzFXYXhtaDEwd01DNlZPOGNuZlVRQkJJQkVMVXhNZy9DRE9SCjZjSGFTdU5ETlg0UG80eFF3NnV4c0d1R2xDbHltOUt4Z2pIYjQ5SWp2NnN5clRIcGkrVkxmQ3d4TmdLZUwxZ1cKNURVR3ZoMmpoTVJqNDhyV3FoY3JjUHI4aWtoeVlwOXMwcFJYQVlUTk52SlFzaDhmYllOcTIzbDZwVW16dUV0UwpQS2J2WUJDVmxxa29ualZiRkxIeXJuRWNFbllqdXhwYy94bWYvT09keHhUZzlPaEtFQTRLRTQySFJ2cW1SZER5CkFldVhIcUxvUm54TXF1Z0JxL0tTclM2S0tjQW11eVJWdkhJL21MUlhmY1k1VThCWDBXcUF0N1lrWm54d1JnRkQKQndRcnEvdDJrUkMySSsxR1pUd2d1Y3hyc0VrYlVoVG5DaStVbjNDRXpTbmg5anBtdDBDcklYaDYzeC9LY014egpGM0ZXNWlnZDR4MHNxYk5oK3B4K1VSVUlsUmZlMUxDRWg3dGlraVRHRGtGT05EQXBSQUMycnUrM0I5TlpsR0hZCm9jWS9tcTlpdUtXTUpobjFHeXJzWGZLZXQrakliZzhUNzZzaEora0E4djU3VmdBdlRRSEh1YTg2SHl6d1d2Z0QKQ2ZaZFhpeURvZGlWRXhPNGlGaG00T1dhZld5U0ltSUsrOCs1Z2daZwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
- host = "https://cs-aks-f9e8be99.hcp.westeurope.azmk8s.io:443"
- password = ""
- username = "clusterUser_cs-rg_cs-aks"
},
] -> null
- kube_config_raw = (sensitive value)
- kubelet_identity = [
- {
- client_id = "fdf74b6c-68fa-4f69-b379-30375026cfee"
- object_id = "4ac0dde7-d3be-4975-b7a2-9b455cb80bbd"
- user_assigned_identity_id = "/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourcegroups/MC_cs-rg_cs-aks_westeurope/providers/Microsoft.ManagedIdentity/userAssignedIdentities/cs-aks-agentpool"
},
] -> null
- kubernetes_version = "1.17.11" -> null
- location = "westeurope" -> null
- name = "cs-aks" -> null
- node_resource_group = "MC_cs-rg_cs-aks_westeurope" -> null
- private_cluster_enabled = false -> null
- private_link_enabled = false -> null
- resource_group_name = "cs-rg" -> null
- sku_tier = "Free" -> null
- tags = {
- "Environment" = "Development"
} -> null
- addon_profile {
- aci_connector_linux {
- enabled = false -> null
}
- azure_policy {
- enabled = false -> null
}
- http_application_routing {
- enabled = false -> null
}
- kube_dashboard {
- enabled = false -> null
}
- oms_agent {
- enabled = false -> null
- oms_agent_identity = [] -> null
}
}
- auto_scaler_profile {
- balance_similar_node_groups = false -> null
- max_graceful_termination_sec = "600" -> null
- scale_down_delay_after_add = "10m" -> null
- scale_down_delay_after_delete = "10s" -> null
- scale_down_delay_after_failure = "3m" -> null
- scale_down_unneeded = "10m" -> null
- scale_down_unready = "20m" -> null
- scale_down_utilization_threshold = "0.5" -> null
- scan_interval = "10s" -> null
}
- default_node_pool {
- availability_zones = [
- "1",
- "2",
] -> null
- enable_auto_scaling = true -> null
- enable_node_public_ip = false -> null
- max_count = 4 -> null
- max_pods = 30 -> null
- min_count = 2 -> null
- name = "default" -> null
- node_count = 2 -> null
- node_labels = {} -> null
- node_taints = [] -> null
- orchestrator_version = "1.17.11" -> null
- os_disk_size_gb = 128 -> null
- tags = {} -> null
- type = "VirtualMachineScaleSets" -> null
- vm_size = "Standard_D2_v2" -> null
- vnet_subnet_id = "/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg/providers/Microsoft.Network/virtualNetworks/cs-network/subnets/cs-subnet" -> null
}
- identity {
- principal_id = "72559f39-68db-424c-b18c-b7cadb893314" -> null
- tenant_id = "8f55a88a-7752-4e10-9bbb-e847ae93911d" -> null
- type = "SystemAssigned" -> null
}
- network_profile {
- dns_service_ip = "10.0.0.10" -> null
- docker_bridge_cidr = "172.17.0.1/16" -> null
- load_balancer_sku = "Standard" -> null
- network_plugin = "azure" -> null
- network_policy = "calico" -> null
- outbound_type = "loadBalancer" -> null
- service_cidr = "10.0.0.0/16" -> null
- load_balancer_profile {
- effective_outbound_ips = [
- "/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/MC_cs-rg_cs-aks_westeurope/providers/Microsoft.Network/publicIPAddresses/490fd61a-dc70-4104-bed3-533a69c723f3",
] -> null
- idle_timeout_in_minutes = 0 -> null
- managed_outbound_ip_count = 1 -> null
- outbound_ip_address_ids = [] -> null
- outbound_ip_prefix_ids = [] -> null
- outbound_ports_allocated = 0 -> null
}
}
- role_based_access_control {
- enabled = true -> null
- azure_active_directory {
- admin_group_object_ids = [] -> null
- client_app_id = "f9bf8772-aaba-4773-a815-784b31f9ab8b" -> null
- managed = false -> null
- server_app_id = "fa7775b3-ea31-4e99-92f5-8ed0bac3e6a8" -> null
- server_app_secret = (sensitive value)
- tenant_id = "8f55a88a-7752-4e10-9bbb-e847ae93911d" -> null
}
}
- windows_profile {
- admin_username = "azureuser" -> null
}
}
# azurerm_resource_group.demo will be destroyed
- resource "azurerm_resource_group" "demo" {
- id = "/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg" -> null
- location = "westeurope" -> null
- name = "cs-rg" -> null
- tags = {} -> null
}
# azurerm_subnet.demo will be destroyed
- resource "azurerm_subnet" "demo" {
- address_prefix = "10.1.0.0/22" -> null
- address_prefixes = [
- "10.1.0.0/22",
] -> null
- enforce_private_link_endpoint_network_policies = false -> null
- enforce_private_link_service_network_policies = false -> null
- id = "/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg/providers/Microsoft.Network/virtualNetworks/cs-network/subnets/cs-subnet" -> null
- name = "cs-subnet" -> null
- resource_group_name = "cs-rg" -> null
- service_endpoints = [] -> null
- virtual_network_name = "cs-network" -> null
}
# azurerm_virtual_network.demo will be destroyed
- resource "azurerm_virtual_network" "demo" {
- address_space = [
- "10.1.0.0/16",
] -> null
- dns_servers = [] -> null
- guid = "89184266-b65e-423f-a0af-12ed8c2131f2" -> null
- id = "/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg/providers/Microsoft.Network/virtualNetworks/cs-network" -> null
- location = "westeurope" -> null
- name = "cs-network" -> null
- resource_group_name = "cs-rg" -> null
- subnet = [
- {
- address_prefix = "10.1.0.0/22"
- id = "/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg/providers/Microsoft.Network/virtualNetworks/cs-network/subnets/cs-subnet"
- name = "cs-subnet"
- security_group = ""
},
] -> null
- tags = {} -> null
}
Plan: 0 to add, 0 to change, 4 to destroy.
Changes to Outputs:
- kube_config = <<~EOT
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUV5akNDQXJLZ0F3SUJBZ0lSQU01eXFucWJNNmoxekFhbk8vczdIeVV3RFFZSktvWklodmNOQVFFTEJRQXcKRFRFTE1Ba0dBMVVFQXhNQ1kyRXdJQmNOTWpBd09USXlNakEwTWpJeFdoZ1BNakExTURBNU1qSXlNRFV5TWpGYQpNQTB4Q3pBSkJnTlZCQU1UQW1OaE1JSUNJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBZzhBTUlJQ0NnS0NBZ0VBCnc5WlYwd0hORkJiaURUcnZMQ1hJU2JoTHdScFZ1WVowMVJ5QzU0QWo1WUJjNzZyRXZIQVZlRXhWYlRYc0wrdDEKMVJPOG85bVY2UG01UXprUzdPb2JvNEV6ampTY2RBcjUvazJ4eC9BLzk0ZFl2Q20wWHZQaktXSWxiS08yaDBuNQp4VW03OFNsUks4RFVReUtGWmxTMEdXQ0hKbEhnNnpVZU8xMVB5MTQvaXBic1JYdTNFYVVLeWFJczgrekxSK2NNCm5tSU5wcHFPNUVNSzZKZVZKM0V0SWFDaFBNbHJiMVEzNHlZbWNKeVkyQmZaVkVvV2pRL1BMNFNFbGw0T0dxQjgKK3Rlc3I2TDBvOW5LT25LVlB0ZCtvSXllWnQ1QzBiMnJScnFDU1IyR09VZmsvMTV3emFiNTJ6M0JjempuV0VNOApnWWszZlBDU3JvNGE5a0xQVS9Udnk1UnZaUjJsc09mUWk3eGZpNm91dzJQeEkxc1ZPcmJnRWdza2o5Qmc4WnJYCk5GZjJpWlptSFM0djNDM1I4Q25NaHNRSVdiSmlDalhDclZCak1QbzVNS0xzNEF5U1M2dU1MelFtSjhNQWVZQlEKSHJrWEhZa21OeHlGMkhqSVlTcTdjZWFOVHJ1dTh2SFlOT2c3MGM5aGEvakZ0MXczWVl4N3NwbGRSRGpmZHZiQgpaeEtwbWNkUzY3RVNYT0dtTEhwZis1TTZXMVI3UWQwYk1SOGRQdFZJb1NmU2RZSFFLM0FDdUxrd1ZxOWpGMXlnCiswcklWMC9rN0F6bzNnUlVxeFBIV0twcHN2bFhaZCtsK0VqcTRLMnFMRXd4MlFOMDJHL1dGVUhFdGJEUXAvZWYKZGxod3Z0OHp1VklIbXE0ejlsMGZSOU9QaGN6UFpXR0dyWnUrTlQ2cm5RTUNBd0VBQWFNak1DRXdEZ1lEVlIwUApBUUgvQkFRREFnS2tNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnSUJBQjQ0CmRzbExhRzFrSzFHaFZiR1VEbWtWNndsTHJ1a1F3SHc2Vkt2azJ0T0UwMU1Hbng5SkJwVFFIL2FMdWYxUTN0WVAKaVhWSnp1bGsxOWg2MjUrcEs0ekpQcDZSWkhMV3htK2M0a1pLejlwV3F3RERuNmZqTVlKOEdMNVZFOUwvQUdSRgpscEI5ZTZNOVNGY20ra2lMVVlLazg3VG1YaGd4dzFXYXhtaDEwd01DNlZPOGNuZlVRQkJJQkVMVXhNZy9DRE9SCjZjSGFTdU5ETlg0UG80eFF3NnV4c0d1R2xDbHltOUt4Z2pIYjQ5SWp2NnN5clRIcGkrVkxmQ3d4TmdLZUwxZ1cKNURVR3ZoMmpoTVJqNDhyV3FoY3JjUHI4aWtoeVlwOXMwcFJYQVlUTk52SlFzaDhmYllOcTIzbDZwVW16dUV0UwpQS2J2WUJDVmxxa29ualZiRkxIeXJuRWNFbllqdXhwYy94bWYvT09keHhUZzlPaEtFQTRLRTQySFJ2cW1SZER5CkFldVhIcUxvUm54TXF1Z0JxL0tTclM2S0tjQW11eVJWdkhJL21MUlhmY1k1VThCWDBXcUF0N1lrWm54d1JnRkQKQndRcnEvdDJrUkMySSsxR1pUd2d1Y3hyc0VrYlVoVG5DaStVbjNDRXpTbmg5anBtdDBDcklYaDYzeC9LY014egpGM0ZXNWlnZDR4MHNxYk5oK3B4K1VSVUlsUmZlMUxDRWg3dGlraVRHRGtGT05EQXBSQUMycnUrM0I5TlpsR0hZCm9jWS9tcTlpdUtXTUpobjFHeXJzWGZLZXQrakliZzhUNzZzaEora0E4djU3VmdBdlRRSEh1YTg2SHl6d1d2Z0QKQ2ZaZFhpeURvZGlWRXhPNGlGaG00T1dhZld5U0ltSUsrOCs1Z2daZwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://cs-aks-f9e8be99.hcp.westeurope.azmk8s.io:443
name: cs-aks
contexts:
- context:
cluster: cs-aks
user: clusterUser_cs-rg_cs-aks
name: cs-aks
current-context: cs-aks
kind: Config
preferences: {}
users:
- name: clusterUser_cs-rg_cs-aks
user:
auth-provider:
config:
apiserver-id: fa7775b3-ea31-4e99-92f5-8ed0bac3e6a8
client-id: f9bf8772-aaba-4773-a815-784b31f9ab8b
config-mode: "1"
environment: AzurePublicCloud
tenant-id: 8f55a88a-7752-4e10-9bbb-e847ae93911d
name: azure
EOT -> null
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
azurerm_kubernetes_cluster.demo: Destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourcegroups/cs-rg/providers/Microsoft.ContainerService/managedClusters/cs-aks]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 10s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 20s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 30s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 40s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 50s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 1m0s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 1m10s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 1m20s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 1m30s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 1m40s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 1m50s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 2m0s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 2m10s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 2m20s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 2m30s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 2m40s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 2m50s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 3m0s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 3m10s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 3m20s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 3m30s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 3m40s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 3m50s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 4m0s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 4m10s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 4m20s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 4m30s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 4m40s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 4m50s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 5m0s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 5m10s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 5m20s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 5m30s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 5m40s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 5m50s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 6m0s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 6m10s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 6m20s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 6m30s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 6m40s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 6m50s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 7m0s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 7m10s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 7m20s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 7m30s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 7m40s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 7m50s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 8m0s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 8m10s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 8m20s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 8m30s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 8m40s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 8m50s elapsed]
azurerm_kubernetes_cluster.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...ontainerService/managedClusters/cs-aks, 9m0s elapsed]
azurerm_kubernetes_cluster.demo: Destruction complete after 9m5s
azurerm_subnet.demo: Destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg/providers/Microsoft.Network/virtualNetworks/cs-network/subnets/cs-subnet]
azurerm_subnet.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...lNetworks/cs-network/subnets/cs-subnet, 10s elapsed]
azurerm_subnet.demo: Destruction complete after 11s
azurerm_virtual_network.demo: Destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg/providers/Microsoft.Network/virtualNetworks/cs-network]
azurerm_virtual_network.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-...oft.Network/virtualNetworks/cs-network, 10s elapsed]
azurerm_virtual_network.demo: Destruction complete after 11s
azurerm_resource_group.demo: Destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg]
azurerm_resource_group.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg, 10s elapsed]
azurerm_resource_group.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg, 20s elapsed]
azurerm_resource_group.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg, 30s elapsed]
azurerm_resource_group.demo: Still destroying... [id=/subscriptions/a7a456e9-0307-4196-b786-5a33ce52b5fd/resourceGroups/cs-rg, 40s elapsed]
azurerm_resource_group.demo: Destruction complete after 48s
Destroy complete! Resources: 4 destroyed.
Availability zones, Azure AD integration, and Calico network policies all help to achieve high availability, seamless identity management, and advanced network traffic management for applications deployed in AKS.
Availability zones help protect your workloads from Azure data center failures and ensure production system resiliency. Azure AD integration is crucial for unifying the identity management of the cluster, as customers can continue to leverage their investments in Azure AD for managing AKS workloads as well. Calico network policies help enhance the security posture of line-of-business applications deployed in AKS by ensuring that only legitimate traffic reaches your workloads.
Original article sourced at: https://codersociety.com
1670428020
Hi all, in this blog we are going to create an AWS SNS (Simple Notification Service) topic using Terraform. We will create the SNS topic and then add a topic subscription to it, all with Terraform.
First, we provide the provider details. Here I am going with the AWS provider in the us-east-1 region.
provider "aws" {
region = "us-east-1"
}
We are going to use the Terraform resource aws_sns_topic to create a topic with a name of your choice. In this blog I am going with a very simple SNS topic and no extra optional arguments. In the coming blogs we will look at it in more detail.
resource "aws_sns_topic" "user_updates" {
name = "test-2"
}
As you can see above, I am using the name test-2; you can choose any name you like. This will create a standard topic.
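Optionally, you could add an output block to surface the new topic's ARN after apply; this is just a convenience and not required for the rest of the setup:
output "sns_topic_arn" {
  value = aws_sns_topic.user_updates.arn
}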
After applying this code, you can see in the AWS console that the SNS topic has been created, but the topic subscription is still missing. With the next resource, we are going to create the subscription as well.
Now we will use the aws_sns_topic_subscription resource to add a subscription to the topic. Here I am using email as the protocol; SNS also supports protocols such as sqs, lambda, http, https, and sms.
Now we need to know the ARN of our topic in order to tell this resource which topic the subscription belongs to. For that we are going to use a data source.
data "aws_sns_topic" "example" {
name = "test-2"
}
resource "aws_sns_topic_subscription" "user_updates_sqs_target" {
topic_arn = data.aws_sns_topic.example.arn
protocol = "email"
endpoint = "apriyadarshi407@gmail.com"
}
The data source will fetch the ARN of the newly created SNS topic, and that value is then used in the topic_arn argument as shown in the code above.
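Since the topic and the subscription live in the same Terraform configuration in this example, you could also skip the data source and reference the resource attribute directly; a minimal sketch (the endpoint address is a placeholder):
resource "aws_sns_topic_subscription" "user_updates_email_target" {
  # Direct reference to the topic defined above, no data source needed
  topic_arn = aws_sns_topic.user_updates.arn
  protocol  = "email"
  endpoint  = "you@example.com"
}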
Now you will receive an email at the specified endpoint asking you to confirm the subscription. Approve it, refresh your AWS console, and you will see that the subscription has been added successfully.
In this blog we learned how to create AWS SNS resources using Terraform. We created a topic, grabbed its ARN using a data source, and finally created a subscription to that topic.
To know more about SNS visit https://aws.amazon.com/sns/
Visit https://blog.knoldus.com/tag/aws/ for more blogs on AWS
Original article source at: https://blog.knoldus.com/