Stack Up
Stack Up is a simple deployment tool that runs a given set of commands on multiple hosts in parallel. It reads a Supfile, a YAML configuration file that defines networks (groups of hosts), commands, and targets.
Demo
Note: Demo is based on this example Supfile.
Installation
$ go get -u github.com/pressly/sup/cmd/sup
Usage
$ sup [OPTIONS] NETWORK COMMAND [...]
Option | Description
---|---
-f Supfile | Custom path to Supfile
-e, --env=[] | Set environment variables
--only REGEXP | Filter hosts matching regexp
--except REGEXP | Filter out hosts matching regexp
--debug, -D | Enable debug/verbose mode
--disable-prefix | Disable hostname prefix
--help, -h | Show help/usage
--version, -v | Print version
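For example, the environment and host-filter options can be combined in a single invocation. The command below is a hypothetical example that assumes the echo command defined in the example Supfile later in this document:
$ sup -e FOO=bar --only api1 production echo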
Network
A group of hosts.
# Supfile
networks:
production:
hosts:
- api1.example.com
- api2.example.com
- api3.example.com
staging:
# fetch dynamic list of hosts
inventory: curl http://example.com/latest/meta-data/hostname
$ sup production COMMAND
will run COMMAND on the api1, api2, and api3 hosts in parallel.
Command
A shell command (or set of commands) to be run remotely.
# Supfile
commands:
restart:
desc: Restart example Docker container
run: sudo docker restart example
tail-logs:
desc: Watch tail of Docker logs from all hosts
run: sudo docker logs --tail=20 -f example
$ sup staging restart
will restart all staging Docker containers in parallel.
$ sup production tail-logs
will tail Docker logs from all production containers in parallel.
serial: N
constrains a command to run on at most N hosts at a time. Rolling updates for free!
# Supfile
commands:
restart:
desc: Restart example Docker container
run: sudo docker restart example
serial: 2
$ sup production restart
will restart all Docker containers, two at a time at maximum.
once: true
constrains a command to run on one host only. Useful for one-time tasks.
# Supfile
commands:
build:
desc: Build Docker image and push to registry
run: sudo docker build -t image:latest . && sudo docker push image:latest
once: true # one host only
pull:
desc: Pull latest Docker image from registry
run: sudo docker pull image:latest
$ sup production build pull
will build Docker image on one production host only and spread it to all hosts.
Local command
Runs a command always on localhost.
# Supfile
commands:
prepare:
desc: Prepare to upload
local: npm run build
Upload command
Uploads files/directories to all remote hosts. Uses tar under the hood.
# Supfile
commands:
upload:
desc: Upload dist files to all hosts
upload:
- src: ./dist
dst: /tmp/
Do you want to interact with multiple hosts at once? Sure!
# Supfile
commands:
bash:
desc: Interactive Bash on all hosts
stdin: true
run: bash
$ sup production bash
#
# type in commands and see output from all hosts!
# ^C
Passing prepared commands to all hosts:
$ echo 'sudo apt-get update -y' | sup production bash
# or:
$ sup production bash <<< 'sudo apt-get update -y'
# or:
$ cat <<EOF | sup production bash
sudo apt-get update -y
date
uname -a
EOF
# Supfile
commands:
exec:
desc: Exec into Docker container on all hosts
stdin: true
run: sudo docker exec -i $CONTAINER bash
$ sup production exec
ps aux
strace -p 1 # trace system calls and signals on all your production hosts
A target is an alias for multiple commands. Each command will be run on all hosts in parallel; sup will check the return status from all hosts, and run subsequent commands on success only (thus any error on any host will interrupt the process).
# Supfile
targets:
deploy:
- build
- pull
- migrate-db-up
- stop-rm-run
- health
- slack-notify
- airbrake-notify
$ sup production deploy
is equivalent to
$ sup production build pull migrate-db-up stop-rm-run health slack-notify airbrake-notify
Supfile
See example Supfile.
# Supfile
---
version: 0.4
# Global environment variables
env:
NAME: api
IMAGE: example/api
networks:
local:
hosts:
- localhost
staging:
hosts:
- stg1.example.com
production:
hosts:
- api1.example.com
- api2.example.com
commands:
echo:
desc: Print some env vars
run: echo $NAME $IMAGE $SUP_NETWORK
date:
desc: Print OS name and current date/time
run: uname -a; date
targets:
all:
- echo
- date
Default environment variables available in Supfile:
- $SUP_HOST - Current host.
- $SUP_NETWORK - Current network.
- $SUP_USER - User who invoked the sup command.
- $SUP_TIME - Date/time of the sup command invocation.
- $SUP_ENV - Environment variables provided on sup command invocation. You can pass $SUP_ENV to other sup or docker commands in your Supfile.
Running sup from Supfile
Supfile doesn't let you import another Supfile. Instead, it lets you run sup as a sub-process from inside your Supfile. This is how you can structure larger projects:
./Supfile
./database/Supfile
./services/scheduler/Supfile
The top-level Supfile calls sup with the Supfiles from sub-projects:
restart-scheduler:
desc: Restart scheduler
local: >
sup -f ./services/scheduler/Supfile $SUP_ENV $SUP_NETWORK restart
db-up:
desc: Migrate database
local: >
sup -f ./database/Supfile $SUP_ENV $SUP_NETWORK up
Common SSH Problem
If for some reason sup doesn't connect and you get the following error:
connecting to clients failed: connecting to remote host failed: Connect("myserver@xxx.xxx.xxx.xxx"): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
it means that your ssh-agent doesn't have access to your public and private keys. To fix this issue, follow the instructions below.
First, make sure ssh-agent is running, then list the keys it currently has loaded:
ssh-add -l
If you see something like "The agent has no identities.", you need to manually add your key to ssh-agent. To do that, run the following command:
ssh-add ~/.ssh/id_rsa
You should now be able to use sup with your SSH key.
Development
Fork it, hack it...
$ make build
Create a new Pull Request.
We'll be happy to review & accept new Pull Requests!
Author: Pressly
Source Code: https://github.com/pressly/sup
License: MIT license
Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of some useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.
According to a recent survey done by Stackrox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.
(State of Kubernetes and Container Security, 2020)
And as you can see below, more and more companies are jumping into containerization for their apps. If you’re among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.
(State of Kubernetes and Container Security, 2020)
John Crestani created the Super Affiliate System, a program designed to equip people with the information and skills needed for affiliate marketing success. Learners work through a module-based course that helps them get started with affiliate marketing using a simplified system consisting of a single website, buyers, and regular quality traffic. Go through this Super Affiliate System review to find out more!
John Crestani's extensive knowledge and skills in this industry set the Super Affiliate System far apart from competing affiliate marketing systems. But is the Super Affiliate System a genuine deal? Is it worth investing in? In this Super Affiliate System review, we will take a look at what the system requires and decide whether it's a real deal that affiliate marketing enthusiasts should invest in.
This is a complete training course that assists people in becoming successful affiliate marketers. The guide uses videos to lead you through the tools and processes you need to become a super affiliate marketer. The program creator has shared thriving, in-depth strategies to give you a life of freedom if you pay heed to them.
The Super Affiliate System is a training guide that equips you with knowledge and skills in the industry. The system also provides the tools affiliate marketers need to fast-track their potential.
There are a few pros and cons that will enlighten beginner affiliates on whether to consider this system or not. Let’s have a look at them one by one:
Pros:
The system has extensive and informative, easy to follow modules.
The system is designed in a user-friendly manner, especially for beginners.
Equipped with video tutorials to quickly guide you through the process.
The system gives affiliates niche information to provide them with a competitive advantage.
Equipped with revision sections, weekly questions, and daily assignments to help you grasp all the course ideas.
The system extends clients to a 24/7 support system.
It allows clients to have monthly payment plans, which can suit those who can't afford a single down payment.
It offers clients a lot of bonuses.
Clients are allowed a 60-day Super Affiliate System refund guarantee.
Cons:
It’s very expensive.
Limited coverage of affiliate networks and niches.
John Crestani, a 29-year-old affiliate marketing expert from Santa Monica, California, is the program's creator. He dropped out of college and chose to earn money online because of low job prospects. He failed several times while striving to make ends meet, until he eventually built a successful affiliate site dealing with health-related products.
He is currently a seven-figure earner making more than $500 per month. His remarkable success in affiliate marketing has seen him featured in Yahoo Finance, Inc., Forbes, Business Insider, and Home Business magazine.
With the enormous success he has seen in affiliate marketing, John has designed an easy-to-follow guide to provide people with the skills to make money as an affiliate marketer. He has described all the strategies and tools he used to lead him to success.
The system provides affiliate marketers with in-depth details on how to develop successful affiliate networks, and reviews from different affiliate marketers who have tried it report impressive results. But does it work?
The program doesn't promise you overnight riches; it demands work and application. After finishing the Super Affiliate System online video training course, attaining success requires you to put John's strategies into practice. A lot of commitment, hard work, and time is required to become a successful affiliate marketer.
As its name suggests, the Super Affiliate System is there to make you a super affiliate. John himself is an experienced affiliate, and he has accumulated all the necessary tools to achieve success in training others to become super affiliates. The Super Affiliate Network System members’ area has outlined everything that the veteran affiliate used to make millions as an affiliate.
The guide will help you set up campaigns, traffic resources, essential tools you need as an affiliate, and the veteran affiliate networks to achieve success.
Most amateur affiliates get frustrated because it can take time to start making money. Those who do manage to earn mainly do the following:
They first become Super Affiliate System affiliates.
They promote the Super Affiliate System in multiple ways.
They convert the marketing leads they get into sales.
They receive a commission on every sale they make.
Affiliate marketing involves trading other people’s products and earning commissions from the sales you make. It’s an online business that can be done either with free or paid traffic. With the Super Affiliate System, one of the basic teachings you’ll get in the guide is how to make money by promoting the course itself using paid traffic Facebook ads.
The system is amongst the most comprehensive affiliate marketing courses on the market. The Super Affiliate System comprises more than 50 hours of content that takes about six weeks to complete. It also includes several video lectures and tutorials, alongside questions and homework assignments to test your retention.
This program aims to provide affiliates with comprehensive ideas and tactics to become successful affiliate marketers. Therefore, their online video training course is comprehensive. Below are areas of information included within the modules;
Facebook ads
Native ads
Website creation
Google ads
Social ads
Niche selection
YouTube ads
Content creation
Scaling
Tracking and testing
Affiliate networks
Click funnels
Advanced strategies
Besides the extensive information the creator has presented on these topics, he also goes the extra mile to review the complete material and guide marketers through the course.
There are a number of digital products out there that promise ways to earn money online, but not all of them offer real value. John gives people a free Super Affiliate System webinar to let them learn what the system entails. It is worth sparing the time to watch it, as it takes 90 minutes to get through.
Below is a brief guide to who this system is for:
It is for beginners who want to equip themselves with appropriate affiliate marketing skills. People who are still employed and want an alternative earning scheme fit here.
The system is also suitable for entrepreneurs who need to learn to earn money online, mainly using paid ads.
The Super Affiliate System also suits anyone who is looking for another alternative stream of income.
Making money online has many advantages. You have the flexibility to work from any place, in the comfort of your home, with just an internet connection. Even though John has stated that no special skills are needed to achieve success in affiliate marketing, a few basics are necessary to keep you on track.
Having a proper mindset is also vital to attaining success in affiliate marketing. So, affiliates who believe in the system working for them need to be dedicated, focused, and committed.
They include:
Keep in mind that you need more than $895 for advertisements to get started. Furthermore, set aside a couple of extra dollars to keep yourself on the right track.
There is also additional software you need to get started, which costs an extra $80 to $100 a month.
If you are interested in joining this big team, you have to get into the Super Affiliate System on the official website, superaffiliatesystem.org, and get it from there. You have to pay their set fees to get their courses and other new materials within their learning scope.
Whether the system is worth it or not depends on the individual. The system is worth the money for serious people who want to go deep into an affiliate marketing career and have the time to put the Super Affiliate System strategies into practice.
But people who expect to become rich overnight should look elsewhere, as this is not that kind of program. Hard work and commitment are paramount to getting everything that works best for you.
The DevOps methodology, a software and team management approach named with a portmanteau of Development and Operations, was first coined in 2009 and has since become a buzzword in the IT field.
DevOps has come to mean many things to each individual who uses the term as DevOps is not a singularly defined standard, software, or process but more of a culture. Gartner defines DevOps as:
“DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture), and seeks to improve collaboration between operations and development teams. DevOps implementations utilize technology — especially automation tools that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective.”
As you can see from the above definition, DevOps is a multi-faceted approach to the Software Development Life Cycle (SDLC), but its main underlying strength is how it leverages technology and software to streamline this process. So with the right approach to DevOps, notably adopting its philosophies of co-operation and implementing the right tools, your business can increase deployment frequency by a factor of 30 and improve lead times by a factor of 8,000 over traditional methods, according to a CapGemini survey.
This list is designed to be as comprehensive as possible. The article comprises both very well established tools for those who are new to the DevOps methodology and those tools that are more recent releases to the market — either way, there is bound to be a tool on here that can be an asset for you and your business. For those who already live and breathe DevOps, we hope you find something that will assist you in your growing enterprise.
With such a litany of tools to choose from, there is no "right" answer to which tools you should adopt. No single tool will cover all your needs, and tools will be deployed across a variety of development and operational teams, so let's break down what you need to consider before choosing which tool might work for you.
With all that in mind, I hope this selection of tools will aid you as your business continues to expand into the DevOps lifestyle.
Continuous Integration and Delivery
AWS CloudFormation is an absolute must if you are currently working, or planning to work, in the AWS Cloud. CloudFormation allows you to model your AWS infrastructure and provision all your AWS resources swiftly and easily. All of this is done within a JSON or YAML template file and the service comes with a variety of automation features ensuring your deployments will be predictable, reliable, and manageable.
Link: https://aws.amazon.com/cloudformation/
Azure Resource Manager (ARM) is Microsoft’s answer to an all-encompassing IAC tool. With its ARM templates, described within JSON files, Azure Resource Manager will provision your infrastructure, handle dependencies, and declare multiple resources via a single template.
Link: https://azure.microsoft.com/en-us/features/resource-manager/
Much like the tools mentioned above, Google Cloud Deployment Manager is Google's IAC tool for the Google Cloud Platform. This tool utilizes YAML for its config files and Jinja2 or Python for its templates. Some of its notable features are synchronistic deployment and 'preview', which gives you an overhead view of changes before they are committed.
Link: https://cloud.google.com/deployment-manager/
Terraform is brought to you by HashiCorp, the makers of Vault and Nomad. Terraform is vastly different from the above-mentioned tools in that it is not restricted to a specific cloud environment, which brings increased benefits for tackling complex distributed applications without being tied to a single platform. And much like Google Cloud Deployment Manager, Terraform also has a preview feature.
Link: https://www.terraform.io/
Chef is an ideal choice for those who favor CI/CD. At its heart, Chef utilizes self-described recipes, templates, and cookbooks (collections of ready-made templates). Cookbooks allow for consistent configuration even as your infrastructure rapidly scales. All of this is wrapped up in a beautiful Ruby-based DSL pie.
Link: https://www.chef.io/products/chef-infra/
Humidifier is a Ruby tool for managing AWS CloudFormation stacks. You can use it to build and manage stacks programmatically, or you can use it as a command line tool to manage stacks through configuration files.
Add this line to your application's Gemfile:
gem 'humidifier'
And then execute:
$ bundle
Or install it yourself as:
$ gem install humidifier
Stacks are represented by the Humidifier::Stack class. You can set any of the top-level JSON attributes (such as name and description) through the initializer.
Resources are represented by an exact mapping from AWS resource names to Humidifier resource names (e.g. AWS::EC2::Instance becomes Humidifier::EC2::Instance). Resources have accessors for each JSON attribute. Each attribute can also be set through the initialize, update, and update_attribute methods.
The below example will create a stack with two resources, a load balancer and an auto scaling group. It then deploys the new stack and pauses execution until the stack is finished being created.
stack = Humidifier::Stack.new(name: 'Example-Stack')
stack.add(
'LoadBalancer',
Humidifier::ElasticLoadBalancing::LoadBalancer.new(
scheme: 'internal',
listeners: [
{
load_balancer_port: 80,
protocol: 'http',
instance_port: 80,
instance_protocol: 'http'
}
]
)
)
stack.add(
'AutoScalingGroup',
Humidifier::AutoScaling::AutoScalingGroup.new(
min_size: '1',
max_size: '20',
availability_zones: ['us-east-1a'],
load_balancer_names: [Humidifier.ref('LoadBalancer')]
)
)
stack.deploy_and_wait
Once stacks have the appropriate resources, you can query AWS to handle all stack CRUD operations. The operations themselves are intuitively named (i.e. #create, #update, #delete). There are also convenience methods for validating a stack body (#valid?), checking the existence of a stack (#exists?), and creating or updating based on existence (#deploy).
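As a minimal sketch (assuming AWS credentials are already configured, and reusing the stack object built above), the lifecycle could look like this:
# Validate the template body before touching AWS.
raise 'invalid template' unless stack.valid?
# Create or update explicitly, depending on whether the stack already exists...
stack.exists? ? stack.update : stack.create
# ...or let #deploy make that decision for you.
stack.deploy
# Tear the stack down when it is no longer needed.
stack.delete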
There are additionally four functions on Humidifier::Stack that support waiting for execution in AWS to finish. They all have non-blocking corollaries, and are named after them. They are: #create_and_wait, #update_and_wait, #delete_and_wait, and #deploy_and_wait.
You can use CFN intrinsic functions and references using Humidifier.fn.[name] and Humidifier.ref. They will build appropriate structures that know how to be dumped to CFN syntax.
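For instance, a short sketch of building a reference and an imported value (the logical names here are illustrative):
# A reference to a resource defined in the same stack.
load_balancer = Humidifier.ref('LoadBalancer')
# An Fn::ImportValue lookup of a value exported by another stack.
subnet_id = Humidifier.fn.import_value('ProductionPrivateSubnet2a')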
Instead of immediately pushing your changes to CloudFormation, Humidifier also supports change sets. Change sets are a powerful feature that allow you to see the changes that will be made before you make them. To read more about change sets, see the announcement article. To use them in Humidifier, Humidifier::Stack has the #create_change_set and #deploy_change_set methods. The #create_change_set method will create a change set on the stack. The #deploy_change_set method will create a change set if the stack currently exists, and otherwise will create the stack.
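A brief sketch of that flow, again reusing the stack object from the earlier example:
# Preview the modifications as a change set on an existing stack.
stack.create_change_set
# Create a change set if the stack exists; otherwise create the stack itself.
stack.deploy_change_set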
To see the template body, you can check the #to_cf method on stacks, resources, fns, and refs. All of them will output a hash of what will be uploaded (except the stack, which will output a string representation).
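For example (the output shapes in the comments are assumptions based on the description above, not verified output):
stack.to_cf                           # => a String representation of the template body
Humidifier.ref('LoadBalancer').to_cf  # => a Hash such as { 'Ref' => 'LoadBalancer' }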
Humidifier itself contains a registry of all possible resources that it supports. You can access it with Humidifier::registry, which is a hash of AWS resource names pointing to the corresponding classes.
Resources have an ::aws_name method to see how AWS references them. They also contain a ::props method that contains a hash of the name that Humidifier uses to reference the prop pointing to the appropriate prop object.
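A quick sketch of inspecting the registry and a resource class (the return values in the comments are indicative, not exact):
Humidifier.registry['AWS::IAM::User']  # => Humidifier::IAM::User
Humidifier::IAM::User.aws_name         # => "AWS::IAM::User"
Humidifier::IAM::User.props.keys       # => ["path", "user_name", "groups", ...]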
When templates are especially large (larger than 51,200 bytes), they cannot be uploaded directly through the AWS SDK. You can configure Humidifier to seamlessly upload the templates to S3 and reference them using an S3 URL instead:
Humidifier.configure do |config|
config.s3_bucket = 'my.s3.bucket'
config.s3_prefix = 'my-prefix/' # optional
end
You can force a stack to upload its template to S3 regardless of the size of the template. This is a useful option if you're going to be deploying multiple copies of a template or if you want a backup. You can set this option on a per-stack basis:
stack.deploy(force_upload: true)
or globally, by setting the configuration option:
Humidifier.configure do |config|
config.force_upload = true
end
Humidifier can also be used as a CLI for managing resources through configuration files. For a step-by-step guide, read on, but if you'd like to see a working example, check out the example directory.
To get started, build a ruby script (for example humidifier) that executes the Humidifier::CLI class, like so:
#!/usr/bin/env ruby
require 'humidifier'
Humidifier.configure do |config|
# optional, defaults to the current working directory, so that all of the
# directories from the location that you run the CLI are assumed to contain
# resource specifications
config.stack_path = 'stacks'
# optional, a default prefix to use before deploying to AWS
config.stack_prefix = 'humidifier-'
# specifies that `users.yml` files contain specifications for `AWS::IAM::User`
# resources
config.map :users, to: 'IAM::User'
end
Humidifier::CLI.start(ARGV)
Inside of the stacks directory configured above, create a subdirectory for each CloudFormation stack that you want to deploy. With the above configuration, we can create YAML files in the form of users.yml for each stack, which will specify IAM users to create. The file format looks like the below:
EngUser:
path: /humidifier/
user_name: EngUser
groups:
- Engineering
- Testing
- Deployment
AdminUser:
path: /humidifier/
user_name: AdminUser
groups:
- Management
- Administration
The top-level keys are the logical resource names that will be displayed in the CloudFormation screen. They point to a map of key/value pairs that will be passed on to humidifier. Any humidifier (and therefore any CloudFormation) attribute may be specified. For more information on CloudFormation templates and which attributes may be specified, see both the humidifier docs and the CloudFormation docs.
Oftentimes, specifying these attributes can become repetitive, e.g., each user should automatically receive the same "path" attribute. Other times, you may want custom logic to execute depending on which AWS environment you're running in. Finally, you may want to reference resources in the same or other stacks.
Humidifier's solution for this is to allow customized "mapper" classes to take the user-provided attributes and transform them into the attributes that CloudFormation expects. Consider the following example for mapping a user:
class UserMapper < Humidifier::Config::Mapper
GROUPS = {
'eng' => %w[Engineering Testing Deployment],
'admin' => %w[Management Administration]
}
defaults do |logical_name|
{ path: '/humidifier/', user_name: logical_name }
end
attribute :group do |group|
groups = GROUPS[group]
groups.any? ? { groups: GROUPS[group] } : {}
end
end
Humidifier.configure do |config|
config.map :users, to: 'IAM::User', using: UserMapper
end
This means that by default, all entries in the users.yml files will get a /humidifier/ path, the user_name attribute will be set based on the logical name that was provided for the resource, and you can additionally specify a group attribute, even though it is not native to CloudFormation. With this group attribute, it will actually map to the groups attribute that CloudFormation expects.
With this new mapper in place, we can simplify our YAML file to:
EngUser:
group: eng
AdminUser:
group: admin
Now that you've configured your CLI, your resources, and your mappers, you can use the CLI to display, validate, and deploy your infrastructure to CloudFormation. Run your script without any arguments to get the help message and explanations for each command.
Each command has an --aws-profile (or -p) option for specifying which profile to authenticate against when querying AWS. You should ensure that this profile has the correct permissions for creating whatever resources are going to be part of your stack. You can also rely on the AWS_* environment variables, or the EC2 instance profile if you're deploying from an instance. For more information, see the AWS docs under the "Configuration" section.
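For example, a hypothetical invocation that deploys a single stack with a named profile, using the deploy command described below:
humidifier deploy foobar --aws-profile my-profile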
Below are the list of commands and some of their options.
change [?stack]
Creates a change set for either the specified stack or all stacks in the repo. The change set represents the changes between what is currently deployed versus the resources represented by the configuration.
deploy [?stack] [*parameters]
Creates or updates (depending on if the stack already exists) one or all stacks in the repo.
The deploy command also allows a --prefix command line argument that will override the default prefix (if one is configured) for the stack that is being deployed. This is especially useful when you're deploying multiple copies of the same stack (for instance, multiple autoscaling groups) that have different purposes or semantically mean newer versions of resources.
display [stack] [?pattern]
Displays the specified stack in JSON format on the command line. If you optionally pass a pattern argument, it will filter the resources down to just ones whose names match the given pattern.
stacks
Displays the names of all of the stacks that humidifier is managing.
upgrade
Downloads the latest CloudFormation resource specification. Periodically AWS will update the file that humidifier is based on, in which case the attributes of the resources that were changed could change. This gem usually stays relatively in sync, but if you need to use the latest specs and this gem has not yet released a new version containing them, then you can run this command to download the latest specs onto your system.
upload [?stack]
Upload one or all stacks in the repo to S3 for reference later. Note that this must be combined with the humidifier s3_bucket configuration option.
validate [?stack]
Validate that one or all stacks in the repo are properly configured and using values that CloudFormation understands.
version
Output the version of Humidifier as well as the version of the CloudFormation resource specification that you are using.
CloudFormation template parameters can be specified by having a special parameters.yml file in your stack directory. This file should contain a YAML-encoded object whose keys are the names of the parameters and whose values are the parameter configuration (using the same underscore paradigm as humidifier resources for specifying configuration).
You can pass values to the CLI deploy command after the stack name on the command line as in:
humidifier deploy foobar Param1=Foo Param2=Bar
Those parameters will get passed in as values when the stack is deployed.
A couple of convenient shortcuts are built into humidifier so that writing templates and mappers can both be more concise.
There are a lot of properties in the AWS CloudFormation resource specification that are simply pointers to other entities within the AWS ecosystem. For example, an AWS::EC2::VPCGatewayAttachment entity has a VpcId property that represents the ID of the associated AWS::EC2::VPC.
Because this pattern is so common, humidifier detects all properties ending in Id and allows you to specify them without the suffix. If you choose to use this format, humidifier will automatically turn that value into a CloudFormation resource reference.
A lot of the time, mappers that you create will not be overly complicated, especially if you're using automatic id properties. So, the config.map method optionally takes a block, allowing you to specify the mapper inline. This is recommended for mappers that aren't complicated enough to warrant their own class (for instance, for testing purposes). An example of this using the UserMapper from above is below:
Humidifier.configure do |config|
config.map :users, to: 'IAM::User' do
GROUPS = {
'eng' => %w[Engineering Testing Deployment],
'admin' => %w[Management Administration]
}
defaults do |logical_name|
{ path: '/humidifier/', user_name: logical_name }
end
attribute :group do |group|
groups = GROUPS[group]
groups.any? ? { groups: GROUPS[group] } : {}
end
end
end
AWS allows cross-stack references through the intrinsic Fn::ImportValue function. You can take advantage of this with humidifier by using the export: true option on resources in your stacks. For instance, if in one stack you have a subnet that you need to reference in another, you could (stacks/vpc/subnets.yml):
ProductionPrivateSubnet2a:
vpc: ProductionVPC
cidr_block: 10.0.0.0/19
availability_zone: us-west-2a
export: true
ProductionPrivateSubnet2b:
vpc: ProductionVPC
cidr_block: 10.0.64.0/19
availability_zone: us-west-2b
export: true
ProductionPrivateSubnet2c:
vpc: ProductionVPC
cidr_block: 10.0.128.0/19
availability_zone: us-west-2c
export: true
And then in another stack, you could reference those values (stacks/rds/db_subnets_groups.yml):
ProductionDBSubnetGroup:
db_subnet_group_description: Production DB private subnet group
subnets:
- ProductionPrivateSubnet2a
- ProductionPrivateSubnet2b
- ProductionPrivateSubnet2c
Within the configuration, you would specify to use the Fn::ImportValue function like so:
Humidifier.configure do |config|
config.stack_path = 'stacks'
config.map :subnets, to: 'EC2::Subnet'
config.map :db_subnet_groups, to: 'RDS::DBSubnetGroup' do
attribute :subnets do |subnet_names|
subnet_ids =
subnet_names.map do |subnet_name|
Humidifier.fn.import_value(subnet_name)
end
{ subnet_ids: subnet_ids }
end
end
end
If you specify export: true, it will by default export a reference to the resource listed in the stack. You can also choose to export a different attribute by specifying that attribute as the value of export. For example, if we were creating instance profiles and wanted to export the Arn so that it could be referenced by an instance later, we could:
APIRoleInstanceProfile:
depends_on: APIRole
roles:
- APIRole
export: Arn
To get started, ensure you have Ruby installed, version 2.4 or later. From there, install the bundler gem (gem install bundler) and then run bundle install in the root of the repository.
The default rake task runs the tests. Styling is governed by rubocop. The docs are generated with yard. To run all three of these, run:
$ bundle exec rake
$ bundle exec rubocop
$ bundle exec rake yard
The specs pulled from the CFN docs are saved to CloudFormationResourceSpecification.json. You can update them by running bundle exec rake specs. This script will pull down the latest resource specification to be used with Humidifier.
Bug reports and pull requests are welcome on GitHub at https://github.com/kddnewton/humidifier.
The gem is available as open source under the terms of the MIT License.
Author: kddnewton
Source code: https://github.com/kddnewton/humidifier
License: MIT license
Frameworks and libraries can be described as the fundamental building blocks developers use when building software or applications. These tools help cut out repetitive tasks and reduce the amount of code that developers need to write for a particular piece of software.
Recently, the Stack Overflow Developer Survey 2020 polled nearly 65,000 developers, who voted for their go-to tools and libraries. Here, we list the top 12 frameworks and libraries from the survey that were most used by developers around the globe in 2020.
(The libraries are listed according to their number of stars on GitHub)
**GitHub Stars:** 147k
**Rank:** 5
**About:** Originally developed by researchers on the Google Brain team, TensorFlow is an end-to-end open-source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML and allows developers to easily build and deploy ML-powered applications.
Know more here.
**GitHub Stars:** 98.3k
**Rank:** 9
**About:** Created by Google, Flutter is a free and open-source software development kit (SDK) that enables fast user experiences for mobile, web, and desktop from a single codebase. The SDK works with existing code and is used by developers and organisations around the world.