Brook Hudson

Humidifier: A Ruby tool for Managing AWS CloudFormation Stacks

Humidifier 

Humidifier is a Ruby tool for managing AWS CloudFormation stacks. You can use it to build and manage stacks programmatically, or you can use it as a command-line tool to manage stacks through configuration files.

Installation

Add this line to your application's Gemfile:

gem 'humidifier'

And then execute:

$ bundle

Or install it yourself as:

$ gem install humidifier

Getting started

Stacks are represented by the Humidifier::Stack class. You can set any of the top-level JSON attributes (such as name and description) through the initializer.

Resources are represented by an exact mapping from AWS resource names to Humidifier resource names (e.g. AWS::EC2::Instance becomes Humidifier::EC2::Instance). Resources have accessors for each JSON attribute. Each attribute can also be set through the initialize, update, and update_attribute methods.
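
A minimal sketch of those accessors (the attribute values, and the exact update_attribute argument shape, are assumptions for illustration):

instance = Humidifier::EC2::Instance.new(image_id: 'ami-123456')

instance.image_id = 'ami-abcdef'                    # attribute accessor
instance.update(instance_type: 't2.micro')          # bulk update
instance.update_attribute(:image_id, 'ami-fedcba')  # single attribute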

Example usage

The example below creates a stack with two resources, a load balancer and an auto scaling group. It then deploys the new stack and pauses execution until stack creation is complete.

stack = Humidifier::Stack.new(name: 'Example-Stack')

stack.add(
  'LoadBalancer',
  Humidifier::ElasticLoadBalancing::LoadBalancer.new(
    scheme: 'internal',
    listeners: [
      {
        load_balancer_port: 80,
        protocol: 'http',
        instance_port: 80,
        instance_protocol: 'http'
      }
    ]
  )
)

stack.add(
  'AutoScalingGroup',
  Humidifier::AutoScaling::AutoScalingGroup.new(
    min_size: '1',
    max_size: '20',
    availability_zones: ['us-east-1a'],
    load_balancer_names: [Humidifier.ref('LoadBalancer')]
  )
)

stack.deploy_and_wait

Interfacing with AWS

Once stacks have the appropriate resources, you can query AWS to handle all stack CRUD operations. The operations themselves are intuitively named (i.e. #create, #update, #delete). There are also convenience methods for validating a stack body (#valid?), checking the existence of a stack (#exists?), and creating or updating based on existence (#deploy).
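
For example, a minimal sketch of a typical lifecycle using the methods above (assuming stack was built as in the earlier example):

stack.valid?   # => true if CloudFormation accepts the template body
stack.exists?  # => true if a stack with this name is already deployed
stack.deploy   # creates the stack, or updates it if it already exists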

There are additionally four methods on Humidifier::Stack that block until execution in AWS has finished. Each is the waiting counterpart of the non-blocking method it is named after: #create_and_wait, #update_and_wait, #delete_and_wait, and #deploy_and_wait.

CloudFormation functions

You can use CFN intrinsic functions and references using Humidifier.fn.[name] and Humidifier.ref. They will build appropriate structures that know how to be dumped to CFN syntax.
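
For instance (a small sketch; get_att follows the snake_case naming described above, and the argument shape shown is an assumption):

Humidifier.ref('LoadBalancer')                  # dumps to a Ref structure
Humidifier.fn.get_att(%w[LoadBalancer DNSName]) # dumps to an Fn::GetAtt structure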

Change Sets

Instead of immediately pushing your changes to CloudFormation, Humidifier also supports change sets. Change sets are a powerful feature that allow you to preview the changes that will be made before you make them. To read more about change sets, see the announcement article. To use them in Humidifier, Humidifier::Stack has the #create_change_set and #deploy_change_set methods. The #create_change_set method will create a change set on the stack. The #deploy_change_set method will create a change set if the stack currently exists, and otherwise will create the stack.
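
A minimal sketch of the two methods (names from the paragraph above):

stack.create_change_set  # stages a change set against the existing stack
stack.deploy_change_set  # change set if the stack exists, stack creation otherwise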

Introspection

To see the template body, you can check the #to_cf method on stacks, resources, fns, and refs. All of them will output a hash of what will be uploaded (except the stack, which will output a string representation).

Humidifier itself contains a registry of all of the resources that it supports. You can access it with Humidifier::registry, which is a hash mapping AWS resource names to the corresponding classes.

Resources have an ::aws_name method to see how AWS references them. They also contain a ::props method that returns a hash mapping the names Humidifier uses for each prop to the corresponding prop objects.
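
Putting those together (a sketch; the exact prop names depend on the loaded resource specification):

Humidifier::registry['AWS::EC2::Instance'] # => Humidifier::EC2::Instance
Humidifier::EC2::Instance.aws_name         # => 'AWS::EC2::Instance'
Humidifier::EC2::Instance.props.keys       # => ['image_id', 'instance_type', ...]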

Large templates

When templates are especially large (larger than 51,200 bytes), they cannot be uploaded directly through the AWS SDK. You can configure Humidifier to seamlessly upload the templates to S3 and reference them using an S3 URL instead:

Humidifier.configure do |config|
  config.s3_bucket = 'my.s3.bucket'
  config.s3_prefix = 'my-prefix/' # optional
end

Forcing uploading

You can force a stack to upload its template to S3 regardless of the size of the template. This is a useful option if you're going to be deploying multiple copies of a template or if you want a backup. You can set this option on a per-stack basis:

stack.deploy(force_upload: true)

or globally, by setting the configuration option:

Humidifier.configure do |config|
  config.force_upload = true
end

CLI

Humidifier can also be used as a CLI for managing resources through configuration files. For a step-by-step guide, read on, but if you'd like to see a working example, check out the example directory.

To get started, build a ruby script (for example humidifier) that executes the Humidifier::CLI class, like so:

#!/usr/bin/env ruby
require 'humidifier'

Humidifier.configure do |config|
  # optional, defaults to the current working directory, so that all of the
  # directories from the location that you run the CLI are assumed to contain
  # resource specifications
  config.stack_path = 'stacks'

  # optional, a default prefix to use before deploying to AWS
  config.stack_prefix = 'humidifier-'

  # specifies that `users.yml` files contain specifications for `AWS::IAM::User`
  # resources
  config.map :users, to: 'IAM::User'
end

Humidifier::CLI.start(ARGV)

Resource files

Inside of the stacks directory configured above, create a subdirectory for each CloudFormation stack that you want to deploy. With the above configuration, we can create YAML files in the form of users.yml for each stack, which will specify IAM users to create. The file format looks like the following:

EngUser:
  path: /humidifier/
  user_name: EngUser
  groups:
  - Engineering
  - Testing
  - Deployment

AdminUser:
  path: /humidifier/
  user_name: AdminUser
  groups:
  - Management
  - Administration

The top-level keys are the logical resource names that will be displayed in the CloudFormation screen. They point to a map of key/value pairs that will be passed on to humidifier. Any humidifier (and therefore any CloudFormation) attribute may be specified. For more information on CloudFormation templates and which attributes may be specified, see both the humidifier docs and the CloudFormation docs.

Mappers

Oftentimes, specifying these attributes can become repetitive, e.g., each user should automatically receive the same "path" attribute. Other times, you may want custom logic to execute depending on which AWS environment you're running in. Finally, you may want to reference resources in the same or other stacks.

Humidifier's solution for this is to allow customized "mapper" classes to take the user-provided attributes and transform them into the attributes that CloudFormation expects. Consider the following example for mapping a user:

class UserMapper < Humidifier::Config::Mapper
  GROUPS = {
    'eng' => %w[Engineering Testing Deployment],
    'admin' => %w[Management Administration]
  }

  defaults do |logical_name|
    { path: '/humidifier/', user_name: logical_name }
  end

  attribute :group do |group|
    # fetch with a default so an unknown group yields no extra attributes
    groups = GROUPS.fetch(group, [])
    groups.any? ? { groups: groups } : {}
  end
end

Humidifier.configure do |config|
  config.map :users, to: 'IAM::User', using: UserMapper
end

This means that, by default, all entries in the users.yml files will get a /humidifier/ path, the user_name attribute will be set based on the logical name provided for the resource, and you can additionally specify a group attribute, even though it is not native to CloudFormation. The mapper translates that group attribute into the groups attribute that CloudFormation expects.

With this new mapper in place, we can simplify our YAML file to:

EngUser:
  group: eng

AdminUser:
  group: admin

Using the CLI

Now that you've configured your CLI, your resources, and your mappers, you can use the CLI to display, validate, and deploy your infrastructure to CloudFormation. Run your script without any arguments to get the help message and explanations for each command.

Each command has an --aws-profile (or -p) option for specifying which profile to authenticate against when querying AWS. You should ensure that this profile has the correct permissions for creating whatever resources are going to be part of your stack. You can also rely on the AWS_* environment variables, or the EC2 instance profile if you're deploying from an instance. For more information, see the AWS docs under the "Configuration" section.

Below is a list of the commands and some of their options.

change [?stack]

Creates a change set for either the specified stack or all stacks in the repo. The change set represents the changes between what is currently deployed versus the resources represented by the configuration.

deploy [?stack] [*parameters]

Creates or updates (depending on whether the stack already exists) one or all stacks in the repo.

The deploy command also allows a --prefix command line argument that will override the default prefix (if one is configured) for the stack that is being deployed. This is especially useful when you're deploying multiple copies of the same stack (for instance, multiple autoscaling groups) that have different purposes or semantically mean newer versions of resources.

display [stack] [?pattern]

Displays the specified stack in JSON format on the command line. If you optionally pass a pattern argument, it will filter the resources down to just ones whose names match the given pattern.

stacks

Displays the names of all of the stacks that humidifier is managing.

upgrade

Downloads the latest CloudFormation resource specification. Periodically, AWS updates the specification file that humidifier is based on, and when that happens the attributes of the changed resources can change too. This gem usually stays relatively in sync, but if you need the latest specs and this gem has not yet released a new version containing them, you can run this command to download the latest specs onto your system.

upload [?stack]

Uploads one or all stacks in the repo to S3 for later reference. Note that this requires the humidifier s3_bucket configuration option to be set.

validate [?stack]

Validates that one or all stacks in the repo are properly configured and use values that CloudFormation understands.

version

Outputs the version of Humidifier as well as the version of the CloudFormation resource specification that you are using.

Parameters

CloudFormation template parameters can be specified by having a special parameters.yml file in your stack directory. This file should contain a YAML-encoded object whose keys are the names of the parameters and whose values are the parameter configuration (using the same underscore paradigm as humidifier resources for specifying configuration).
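
For example, a sketch of such a file, matching the Param1/Param2 names used in the deploy example below (the keys shown are assumptions following the underscore convention):

# stacks/foobar/parameters.yml
Param1:
  type: String
  description: First example parameter
Param2:
  type: String
  default: Bar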

You can pass values to the CLI deploy command after the stack name on the command line as in:

humidifier deploy foobar Param1=Foo Param2=Bar

Those parameters will get passed in as values when the stack is deployed.

Shortcuts

A couple of convenient shortcuts are built into humidifier so that both templates and mappers can be written more concisely.

Automatic id properties

There are a lot of properties in the AWS CloudFormation resource specification that are simply pointers to other entities within the AWS ecosystem. For example, an AWS::EC2::VPCGatewayAttachment entity has a VpcId property that represents the ID of the associated AWS::EC2::VPC.

Because this pattern is so common, humidifier detects all properties ending in Id and allows you to specify them without the suffix. If you choose to use this format, humidifier will automatically turn that value into a CloudFormation resource reference.
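
For example (a hypothetical resource file; the ProductionVPC logical name is assumed to exist in the same stack):

# vpc_gateway_attachments.yml
GatewayAttachment:
  vpc: ProductionVPC # shorthand for vpc_id, turned into a reference to ProductionVPC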

Anonymous mappers

A lot of the time, mappers that you create will not be overly complicated, especially if you're using automatic id properties. So, the config.map method optionally takes a block, allowing you to specify the mapper inline. This is recommended for mappers that aren't complicated enough to warrant their own class (for instance, for testing purposes). An example of this, using the UserMapper from above, is below:

Humidifier.configure do |config|
  config.map :users, to: 'IAM::User' do
    GROUPS = {
      'eng' => %w[Engineering Testing Deployment],
      'admin' => %w[Management Administration]
    }

    defaults do |logical_name|
      { path: '/humidifier/', user_name: logical_name }
    end

    attribute :group do |group|
      # fetch with a default so an unknown group yields no extra attributes
      groups = GROUPS.fetch(group, [])
      groups.any? ? { groups: groups } : {}
    end
  end
end

Cross-stack references

AWS allows cross-stack references through the intrinsic Fn::ImportValue function. You can take advantage of this with humidifier by using the export: true option on resources in your stacks. For instance, if in one stack you have a subnet that you need to reference in another, you could (stacks/vpc/subnets.yml):

ProductionPrivateSubnet2a:
  vpc: ProductionVPC
  cidr_block: 10.0.0.0/19
  availability_zone: us-west-2a
  export: true

ProductionPrivateSubnet2b:
  vpc: ProductionVPC
  cidr_block: 10.0.64.0/19
  availability_zone: us-west-2b
  export: true

ProductionPrivateSubnet2c:
  vpc: ProductionVPC
  cidr_block: 10.0.128.0/19
  availability_zone: us-west-2c
  export: true

And then in another stack, you could reference those values (stacks/rds/db_subnets_groups.yml):

ProductionDBSubnetGroup:
  db_subnet_group_description: Production DB private subnet group
  subnets:
  - ProductionPrivateSubnet2a
  - ProductionPrivateSubnet2b
  - ProductionPrivateSubnet2c

Within the configuration, you would specify to use the Fn::ImportValue function like so:

Humidifier.configure do |config|
  config.stack_path = 'stacks'

  config.map :subnets, to: 'EC2::Subnet'

  config.map :db_subnet_groups, to: 'RDS::DBSubnetGroup' do
    attribute :subnets do |subnet_names|
      subnet_ids =
        subnet_names.map do |subnet_name|
          Humidifier.fn.import_value(subnet_name)
        end

      { subnet_ids: subnet_ids }
    end
  end
end

If you specify export: true it will by default export a reference to the resource listed in the stack. You can also choose to export a different attribute by specifying the attribute as the value to export. For example, if we were creating instance profiles and wanted to export the Arn so that it could be referenced by an instance later, we could:

APIRoleInstanceProfile:
  depends_on: APIRole
  roles:
  - APIRole
  export: Arn

Development

To get started, ensure you have Ruby installed, version 2.4 or later. From there, install the bundler gem with gem install bundler and then run bundle install in the root of the repository.

Testing

The default rake task runs the tests. Styling is governed by rubocop. The docs are generated with yard. To run all three of these, run:

$ bundle exec rake
$ bundle exec rubocop
$ bundle exec rake yard

Specs

The resource specification pulled from the CFN docs is saved to CloudFormationResourceSpecification.json. You can update it by running bundle exec rake specs. This task pulls down the latest resource specification to be used with Humidifier.

Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/kddnewton/humidifier.

License

The gem is available as open source under the terms of the MIT License.


Author: kddnewton
Source code: https://github.com/kddnewton/humidifier
License: MIT license

#ruby  #ruby-on-rails 


Cloudflare Deploy Plugin for CakePHP

Cloudflare Deploy Plugin for CakePHP 

This plugin provides useful commands when deploying with Cloudflare.

This branch is for use with CakePHP 3.8+. For details see version map

Installation and Usage

CakePHP Cloudflare Deploy Plugin Documentation

Version Map

See Version Map

Installation

You can install this plugin into your CakePHP application using composer.

$ composer require challgren/cakephp-cloudflare-deploy

Load the plugin in your src/Application.php's bootstrap() using:

$ bin/cake plugin load CloudflareDeploy

Configuration

Set your Cloudflare API User, API key and zoneId in Cloudflare settings in app.php.

You can get your zone ID by viewing the Overview of the domain controlled by Cloudflare.

return [
	'Cloudflare' => [
		'apiUser' => 'API Username',
		'apiKey' => 'API Key',
		'zoneId' => 'Zone ID'
	]
];

Usage

Run bin/cake clear_cloudflare. This will enable Development Mode on Cloudflare and purge the entire cache.

Contributing

You can fork the project, add features, and send pull requests or open issues.

Reporting Issues

If you are facing a problem with this plugin or found any bug, please open an issue on GitHub.

Author: challgren 
Source Code: https://github.com/challgren/cakephp-cloudflare-deploy 
License: MIT license

#php #cakephp #cloudflare #deploy 

Hermann Frami

Serverless S3 Deploy

serverless-s3-deploy

Plugin for serverless to deploy files to a variety of S3 Buckets

Note: This project is currently not maintained.

Installation

 npm install --save-dev serverless-s3-deploy

Usage

Add to your serverless.yml:

  plugins:
    - serverless-s3-deploy

  custom:
    assets:
      targets:
       - bucket: my-bucket
         files:
          - source: ../assets/
            globs: '**/*.css'
          - source: ../app/
            globs:
              - '**/*.js'
              - '**/*.map'
       - bucket: my-other-bucket
         empty: true
         prefix: subdir
         files:
          - source: ../email-templates/
            globs: '**/*.html'

You can specify any number of targets that you want. Each target has a bucket and a prefix.

bucket is either the name of your S3 bucket or a reference to a CloudFormation resource created in the same serverless configuration file. See below for additional details.

You can specify source relative to the current directory.

Each source has its own list of globs, which can be either a single glob, or a list of globs.

Setting empty to true will delete all files inside the bucket before uploading the new content to the S3 bucket. The prefix value is respected, and files outside of it will not be deleted.

Now you can upload all of these assets to your bucket by running:

$ sls s3deploy

If you have defined multiple buckets, you can limit your deployment to a single bucket with the --bucket option:

$ sls s3deploy --bucket my-bucket

ACL

You can optionally specify an ACL for the uploaded files on a per-target basis:

  custom:
    assets:
      targets:
        - bucket: my-bucket
          acl: private
          files:

The default value is private. Options are defined here.

Content Type

The appropriate Content Type for each file is determined automatically using mime-types. If one can't be determined, a default fallback of 'application/octet-stream' is used.

You can override this fallback per-source by setting defaultContentType.

  custom:
    assets:
      targets:
        - bucket: my-bucket
          files:
            - source: html/
              defaultContentType: text/html
              ...

Other Headers

Additional headers can be included per target by providing a headers object.

See http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html for more details.

  custom:
    assets:
      targets:
        - bucket: my-bucket
          files:
            - source: html/
              headers:
                CacheControl: max-age=31104000 # 1 year

Resolving References

A common use case is to create the S3 buckets in the resources section of your serverless configuration and then reference it in your S3 plugin settings:

  custom:
    assets:
      targets:
        - bucket:
            Ref: MyBucket
          files:
            - source: html/

  resources:
    # AWS CloudFormation Template
    Resources:
      MyBucket:
        Type: AWS::S3::Bucket
        Properties:
          AccessControl: PublicRead
          WebsiteConfiguration:
            IndexDocument: index.html
            ErrorDocument: index.html

You can disable the resolving with the following flag:

  custom:
    assets:
      resolveReferences: false

Auto-deploy

If you want s3deploy to run automatically after a deploy, set the auto flag:

  custom:
    assets:
      auto: true

IAM Configuration

You're going to need an IAM policy that supports this deployment. This might be a good starting point:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::${bucket}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::${bucket}/*"
            ]
        }
    ]
}

Upload concurrency

If you want to tweak the upload concurrency, change uploadConcurrency config:

custom:
  assets:
    # defaults to 3
    uploadConcurrency: 1

Verbosity

Verbosity can be enabled using either of these methods:

Configuration:

  custom:
    assets:
      verbose: true

CLI:

  sls s3deploy -v

Author: Funkybob
Source Code: https://github.com/funkybob/serverless-s3-deploy 
License: MIT license

#serverless #deploy #s3 #plugin 

Waylon Bruen

Ko: Build and Deploy Go Applications on Kubernetes

ko: Easy Go Containers  

ko is a simple, fast container image builder for Go applications.

It's ideal for use cases where your image contains a single Go application without any/many dependencies on the OS base image (e.g., no cgo, no OS package dependencies).

ko builds images by effectively executing go build on your local machine, and as such doesn't require docker to be installed. This can make it a good fit for lightweight CI/CD use cases.

ko also includes support for simple YAML templating which makes it a powerful tool for Kubernetes applications (See below).

Setup

Install

Install from Releases

VERSION=TODO # choose the latest version
OS=Linux     # or Darwin
ARCH=x86_64  # or arm64, i386, s390x
curl -L https://github.com/google/ko/releases/download/v${VERSION}/ko_${VERSION}_${OS}_${ARCH}.tar.gz | tar xzf - ko
chmod +x ./ko

Install using Homebrew

brew install ko

Build and Install from Source

With Go 1.16+, build and install the latest released version:

go install github.com/google/ko@latest

Setup on GitHub Actions

You can use the setup-ko action to install ko and setup auth to GitHub Container Registry in a GitHub Action workflow:

steps:
- uses: imjasonh/setup-ko@v0.4

Authenticate

ko depends on the authentication configured in your Docker config (typically ~/.docker/config.json). If you can push an image with docker push, you are already authenticated for ko.

Since ko doesn't require docker, ko login also provides a surface for logging in to a container image registry with a username and password, similar to docker login.

Additionally, if auth is not configured in the Docker config, ko includes built-in support for authenticating to the following container registries using credentials configured in the environment:

Choose Destination

ko depends on an environment variable, KO_DOCKER_REPO, to identify where it should push images that it builds. Typically this will be a remote registry, e.g.:

  • KO_DOCKER_REPO=gcr.io/my-project, or
  • KO_DOCKER_REPO=my-dockerhub-user

Build an Image

ko build ./cmd/app builds and pushes a container image, and prints the resulting image digest to stdout.

In this example, ./cmd/app must be a package main that defines func main().

ko build ./cmd/app
...
gcr.io/my-project/app-099ba5bcefdead87f92606265fb99ac0@sha256:6e398316742b7aa4a93161dce4a23bc5c545700b862b43347b941000b112ec3e

NB: Prior to v0.10, the command was called ko publish -- this is equivalent to ko build, and both commands will work and do the same thing.

The executable binary that was built from ./cmd/app is available in the image at /ko-app/app -- the binary name matches the base import path name -- and that binary is the image's entrypoint.

Because the output of ko build is an image reference, you can easily pass it to other tools that expect to take an image reference.

To run the container locally:

docker run -p 8080:8080 $(ko build ./cmd/app)

Or to deploy it to other services like Cloud Run:

gcloud run deploy --image=$(ko build ./cmd/app)

Or fly.io:

flyctl launch --image=$(ko build ./cmd/app)
  • Note: The image must be publicly available.

Or AWS Lambda:

aws lambda update-function-code \
  --function-name=my-function-name \
  --image-uri=$(ko build ./cmd/app)
  • Note: The image must be pushed to ECR, based on the AWS provided base image, and use the aws-lambda-go framework. See official docs for more information.

Or Azure Container Apps:

az containerapp update \
  --name my-container-app \
  --resource-group my-resource-group \
  --image $(ko build ./cmd/app)
  • Note: The image must be pushed to ACR or other registry service. See official docs for more information.

Configuration

Aside from KO_DOCKER_REPO, you can configure ko's behavior using a .ko.yaml file. The location of this file can be overridden with KO_CONFIG_PATH.

Overriding Base Images

By default, ko bases images on gcr.io/distroless/static:nonroot. This is a small image that provides the bare necessities to run your Go binary.

You can override this base image in two ways:

  1. To override the base image for all images ko builds, add this line to your .ko.yaml file:

defaultBaseImage: registry.example.com/base/image

  2. To override the base image for certain importpaths:

baseImageOverrides:
  github.com/my-user/my-repo/cmd/app: registry.example.com/base/for/app
  github.com/my-user/my-repo/cmd/foo: registry.example.com/base/for/foo

Overriding Go build settings

By default, ko builds the binary with no additional build flags other than -trimpath. You can replace the default build arguments by providing build flags and ldflags using a GoReleaser influenced builds configuration section in your .ko.yaml.

builds:
- id: foo
  dir: .  # default is .
  main: ./foobar/foo
  env:
  - GOPRIVATE=git.internal.example.com,source.developers.google.com
  flags:
  - -tags
  - netgo
  ldflags:
  - -s -w
  - -extldflags "-static"
  - -X main.version={{.Env.VERSION}}
- id: bar
  dir: ./bar
  main: .  # default is .
  env:
  - GOCACHE=/workspace/.gocache
  ldflags:
  - -s
  - -w

If your repository contains multiple modules (multiple go.mod files in different directories), use the dir field to specify the directory where ko should run go build.

ko picks the entry from builds based on the import path you request. The import path is matched against the result of joining dir and main.

The paths specified in dir and main are relative to the working directory of the ko process.

The ldflags default value is [].

Please note: Even though the configuration section is similar to the GoReleaser builds section, only the env, flags and ldflags fields are currently supported. Also, the templating support is currently limited to using environment variables only.

Naming Images

ko provides a few different strategies for naming the image it pushes, to work around certain registry limitations and user preferences:

Given KO_DOCKER_REPO=registry.example.com/repo, by default, ko build ./cmd/app will produce an image named like registry.example.com/repo/app-<md5>, which includes the MD5 hash of the full import path, to avoid collisions.

  • --preserve-import-path (-P) will include the entire importpath: registry.example.com/repo/github.com/my-user/my-repo/cmd/app
  • --base-import-paths (-B) will omit the MD5 portion: registry.example.com/repo/app
  • --bare will only include the KO_DOCKER_REPO: registry.example.com/repo

Local Publishing Options

ko is normally used to publish images to container image registries, identified by KO_DOCKER_REPO.

ko can also load images to a local Docker daemon, if available, by setting KO_DOCKER_REPO=ko.local, or by passing the --local (-L) flag.

Local images can be used as a base image for other ko images:

defaultBaseImage: ko.local/example/base/image

ko can also load images into a local KinD cluster, if available, by setting KO_DOCKER_REPO=kind.local. By default this loads into the default KinD cluster name (kind). To load into another KinD cluster, set KIND_CLUSTER_NAME=my-other-cluster.

Multi-Platform Images

Because Go supports cross-compilation to other CPU architectures and operating systems, ko excels at producing multi-platform images.

To build and push an image for all platforms supported by the configured base image, simply add --platform=all. This will instruct ko to look up all the supported platforms in the base image, execute GOOS=<os> GOARCH=<arch> GOARM=<variant> go build for each platform, and produce a manifest list containing an image for each platform.

You can also select specific platforms, for example, --platform=linux/amd64,linux/arm64

Generating SBOMs

A Software Bill of Materials (SBOM) is a list of software components that a software artifact depends on. Having a list of dependencies can be helpful in determining whether any vulnerable components were used to build the software artifact.

From v0.9+, ko generates and uploads an SBOM for every image it produces by default.

ko will generate an SBOM in the SPDX format by default, but you can select the CycloneDX format instead with the --sbom=cyclonedx flag. To disable SBOM generation, pass --sbom=none.

These SBOMs can be downloaded using the cosign download sbom command.

Static Assets

ko can also bundle static assets into the images it produces.

By convention, any contents of a directory named <importpath>/kodata/ will be bundled into the image, and the path where it's available in the image will be identified by the environment variable KO_DATA_PATH.

As an example, you can bundle and serve static contents in your image:

cmd/
  app/
    main.go
    kodata/
      favicon.ico
      index.html

Then, in your main.go:

func main() {
    http.Handle("/", http.FileServer(http.Dir(os.Getenv("KO_DATA_PATH"))))
    log.Fatal(http.ListenAndServe(":8080", nil))
}

You can simulate ko's behavior outside of the container image by setting the KO_DATA_PATH environment variable yourself:

KO_DATA_PATH=cmd/app/kodata/ go run ./cmd/app

Tip: Symlinks in kodata are followed and included as well. For example, you can include Git commit information in your image with:

ln -s -r .git/HEAD ./cmd/app/kodata/

Also note that http.FileServer will not serve the Last-Modified header (or validate If-Modified-Since request headers) because ko does not embed timestamps by default.

This can be supported by manually setting the KO_DATA_DATE_EPOCH environment variable during build (See below).

Kubernetes Integration

You could stop at just building and pushing images.

But, because building images is so easy with ko, and because building with ko only requires a string importpath to identify the image, we can integrate this with YAML generation to make Kubernetes use cases much simpler.

YAML Changes

Traditionally, you might have a Kubernetes deployment, defined in a YAML file, that runs an image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  ...
  template:
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:v1.2.3

...which you apply to your cluster with kubectl apply:

kubectl apply -f deployment.yaml

With ko, you can instead reference your Go binary by its importpath, prefixed with ko://:

    ...
    spec:
      containers:
      - name: my-app
        image: ko://github.com/my-user/my-repo/cmd/app

ko resolve

With this small change, running ko resolve -f deployment.yaml will instruct ko to:

  1. scan the YAML file(s) for values with the ko:// prefix,
  2. for each unique ko://-prefixed string, execute ko build <importpath> to build and push an image,
  3. replace the ko://-prefixed string(s) in the input YAML with the fully-specified image references of the built image(s), for example:

spec:
  containers:
    - name: my-app
      image: registry.example.com/github.com/my-user/my-repo/cmd/app@sha256:deadb33f...

  4. print the resulting resolved YAML to stdout.

The result can be redirected to a file, to distribute to others:

ko resolve -f config/ > release.yaml

Taken together, ko resolve aims to make packaging, pushing, and referencing container images an invisible implementation detail of your Kubernetes deployment, and let you focus on writing code in Go.

ko apply

To apply the resulting resolved YAML config, you can redirect the output of ko resolve to kubectl apply:

ko resolve -f config/ | kubectl apply -f -

Since this is a relatively common use case, the same functionality is available using ko apply:

ko apply -f config/

NB: This requires that kubectl is available.

ko delete

To teardown resources applied using ko apply, you can run ko delete:

ko delete -f config/

This is purely a convenient alias for kubectl delete, and doesn't perform any builds, or delete any previously built images.

Frequently Asked Questions

How can I set ldflags?

Using -ldflags is a common way to embed version info in go binaries (In fact, we do this for ko!). Unfortunately, because ko wraps go build, it's not possible to use this flag directly; however, you can use the GOFLAGS environment variable instead:

GOFLAGS="-ldflags=-X=main.version=1.2.3" ko build .

How can I set multiple ldflags?

Currently, there is a limitation that prevents setting multiple arguments in ldflags using GOFLAGS. Using -ldflags multiple times also does not work. In this case, it works best to use the builds section in the .ko.yaml file.

Why are my images all created in 1970?

In order to support reproducible builds, ko doesn't embed timestamps in the images it produces by default.

However, ko does respect the SOURCE_DATE_EPOCH environment variable, which will set the container image's timestamp accordingly.

Similarly, the KO_DATA_DATE_EPOCH environment variable can be used to set the modtime timestamp of the files in KO_DATA_PATH.

For example, you can set the container image's timestamp to the current timestamp by executing:

export SOURCE_DATE_EPOCH=$(date +%s)

or set the timestamp of the files in KO_DATA_PATH to the latest git commit's timestamp with:

export KO_DATA_DATE_EPOCH=$(git log -1 --format='%ct')

Can I build Windows containers?

Yes, but support for Windows containers is new, experimental, and tenuous. Be prepared to file bugs. 🐛

The default base image does not provide a Windows image. You can try out building a Windows container image by setting the base image to a Windows base image and building with --platform=windows/amd64 or --platform=all:

For example, to build a Windows container image for ko, from within this repo:

ko build ./ --platform=windows/amd64

This works because the ko image is configured in .ko.yaml to be based on a golang base image, which provides platform-specific images for both Linux and Windows.

Known issues 🐛

  • Symlinks in kodata are ignored when building Windows images; only regular files and directories will be included in the Windows image.

Can I optimize images for eStargz support?

Yes! Set the environment variable GGCR_EXPERIMENT_ESTARGZ=1 to produce eStargz-optimized images.

Does ko support autocompletion?

Yes! ko completion generates a Bash/Zsh/Fish/PowerShell completion script. Its help output explains how to load it:

ko completion [bash|zsh|fish|powershell] --help

Or, you can source it directly:

source <(ko completion)

Does ko work with Kustomize?

Yes! ko resolve -f - will read and process input from stdin, so you can have ko easily process the output of the kustomize command.

kustomize build config | ko resolve -f -

Does ko integrate with other build and development tools?

Oh, you betcha. Here's a partial list:

Does ko work with OpenShift Internal Registry?

Yes! Follow these steps:

  • Log in to the internal registry:

oc registry login --to=$HOME/.docker/config.json

  • Create a namespace where you will push your images, e.g. ko-images.
  • Set KO_DOCKER_REPO to publish images to the internal registry:

export KO_DOCKER_REPO=$(oc registry info --public)/ko-images

Acknowledgements

This work is based heavily on experience from having built the Docker and Kubernetes support for Bazel. That work was presented here.

Discuss

Questions? Comments? Ideas? Come discuss ko with us in the #ko-project channel on the Kubernetes Slack! See you there!

Author: Google
Source Code: https://github.com/google/ko 
License: Apache-2.0 license

#go #golang #kubernetes #deploy #container 

Veronica Roob

Deployer: A Deployment Tool Written in PHP

Deployer

A deployment tool written in PHP with support for popular frameworks out of the box.


See deployer.org for more information and documentation.

Features

  • Automatic server provisioning.
  • Zero downtime deployments.
  • Ready to use recipes for most frameworks.

Additional resources

License

MIT

Author: deployphp
Source Code: https://github.com/deployphp/deployer
License: MIT License

#php #deploy 

Awesome Rust

Docker Rustup: Automatically Built Images for Rust-lang with Rustup

rustup

Automatically built images on the Docker Store and Docker Hub for rust-lang with musl added, using rustup, "the ultimate way to install Rust".

tag changed: all3 -> all

note:

  1. Image builds are triggered by automated builds on cloud.docker.com when the "build branch" is updated by build.sh
  2. Please check liuchong/rustup tags on the store instead of Build Details on the hub
  3. The "build branch" and its "tags" are meaningless on their own; they are just docker images (with stable/version tags) used for building
  4. The "version tags" are available from 1.15.0
  5. The stable/beta/nightly tags do not have the package "musl-tools" or the target "x86_64-unknown-linux-musl" installed by default

Usage

Images

pull the images:

> docker pull liuchong/rustup
> docker pull liuchong/rustup:musl

the tags are:

use the image

just set up the Dockerfile:

FROM liuchong/rustup:stable
...

or you may prefer to make a musl static build:

# you can also use "latest", which is the same as "musl".
docker run -v $PWD:/build_dir -w /build_dir -t liuchong/rustup:musl cargo build --release
# or, you may want to use nightly channel and fix the ownership and remove container after run as below:
docker run --rm -v $PWD:/build_dir -w /build_dir -t liuchong/rustup:musl sh -c "rustup run nightly cargo build --release && chown -R $(id -u):$(id -g) target"

then, you can write a Dockerfile like this and build your app image (so the image will be very small):

FROM scratch
ADD target/x86_64-unknown-linux-musl/release/your-app /
CMD ["/your-app"]
# or something like this:
# CMD ["/your-app", "--production"]

Build script

# Use the automatically checked version from the website for current stable builds:
./build.sh
# Use a specified stable version from command line:
./build.sh 1.21.0
# Do not build a version tag; just pass a string that does not fit the version pattern
# as the first argument:
./build.sh no-version
./build.sh foo

Download Details:
Author: liuchong
Source Code: https://github.com/liuchong/docker-rustup
License: MIT License

#rust  #rustlang  #deploy #docker 

Greg Jackson

WORKSHOP: How to Create and Deploy your first Rust Web App

Creating web apps is one way you can use Rust. With frameworks like Rocket you can create fast and secure applications. Then, deploying your project is a task you can complete with any CI/CD tool that supports Rust and the platform you choose. This workshop will guide you through the whole process.

Creating web apps is one way you can use Rust. There are frameworks like Rocket that you can use for creating fast and secure applications. After that, deploying your project to a cloud platform like Heroku is a task you can complete with any CI/CD tool that supports Rust and the platform you choose. GitLab CI is a tool that works well with Rust projects.

This workshop will take you through the process of creating your first Rust web app and deploying to a platform like Heroku.

Outline: 
1. Basics of Rocket 
2. Directory structure 
3. Templates 
4. Routing 
5. Your first web app 
6. A new git repository 
7. Configuration 
8. Heroku 
9. GitLab CI 
10. Up and running

#rust #deploy

Hoang Ha

4 Ways to Deploy a Front-End Application to a Server (Free)

4 ways to DEPLOY a frontend application (2021)

The final task we always have to do after finishing any web application is deployment. We need to deploy the web application to a server so that users can access it through a domain name.

We will explore four ways to deploy a front-end application:
- Netlify
- Firebase 
- Github Pages
- Digital Ocean

#deploy #deployfe #holetex #githubpages #netlify #firebase #digitalocean


Deploy Node Application and PostgreSql Database to Heroku

Deploying a full stack application is one of the most rewarding things you can do in full stack development. Deploying to Heroku looks daunting at first, so I have put together all the steps you need to follow to deploy a working full stack application. It is quick and straightforward.

Creating the PostgreSQL database and setting up the Express application are out of this blog's scope.

The first thing you should do is set environment variables in a .env file. The variables required for a basic application are the following:

jwtSecret=lucky11
PG_USER= postgres_user
PG_PASSWORD= password
PG_HOST= localhost
PG_PORT= 5432
PG_DATABASE= database_name
NODE_ENV=development

Make sure that you gitignore the .env file, as keeping these environment variables open and accessible is one of the most unsafe things you can do today.

#nodejs #deploy #heroku #psql

Jake Luettgen

IaaS, PaaS, and SaaS: A Comparison

Introduction

Businesses looking to establish themselves online have numerous options. They can not only choose the providers, but the service they wish to receive. Cloud computing has become an extremely popular hosting method due to flexibility in price, features, and support/management. Within the concept of cloud computing, you are typically presented with three distinct categories of services offered:

  • Infrastructure-as-a-Service
  • Platform-as-a-Service
  • Software-as-a-Service

Each has its own set of benefits and drawbacks, use cases, and appeal depending on the project at hand. Let’s explore each of them and see what they are and how they differ.

IaaS (Infrastructure-as-a-Service)

What is it?

IaaS or Infrastructure-as-a-Service is a cloud computing service where the consumer receives the use of a virtual machine (VM). The IaaS provider specifies the amount of hardware performance/capacity to allocate to the VM. It also starts the VM, and boots it with the chosen operating system (OS). The client only ever accesses or has to be concerned with the Multi-Tenant VM; all the physical hardware is monitored and serviced by the provider (hence the infrastructure being the service provided).

#tutorials #administration #automation #ci/cd #cloud #code #containerization #containers #deploy #deployment #developer #development #devops #iaas #iac #infrastructure #infrastructure as code #kubernetes #linux #open source #organization #paas #platform #provisioning #remote management #saas #virtual machines #virtualization


How to Deploy Angular on Apache Remote Server Example - Use Vultr Hosting » grokonez

https://grokonez.com/frontend/angular/angular-deployment/how-to-deploy-angular-on-apache-remote-server-example-use-vultr-hosting

How to Deploy Angular on Apache Remote Server Example – Use Vultr Hosting

In this tutorial, we show how to deploy an Angular client in production mode on an Apache remote server with Vultr hosting.


Technologies

- Vultr Hosting - Apache Server - Angular


Objectives

Deploy the Angular client on an Apache remote server:

  • Normal deployment
  • Sub-folder deployment

How to achieve it?

Start with a production build: ng build --prod. Then copy the output folder (dist/ by default) to the Apache server.


What are the production --prod optimizations?

More at:

https://grokonez.com/frontend/angular/angular-deployment/how-to-deploy-angular-on-apache-remote-server-example-use-vultr-hosting


#angular #apache #deploy

Ferenc Almasi

How to Connect CircleCI With Netlify

Netlify, the all-in-one platform for automating the deployment of modern applications, is a great tool for getting our sites live as fast as possible.

The only problem is that we don’t really have the flexibility to run a complete test suite beforehand without much hassle. Yet, this is essential to keep our applications bug-free and prevent breaking our site accidentally.

Luckily, there is a solution. And we don’t have to leave Netlify. Instead, we will connect CircleCI to set up the testing process to our own taste. The rest will be handled by Netlify. Let’s set up a new project to start from scratch.



#web-development #frontend #deploy #ci #circleci #webtips

Kole Haag

A High-Level Overview of Load Balancing Algorithms

Introduction

Load balancing is the process of evenly distributing your network load across several servers. It helps in scaling the demand during peak traffic hours by helping spread the work uniformly. The server can be present in a cloud or a data center or on-premises. It can be either a physical server or a virtual one. Some of the main functions of a load balancer (LB) are:

  • Routes data efficiently
  • Prevents server overloading
  • Performs health checks for the servers
  • Provisions new server instances in the face of large traffic

Types of Load Balancing Algorithms

In the seven-layer OSI model, load balancing occurs from layers 4 (transport layer) to 7 (application layer).

[Diagram: the seven OSI layers, with arrows indicating where application and network load balancers operate]

The different types of LB algorithms are effective in distributing the network traffic based on how the distribution of traffic looks, i.e., whether it’s network layer traffic or application layer traffic.

  • The network layer traffic is routed by LB based on TCP port, IP addresses, etc.
  • The application layer traffic is routed based on various additional attributes like HTTP header, SSL, and it even provides content switching capabilities to LBs.

Network Layer Algorithms

1. Round-robin algorithm

The traffic load is distributed to the first available server, and then that server is pushed down into the queue. If the servers are identical and there are no persistent connections, this algorithm can prove effective. There are two major types of round-robin algorithms:

  • Weighted round-robin: If the servers are not of identical capacity, then this algorithm can be used to distribute load. Weights or efficiency parameters are assigned to all the servers in a pool and, based on those, in a similar cyclic fashion, load is distributed.
  • Dynamic round-robin: The weights assigned to a server to identify its capacity can also be calculated at runtime. Dynamic round-robin helps in sending requests to a server based on its runtime weight.
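
As a minimal illustration (a hypothetical Ruby sketch, not tied to any particular load balancer product), weighted round-robin can be modeled by repeating each server in the cycle in proportion to its weight:

# Naive weighted round-robin: each server appears in the cycle as many
# times as its weight, so heavier servers receive proportionally more requests.
class WeightedRoundRobin
  def initialize(weights) # e.g. { 'app1' => 3, 'app2' => 1 }
    @cycle = weights.flat_map { |server, weight| [server] * weight }
    @index = 0
  end

  def next_server
    server = @cycle[@index % @cycle.size]
    @index += 1
    server
  end
end

lb = WeightedRoundRobin.new('app1' => 3, 'app2' => 1)
4.times.map { lb.next_server } # => ["app1", "app1", "app1", "app2"]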

2. Least-connections algorithm

This algorithm calculates the number of active connections per server during a certain time and directs the incoming traffic to the server with the least connections. This is super helpful in the scenarios where a persistent connection is required.

3. Weighted least-connections algorithm

This is similar to the least-connections algorithm above but apart from the number of active connections to a server, it also keeps in mind the server capacity.

4. Least-response-time algorithm

This is again similar to the least-connections algorithm, but it also considers the response time of servers. The request is sent to the server with the least response time.

5. Hashing algorithm

The different request parameters are used to determine where the request will be sent. The different types of algorithms based on this are:

  • Source/destination IP hash: The source and destination IP addresses are hashed together to determine the server that will serve the request. In case of a dropped connection, the same request can be redirected to the same server upon retry.
  • URL hash: The request URL is used for performing hashing, and this method helps in reducing duplication of server caches by avoiding storing the same request object in many caches.

6. Miscellaneous algorithms

There are a few other algorithms as well, which are as follows:

  • Least-bandwidth algorithm: The server with the least consumption of bandwidth in the last 14 minutes is selected by the load balancer.
  • Least-packets algorithm: Similar to above, the server that is transmitting the least number of packets is chosen by the load balancer to redirect traffic.
  • Custom-load algorithm: The load balancer selects the server based on the current load on it, which can be determined by memory, processing unit usage, response time, number of requests, etc.

#devops #backend #programming #deploy #startup

Olen Predovic

Deploy Data Factory from GIT to Azure with ARM Template


You may have noticed that the export feature on Azure resource groups doesn't handle Data Factory well. We can't completely export a Data Factory as an ARM template; it fails.

You probably know that you don't need to worry about this too much. You can link the data factory with a GitHub repo and get source code version control. Version control is, after all, even better than a simple export to an ARM template.

Even so, we will still need an ARM template to deploy this data factory to another Azure environment whenever we like. The good news is that this template is not only easy, but exactly the same for any data factory we would like, because it just needs to point to a GitHub repo, and all the data factory code will come from there.

The main part of the ARM template is the resource definition and the secret of the data factory resource definition is how to define a data factory already linked with github. Take a look:

{
  "resources": [
    {
      "type": "Microsoft.DataFactory/factories",
      "apiVersion": "[parameters('apiVersion')]",
      "name": "[parameters('name')]",
      "location": "[parameters('location')]",
      "identity": {
        "type": "SystemAssigned"
      },
      "properties": {
        "repoConfiguration": "[variables('repoConfiguration')]"
      }
    }
  ]
}

It's a regular resource definition, except for the properties: that's the secret. We set the repoConfiguration property to an object with all the repository configuration the data factory needs.

As you may notice, this property references a variable. As a result, we need to define this variable before the resources element.

{
    "variables": {
        "repoConfiguration": {
            "type": "FactoryVSTSConfiguration",
            "accountName": "[parameters('gitAccountName')]",
            "repositoryName": "[parameters('gitRepositoryName')]",
            "collaborationBranch": "[parameters('gitBranchName')]",
            "rootFolder": "[parameters('gitRootFolder')]",
            "projectName": "[parameters('gitProjectName')]"
        }
    }
}
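
Each parameters('...') reference used above also needs a matching entry in the template's parameters section. A minimal sketch of what that could look like (the default values shown are hypothetical):

{
    "parameters": {
        "apiVersion": { "type": "string", "defaultValue": "2018-06-01" },
        "name": { "type": "string" },
        "location": { "type": "string" },
        "gitAccountName": { "type": "string" },
        "gitRepositoryName": { "type": "string" },
        "gitBranchName": { "type": "string", "defaultValue": "main" },
        "gitRootFolder": { "type": "string", "defaultValue": "/" },
        "gitProjectName": { "type": "string" }
    }
}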

#blogs #uncategorized #arm #azure #datafactory #dataplatform #deploy
