Lawrence Lesch

Deploy static websites and single page apps to AWS S3 and CloudFront

Scotty.js

As you probably noticed, Scotty.js is no longer maintained. Working on Scotty.js was fun, but AWS has since released AWS Amplify, and tools such as Scotty.js are no longer needed. Please take a look at the AWS Amplify Console; it's an excellent tool for hosting static websites and single-page applications.


Deploy static websites or folders to AWS S3 with a single command   

Install

Scotty.js is available on npm. Install it as a global dependency so you can use the scotty command anywhere:

npm install scottyjs --global

Use

Beam me up, Scotty


To deploy a static folder to AWS S3 run:

scotty {options}

or

beam-me-up {options}

Available options

  • --help or -h - Print this help
  • --version or -v - Print the current version
  • --noclipboard or -n - Do not copy the URL to clipboard (default: false)
  • --quiet or -q - Suppress output when executing commands (default: false)
  • --website or -w - Set uploaded folder as a static website (default: false)
  • --spa - Set uploaded folder as a single page app (default: false)
  • --source or -s - Source of the folder that will be uploaded (default: current folder)
  • --bucket or -b - Name of the S3 bucket (default: name of the current folder)
  • --prefix or -p - Prefix on the S3 bucket (default: the root of the bucket)
  • --region or -r - AWS region to upload the files to (default: the saved region if one exists; otherwise you will be prompted to choose one)
  • --force or -f - Update the bucket without asking (default: false; the forced region can be overridden with -r)
  • --update or -u - Update an existing bucket (default: false)
  • --delete or -d - Delete an existing bucket (default: false)
  • --nocdn or -c - Disable CloudFront handling (default: false)
  • --urlonly or -o - Only output the resulting URL, CDN or S3 according to options (default: false)
  • --expire or -e - Delete objects in the bucket older than n days (default: no expiration)
  • --profile or -a - AWS profile to be used (default: 'default')
  • --empty or -y - Empty the bucket (delete all objects before uploading files) (default: false)
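
For example, a typical static-website deployment might combine several of these flags (the bucket and folder names below are placeholders):

scotty --website --source ./public --bucket my-site-bucket --region us-east-1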

Examples

Create React App application

Full tutorial: http://medium.com/@slobodan/single-command-deployment-for-single-page-apps-29941d62ef97

To deploy a CRA app, run npm run build in your project root folder to create a production build.

Then deploy the build folder using the following command:

scotty --spa --source ./build

Or, if you want to specify the bucket name, run:

scotty --spa --source ./build --bucket some-bucket-name

With the --spa flag, Scotty sets the required redirects for your single page app, so your app can use pushState out of the box.

Shared bucket application

To deploy multiple apps to a single bucket you can make use of the --prefix option. This comes in handy when your CI system deploys each branch to a staging system under its own pathname. E.g., the master branch should go to the bucket root (/), so you do not set a prefix; the feature/fancy-stuff branch should go to the bucket path feature/fancy-stuff, so you add that as the prefix. Here is a command-line example:

# deploy your master branch build to bucket root
scotty --source ./build --bucket some-bucket-name
# deploy your branch build to the branch name on the bucket
scotty --source ./build --bucket some-bucket-name --prefix your/branch

Test

We use Jasmine for unit and integration tests. Unless there is a very compelling reason to use something different, please continue using Jasmine for tests. The existing tests are in the spec folder. Here are some useful command shortcuts:

Run all the tests:

npm test

Run only some tests:

npm test -- filter=prefix

Get detailed hierarchical test name reporting:

npm test -- full

Download Details:

Author: Stojanovic
Source Code: https://github.com/stojanovic/scottyjs 
License: MIT license

#javascript #aws #deployment 

Rupert Beatty

Deployer: Deployer Is A Free and Open Source Deployment tool

Deployer

Deployer is a PHP application deployment system powered by Laravel 5.5, written and maintained by Stephen Ball.

Check out the releases, license, screenshots and contribution guidelines.

See the wiki for information on system requirements, installation & upgrade instructions and answers to common questions.

What it does

  • Deploys applications to multiple servers accessible via SSH
  • Clones your project's git repository
  • Installs composer dependencies
  • Runs arbitrary bash commands
  • Gracefully handles failure in any of these steps
  • Keeps a number of previous deployments
  • Monitors that cronjobs are running
  • Allows deployments to be triggered via a webhook

What it doesn't do

Author: REBELinBLUE
Source Code: https://github.com/REBELinBLUE/deployer 
License: MIT license

#laravel #php #deployment 

Hermann Frami

Serverless Fast Deploy Plugin: Lightning-Fast Serverless Deployments

Serverless Fast Deploy Plugin  

Fast Serverless deployments for large packages

Requirements:

  • Serverless v1.12.x or higher.
  • AWS provider

How it works

I found that while working with Python libraries such as NumPy and Pandas, my deploys became very slow and expensive (I work off a mobile data plan) due to the increased package size. This plugin deploys a specialized Lambda that allows you to deploy only the files that are most likely to change. It does this by merging the incoming files with the latest existing package on S3. So now when I deploy a change, I am sending a few KB across the wire each time, not 50 MB.

Caveats

A note about merging the update package with the base package

My first attempt was to just use the latest existing deployment package on S3, unpack it and create a new package with the updated files. This was a bit "slow", so now I create a base package, which is the full previous deployment package without the files described by the custom.fastDeploy.include property. This means I can simply append the new files, resulting in an even faster deploy. The unfortunate side effect is that if you change the custom.fastDeploy.include property, you need to do a full deployment before your next FastDeploy.

The creation of the base deployment package also means that the first FastDeploy will be slightly slower than subsequent deployments.

Custom deployment bucket

At the moment this plugin bypasses all of the standard deployment lifecycle stages, so I am not yet able to get hold of the auto generated deployment bucket. As such this plugin only works if you have created a custom deployment bucket and configured it via the provider.deploymentBucket property.

IAM Role

The FastDeploy Lambda requires the following permissions on the deployment bucket. This can either be added to the service's default role, or you can create a new role and configure it via the custom.fastDeploy.role property.

- Effect: Allow
  Action:
    - s3:GetObject
    - s3:PutObject
  Resource: arn:aws:s3:::aronim-serverless/*
- Effect: Allow
  Action:
    - s3:ListBucket
  Resource: arn:aws:s3:::aronim-serverless

Updates to the CloudFormation configuration require a full deployment

Much like Serverless's function deployment feature, any update to the CloudFormation stack requires a full deployment.

Setup

Install via npm in the root of your Serverless service:

npm install serverless-plugin-fastdeploy --save-dev

Add the plugin to the plugins array in your serverless.yml:

plugins:
  - serverless-plugin-fastdeploy

Run

sls fastdeploy

Configuration

The custom.fastDeploy.include property describes which files to include in the update package and exclude from the base package. It can be an array if you are working in a single-module project, or an object if you are working with a multi-module project.

Available custom properties:


custom:
  fastDeploy:
    memorySize: 512    # Optional. Default: 512MB
    timeout: 30        # Optional. Default: 30sec
    include:           # Required. No Default
      - src/*.js       # Example
    role:              # Optional. Uses service default role if one is provided
      - FastDeployRole # Example

A full example serverless.yml:

service: ServerlessFastDeployExample

plugins:
  - serverless-plugin-fastdeploy

provider:
  ...
  role: DefaultRole
  deploymentBucket: aronim-serverless

custom:
  fastDeploy:
    include:
      - package_one/**
      - package_two/**

######      
# OR #      
###### 
 
custom:
  fastDeploy:
    include:
      ".": service_one/**
      "../../modules/module-two": module_two/**     

resources:
  Resources:
    DefaultRole:
      Type: AWS::IAM::Role
      Properties:
        Path: /
        RoleName: ${self:service}-${self:provider.stage}
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: ${self:service}-${self:provider.stage}
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: Allow
                  Action:
                    - logs:CreateLogGroup
                    - logs:CreateLogStream
                    - logs:PutLogEvents
                  Resource: arn:aws:logs:${self:provider.region}:*:log-group:/aws/lambda/*:*:*
                - Effect: Allow
                  Action:
                    - s3:GetObject
                    - s3:PutObject
                  Resource: arn:aws:s3:::aronim-serverless/*
                - Effect: Allow
                  Action:
                    - s3:ListBucket
                  Resource: arn:aws:s3:::aronim-serverless     

Cost

Since we are deploying an additional Lambda, there are some negligible cost implications. The default memory allocated to the FastDeploy Lambda is 512MB, but this can be increased or decreased using the custom.fastDeploy.memorySize property.

Acknowledgements

A big thank you to FidelLimited; I blatantly plagiarized their WarmUp plugin as the basis for the FastDeploy Lambda :-) As they say, "Mimicry is the highest form of flattery".

Contribute

Help us make this plugin better and future-proof.

  • Clone the code
  • Install the dependencies with npm install
  • Create a feature branch git checkout -b new_feature
  • Lint with standard npm run lint

Author: Aronim
Source Code: https://github.com/aronim/serverless-plugin-fastdeploy 
License: MIT license

#serverless #plugin #deployment 

Veronica Roob

Awesome PHP: Libraries for Project Deployment

Deployment

Libraries for project deployment.

  • Deployer - A deployment tool.
  • Envoy - A tool to run SSH tasks with PHP.
  • Rocketeer - A fast and easy deployer for the PHP world.

Author: ziadoz
Source Code: https://github.com/ziadoz/awesome-php
License: WTFPL License

#php #deployment 

Hermann Frami

Serverless Ding

serverless-ding

A serverless plugin that outputs a bell character to Terminal after sls deploy. It will only work if the audible bell in Terminal is turned on.
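
The "ding" is just the ASCII BEL character; you can check that your terminal's audible bell is enabled with a quick one-liner:

printf '\a'  # should produce an audible ding if the terminal bell is on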

Why

Because running sls deploy takes just long enough to go do something else and forget that you ran sls deploy. Figured a notification would be nice.

Getting Started

Prerequisites

Make sure you have the following installed before starting:

Installing

From npm (recommended)

npm install serverless-ding --save-dev

Then make the following edits to your serverless.yaml file:

Add the plugin.

plugins:
  - serverless-ding

Author: Captainsidd
Source Code: https://github.com/captainsidd/serverless-ding 
License: MIT license

#serverless #aws #deployment 

Hermann Frami

Serverless Deployment Bucket

serverless-deployment-bucket  

Create and configure the custom Serverless deployment bucket.

Purpose

By default, Serverless creates a bucket with a generated name like <service name>-serverlessdeploymentbuck-1x6jug5lzfnl7 to store your service's stack state. This can lead to many old deployment buckets lying around in your AWS account, and to your service having more than one bucket created (only one bucket is actually used).

Serverless' AWS provider can be configured to customize aspects of the deployment bucket, such as specifying server-side encryption and a custom deployment bucket name. However, server-side encryption is only applied to the objects that Serverless puts into the bucket and is not applied on the bucket itself. Furthermore, if the bucket name you specify doesn't exist, you will encounter an error like:

Serverless Error ---------------------------------------

  Could not locate deployment bucket. Error: The specified bucket does not exist

This plugin will create your custom deployment bucket if it doesn't exist, and optionally configure the deployment bucket to apply server-side encryption. To support the AWS S3 API for encryption you can configure this plugin with the following:

For AES256 server side encryption support:

  deploymentBucket:
    name: your-custom-deployment-bucket
    serverSideEncryption: AES256

For aws:kms server side encryption support:

  deploymentBucket:
    name: your-custom-deployment-bucket
    serverSideEncryption: aws:kms
    kmsKeyID: your-kms-key-id
    
For bucket access logging support:

  deploymentBucket:
    name: your-custom-deployment-bucket
    accessLog:
      bucket: "the-already-existing-bucket"
      prefix: "prefix-to-use-for-these-logs"

This plugin also provides the optional ability to enable versioning of bucket objects; however, this is not enabled by default since Serverless tends to keep its own copies and versions of state.

Install

npm install serverless-deployment-bucket --save-dev

Configuration

Add the plugin to your serverless.yml:

plugins:
  - serverless-deployment-bucket

Configure the AWS provider to use a custom deployment bucket:

provider:
  deploymentBucket:
    name: your-custom-deployment-bucket
    serverSideEncryption: AES256

Optionally add custom configuration properties:

custom:
  deploymentBucket:
    versioning: true
    accelerate: true
    blockPublicAccess: true
    tags:
      - Key: Environment
        Value: production
Property          | Required | Type    | Default | Description
versioning        | false    | boolean | false   | Enable versioning on the deployment bucket
accelerate        | false    | boolean | false   | Enable acceleration on the deployment bucket
enabled           | false    | boolean | true    | Enable this plugin
policy            | false    | string  |         | Bucket policy as JSON
tags              | false    | array   |         | Bucket tags as an array of key:value objects
blockPublicAccess | false    | boolean | false   | Block all public access for the deployment bucket

Usage

Configuration of your serverless.yml is all you need.

There are no custom commands, just run: sls deploy

Author: Mikesouza
Source Code: https://github.com/mikesouza/serverless-deployment-bucket 
License: MIT license

#serverless #deployment 

Awesome Rust

Heroku Buildpack for Rust

This is a Heroku buildpack for Rust with support for cargo and rustup. Features include:

  • Caching of builds between deployments.
  • Automatic updates to the latest stable Rust by default.
  • Optional pinning of Rust to a specific version.
  • Support for export so that other buildpacks can access the Rust toolchain.
  • Support for compiling Rust-based extensions for projects written in other languages.

Example projects

Here are several example projects:

Using this buildpack

To deploy an application to Heroku, we recommend installing the Heroku CLI.

If you're creating a new Heroku application, cd to the directory containing your code, and run:

heroku create --buildpack emk/rust

This will only work if your application has a Cargo.toml and uses git. If you want to set a particular name for your application, see heroku create --help first.

To use this as the buildpack for an existing application, run:

heroku buildpacks:set emk/rust

You will also need to create a Procfile pointing to the release version of your application, and commit it to git:

web: ./target/release/hello

...where hello is the name of your binary.

To deploy your application, run:

git push heroku master

Running Diesel migrations during the release phase

This will install the diesel CLI at build time and make it available in your dyno. Migrations will run whenever a new version of your app is released. Add the following line to your RustConfig:

RUST_INSTALL_DIESEL=1

and this one to your Procfile

release: ./target/release/diesel migration run

Specifying which version of Rust to use

By default, your application will be built using the latest stable Rust. Normally, this is pretty safe: New stable Rust releases have excellent backwards compatibility.

But you may wish to use nightly Rust or to lock your Rust version to a known-good configuration for more reproducible builds. To specify a specific version of the toolchain, use a rust-toolchain file in the format rustup uses.

Note: if you previously specified a VERSION variable in RustConfig, that will continue to work, and will override a rust-toolchain file.
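
As a sketch, pinning is as simple as committing a one-line rust-toolchain file (the version below is only an example; any rustup channel name works):

echo "1.56.0" > rust-toolchain
git add rust-toolchain && git commit -m "Pin Rust toolchain"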

Combining with other buildpacks

If you have a project which combines both Rust and another programming language, you can insert this buildpack before your existing one as follows:

heroku buildpacks:add --index 1 emk/rust

If you have a valid Cargo.toml in your project, this is all you need to do. The Rust buildpack will run first, and your existing buildpack will run second.

But if you only need Rust to build a particular Ruby gem, and you have no top-level Cargo.toml file, you'll need to let the buildpack know to skip the build stage. You can do this by adding the following line to RustConfig:

RUST_SKIP_BUILD=1

Customizing build flags

If you want to change the cargo build command, you can set the RUST_CARGO_BUILD_FLAGS variable inside the RustConfig file.

RUST_CARGO_BUILD_FLAGS="--release -p some_package --bin some_exe --bin some_bin_2"

The default value of RUST_CARGO_BUILD_FLAGS is --release. If the variable is not set in RustConfig, the default value will be used to build the project.

Using the edge version of the buildpack

The emk/rust buildpack from the Heroku Registry contains the latest stable version of the buildpack. If you'd like to use the latest buildpack code from this GitHub repository, you can set your buildpack to the GitHub URL:

heroku buildpacks:set https://github.com/emk/heroku-buildpack-rust

Development notes

If you need to tweak this buildpack, the following information may help.

Testing with Docker

To test changes to the buildpack using the included docker-compose-test.yml, run:

./test_buildpack

Then make sure there are no Rust-related *.so files getting linked:

ldd heroku-rust-cargo-hello/target/release/hello

This uses the Docker image heroku/cedar, which allows us to test in an official Cedar-like environment.

We also run this test automatically on Travis CI.

Download Details:
Author: emk
Source Code: https://github.com/emk/heroku-buildpack-rust
License:

#rust  #rustlang  #deployment #heroku #buildpack 

Awesome Rust

Wasm Template for Rust Hosting without Npm-deploy

Wasm template for Rust hosting without npm-deploy on GitHub Pages using a Travis script

It automatically hosts your wasm projects on gh-pages using a Travis script on the latest commit.

Requirements

Steps:

For building:

In www/package.json, change:

  "dependencies": {
    "wasm-template-rust": "file:../pkg"
  },

into:

  "dependencies": {
    "YOUR-PROJECT-NAME-SAME-AS-IN-CARGO.toml": "file:../pkg"
  },
  • Run wasm-pack build inside your project directory
  • Run npm install inside the www folder
  • Next, modify www/index.js to import your PROJECT instead of the wasm-template-rust package
  • Again run npm install inside the www folder (just to be sure)
  • Finally run npm run start inside www and visit http://localhost:8080 to see the results

For deployment:

The template comes with a preconfigured .travis.yml but you will still need to :

  • Create a new branch by the name gh-pages
  • GitHub Pages should be enabled by default, but if not, go to Settings -> GitHub Pages and enable it on your gh-pages branch. You will also find the link to your to-be-hosted page there
  • Make a personal access token (only the token; no need for the command line here)
  • Next we need to put this token into our Travis settings: go to More options -> Settings -> Environment Variables and enter the token value (the generated token code) with the name GITHUB_TOKEN
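
If you use the Travis CLI instead of the web UI, the same variable can be set from the command line (the token value is a placeholder):

travis env set GITHUB_TOKEN "<your-token>"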

Additional:

Download Details:
Author: sn99
Source Code: https://github.com/sn99/wasm-template-rust
License: View license

#rust  #rustlang  #deployment  #wasm 

Awesome Rust

Docker Images for Compiling Static Rust Binaries using Musl Cross

rust-musl-cross

🚀 Help me to become a full-time open-source developer by sponsoring me on GitHub

Docker images for compiling static Rust binaries using musl-cross-make, inspired by rust-musl-builder

Prebuilt images

Currently we have the following prebuilt Docker images on Docker Hub, supporting the x86_64 (amd64) and aarch64 (arm64) architectures.

Rust toolchain | Cross Compile Target             | Docker Image Tag
stable         | aarch64-unknown-linux-musl       | aarch64-musl
stable         | arm-unknown-linux-musleabi       | arm-musleabi
stable         | arm-unknown-linux-musleabihf     | arm-musleabihf
stable         | armv5te-unknown-linux-musleabi   | armv5te-musleabi
stable         | armv7-unknown-linux-musleabi     | armv7-musleabi
stable         | armv7-unknown-linux-musleabihf   | armv7-musleabihf
stable         | i586-unknown-linux-musl          | i586-musl
stable         | i686-unknown-linux-musl          | i686-musl
stable         | mips-unknown-linux-musl          | mips-musl
stable         | mipsel-unknown-linux-musl        | mipsel-musl
stable         | mips64-unknown-linux-muslabi64   | mips64-muslabi64
stable         | mips64el-unknown-linux-muslabi64 | mips64el-muslabi64
nightly        | powerpc64le-unknown-linux-musl   | powerpc64le-musl
stable         | x86_64-unknown-linux-musl        | x86_64-musl

To use armv7-unknown-linux-musleabihf target for example, first pull the image:

docker pull messense/rust-musl-cross:armv7-musleabihf

Then you can do:

alias rust-musl-builder='docker run --rm -it -v "$(pwd)":/home/rust/src messense/rust-musl-cross:armv7-musleabihf'
rust-musl-builder cargo build --release

This command assumes that $(pwd) is readable and writable. It will output binaries for the armv7-unknown-linux-musleabihf target. At the moment, it doesn't attempt to cache libraries between builds, so this is best reserved for making final release builds.
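
Assuming the default cargo layout, the cross-compiled binary then lands under the target directory on the host (a path sketch; the binary name is whatever your crate produces):

ls target/armv7-unknown-linux-musleabihf/release/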

How it works

rust-musl-cross uses musl-libc and musl-gcc (built with the help of musl-cross-make, which makes them easy to compile), together with the new rustup target support.

Use beta/nightly Rust

Currently we install stable Rust by default, if you want to switch to beta/nightly Rust, you can do it by extending from our Docker image, for example to use beta Rust for target x86_64-unknown-linux-musl:

FROM messense/rust-musl-cross:x86_64-musl
RUN rustup update beta && \
    rustup target add --toolchain beta x86_64-unknown-linux-musl

Strip binaries

You can use the musl-strip command inside the image to strip binaries, for example:

docker run --rm -it -v "$(pwd)":/home/rust/src messense/rust-musl-cross:armv7-musleabihf musl-strip /home/rust/src/target/release/example

Download Details:
Author: messense
Source Code: https://github.com/messense/rust-musl-cross
License: View license

#rust  #rustlang  #deployment  #docker 

Awesome Rust

Cargo Chef: Speed Up Rust Docker Builds using Docker Layer Caching

How To Install

You can install cargo-chef from crates.io with

cargo install cargo-chef --locked

How to use

:warning: cargo-chef is not meant to be run locally
Its primary use-case is to speed up container builds by running BEFORE the actual source code is copied over. Don't run it on existing codebases to avoid having files being overwritten.

cargo-chef exposes two commands: prepare and cook:

cargo chef --help

cargo-chef

USAGE:
    cargo chef <SUBCOMMAND>

SUBCOMMANDS:
    cook       Re-hydrate the minimum project skeleton identified by `cargo chef prepare` and
               build it to cache dependencies
    prepare    Analyze the current project to determine the minimum subset of files (Cargo.lock
               and Cargo.toml manifests) required to build it and cache dependencies

prepare examines your project and builds a recipe that captures the set of information required to build your dependencies.

cargo chef prepare --recipe-path recipe.json

Nothing too mysterious going on here; you can examine the recipe.json file: it contains the skeleton of your project (e.g. all the Cargo.toml files with their relative paths, plus the Cargo.lock file, if available) and a few additional pieces of information.
In particular, it makes sure that all libraries and binaries are explicitly declared in their respective Cargo.toml files even if they can be found at the canonical default location (src/main.rs for a binary, src/lib.rs for a library).

The recipe.json is the equivalent of the Python requirements.txt file - it is the only input required for cargo chef cook, the command that will build out our dependencies:

cargo chef cook --recipe-path recipe.json

If you want to build in --release mode:

cargo chef cook --release --recipe-path recipe.json

You can leverage it in a Dockerfile:

FROM lukemathwalker/cargo-chef:latest-rust-1.56.0 AS chef
WORKDIR app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder 
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies - this is the caching Docker layer!
RUN cargo chef cook --release --recipe-path recipe.json
# Build application
COPY . .
RUN cargo build --release --bin app

# We do not need the Rust toolchain to run the binary!
FROM debian:buster-slim AS runtime
WORKDIR app
COPY --from=builder /app/target/release/app /usr/local/bin
ENTRYPOINT ["/usr/local/bin/app"]

We are using three stages: the first computes the recipe file, the second caches our dependencies and builds the binary, the third is our runtime environment.
As long as your dependencies do not change, the recipe.json file will stay the same, therefore the outcome of cargo chef cook --release --recipe-path recipe.json will be cached, massively speeding up your builds (up to 5x measured on some commercial projects).

Pre-built images

We offer lukemathwalker/cargo-chef as a pre-built Docker image equipped with both Rust and cargo-chef.

The tagging scheme is <cargo-chef version>-rust-<rust version>.
For example, 0.1.22-rust-1.56.0.
You can choose to get the latest version of either cargo-chef or rust by using:

  • latest-rust-1.56.0 (use latest cargo-chef with specific Rust version);
  • 0.1.22-rust-latest (use latest Rust with specific cargo-chef version). You can find all the available tags on Dockerhub.
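
For example, to pull one of these tags:

docker pull lukemathwalker/cargo-chef:latest-rust-1.56.0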

:warning: You must use the same Rust version in all stages
If you use a different Rust version in one of the stages caching will not work as expected.

Without the pre-built image

If you do not want to use the lukemathwalker/cargo-chef image, you can simply install the CLI within the Dockerfile:

FROM rust:1.56.0 AS chef 
# We only pay the installation cost once, 
# it will be cached from the second build onwards
RUN cargo install cargo-chef 
WORKDIR app

FROM chef AS planner
COPY . .
RUN cargo chef prepare  --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies - this is the caching Docker layer!
RUN cargo chef cook --release --recipe-path recipe.json
# Build application
COPY . .
RUN cargo build --release --bin app

# We do not need the Rust toolchain to run the binary!
FROM debian:buster-slim AS runtime
WORKDIR app
COPY --from=builder /app/target/release/app /usr/local/bin
ENTRYPOINT ["/usr/local/bin/app"]

Running the binary in Alpine

If you want to run your application using the alpine distribution you need to create a fully static binary.
The recommended approach is to build for the x86_64-unknown-linux-musl target using muslrust.
cargo-chef works for x86_64-unknown-linux-musl, but we are cross-compiling - the target toolchain must be explicitly specified.

A sample Dockerfile looks like this:

# Using the `rust-musl-builder` as base image, instead of 
# the official Rust toolchain
FROM clux/muslrust:stable AS chef
USER root
RUN cargo install cargo-chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Notice that we are specifying the --target flag!
RUN cargo chef cook --release --target x86_64-unknown-linux-musl --recipe-path recipe.json
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl --bin app

FROM alpine AS runtime
RUN addgroup -S myuser && adduser -S myuser -G myuser
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/app /usr/local/bin/
USER myuser
CMD ["/usr/local/bin/app"]

Benefits vs Limitations

cargo-chef has been tested on a few open-source projects and some commercial projects, but our testing has definitely not exhausted the range of possibilities when it comes to cargo build customisations, and we are sure that there are a few rough edges that will have to be smoothed out - please file issues on GitHub.

Benefits of cargo-chef:

A common alternative is to load a minimal main.rs into a container with Cargo.toml and Cargo.lock to build a Docker layer that consists of only your dependencies (more info here). This is fragile compared to cargo-chef which will instead:

  • automatically pick up all crates in a workspace (and new ones as they are added)
  • keep working when files or crates are moved around, which would instead require manual edits to the Dockerfile using the "manual" approach
  • generate fewer intermediate Docker layers (for workspaces)

Limitations and caveats:

  • cargo chef cook and cargo build must be executed from the same working directory. If you examine the *.d files under target/debug/deps for one of your projects using cat, you will notice that they contain absolute paths referring to the project target directory. If moved around, cargo will not leverage them as cached dependencies;
  • cargo build will build local dependencies (outside of the current project) from scratch, even if they are unchanged, due to the reliance of its fingerprinting logic on timestamps (see this long issue on cargo's repository);

Download Details:
Author: LukeMathWalker
Source Code: https://github.com/LukeMathWalker/cargo-chef
License: View license

#rust  #rustlang  #deployment  #docker 

Awesome Rust

Mini Docker Rust: Very Small Rust Docker Image

mini-docker-rust

Very small rust docker image.

This is an example project on how to build very small docker images for a rust project. The resulting image for a working hello world was about 6.01MB during my tests.

This repo tries to keep the Docker overhead to a minimum without sacrificing performance or taking on the usability implications of using FROM scratch. If you want to reduce the binary size further, you might be interested in johnthagen/min-sized-rust.

See for yourself

You don't need to install anything besides docker. Build with docker build -t mini-docker-rust . and run with docker run mini-docker-rust.
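
Spelled out as commands:

docker build -t mini-docker-rust .
docker run mini-docker-rust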

Annotated docker file

See Dockerfile.

Download Details:
Author: kpcyrd
Source Code: https://github.com/kpcyrd/mini-docker-rust
License: MIT License

#rust  #rustlang  #deployment 

Awesome Rust

Rust Musl Builder: Docker Container for Building Static Rust Binaries

rust-musl-builder: Docker container for easily building static Rust binaries

UPDATED: We are now running builds on GitHub, including scheduled builds of stable and beta every Thursday!

rustls now works well with most of the Rust ecosystem, including reqwest, tokio, tokio-postgres, sqlx and many others. The only major project which still requires libpq and OpenSSL is Diesel. If you don't need diesel or libpq:

  • See if you can switch away from OpenSSL, typically by using features in Cargo.toml to ask your dependencies to use rustls instead.
  • If you don't need OpenSSL, try cross build --target=x86_64-unknown-linux-musl --release to cross-compile your binaries for libmusl. This supports many more platforms, with less hassle!

What is this?

This image allows you to build static Rust binaries using diesel, sqlx or openssl. The resulting binaries can be distributed as single executable files with no dependencies, and they should work on any modern Linux system.

To try it, run:

alias rust-musl-builder='docker run --rm -it -v "$(pwd)":/home/rust/src ekidd/rust-musl-builder'
rust-musl-builder cargo build --release

This command assumes that $(pwd) is readable and writable by uid 1000, gid 1000. At the moment, it doesn't attempt to cache libraries between builds, so this is best reserved for making final release builds.

For a more realistic example, see the Dockerfiles for examples/using-diesel and examples/using-sqlx.

Deploying your Rust application

With a bit of luck, you should be able to just copy your application binary from target/x86_64-unknown-linux-musl/release and install it directly on any reasonably modern x86_64 Linux machine. In particular, you should be able to make static release binaries using TravisCI and GitHub, or you can copy your Rust application into an Alpine Linux container. See below for details!
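
As a minimal sketch (the binary name and server below are placeholders), shipping the static binary can be as simple as:

scp target/x86_64-unknown-linux-musl/release/myapp user@server:/usr/local/bin/myapp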

Available tags

In general, we provide the following tagged Docker images:

  • latest, stable: Current stable Rust, now with OpenSSL 1.1. We try to update this fairly rapidly after every new stable release, and after most point releases.
  • X.Y.Z: Specific versions of stable Rust.
  • beta: This usually gets updated every six weeks alongside the stable release. It will usually not be updated for beta bugfix releases.
  • nightly-YYYY-MM-DD: Specific nightly releases. These should almost always support clippy, rls and rustfmt, as verified using rustup components history. If you need a specific date for compatibility with tokio or another popular library using unstable Rust, please file an issue.

At a minimum, each of these images should be able to compile examples/using-diesel and examples/using-sqlx.

Caching builds

You may be able to speed up build performance by adding the following -v commands to the rust-musl-builder alias:

-v cargo-git:/home/rust/.cargo/git
-v cargo-registry:/home/rust/.cargo/registry
-v target:/home/rust/src/target
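
Putting it together, the alias from above extended with those named volumes might look like this (a sketch):

alias rust-musl-builder='docker run --rm -it \
  -v "$(pwd)":/home/rust/src \
  -v cargo-git:/home/rust/.cargo/git \
  -v cargo-registry:/home/rust/.cargo/registry \
  -v target:/home/rust/src/target \
  ekidd/rust-musl-builder'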

You will also need to fix the permissions on the mounted volumes:

rust-musl-builder sudo chown -R rust:rust \
  /home/rust/.cargo/git /home/rust/.cargo/registry /home/rust/src/target

How it works

rust-musl-builder uses musl-libc, musl-gcc, and the new rustup target support. It includes static versions of several libraries:

  • The standard musl-libc libraries.
  • OpenSSL, which is needed by many Rust applications.
  • libpq, which is needed for applications that use diesel with PostgreSQL.
  • libz, which is needed by libpq.
  • SQLite3. See examples/using-diesel.

This library also sets up the environment variables needed to compile popular Rust crates using these libraries.

Extras

This image also supports the following extra goodies:

  • Basic compilation for armv7 using musl-libc. Not all libraries are supported at the moment, however.
  • mdbook and mdbook-graphviz for building searchable HTML documentation from Markdown files. Build manuals to use alongside your cargo doc output!
  • cargo about to collect licenses for your dependencies.
  • cargo deb to build Debian packages
  • cargo deny to check your Rust project for known security issues.

Making OpenSSL work

If your application uses OpenSSL, you will also need to take a few extra steps to make sure that it can find OpenSSL's list of trusted certificates, which is stored in different locations on different Linux distributions. You can do this using openssl-probe as follows:

fn main() {
    openssl_probe::init_ssl_cert_env_vars();
    //... your code
}

Making Diesel work

In addition to setting up OpenSSL, you'll need to add the following lines to your Cargo.toml:

[dependencies]
diesel = { version = "1", features = ["postgres", "sqlite"] }

# Needed for sqlite.
libsqlite3-sys = { version = "*", features = ["bundled"] }

# Needed for Postgres.
openssl = "*"

For PostgreSQL, you'll also need to include diesel and openssl in your main.rs in the following order (in order to avoid linker errors):

extern crate openssl;
#[macro_use]
extern crate diesel;

If this doesn't work, you might be able to fix it by reversing the order. See this PR for a discussion of the latest issues involved in linking to diesel, pq-sys and openssl-sys.

Making static releases with Travis CI and GitHub

These instructions are inspired by rust-cross.

First, read the Travis CI: GitHub Releases Uploading page, and run travis setup releases as instructed. Then add the following lines to your existing .travis.yml file, replacing myapp with the name of your package:

language: rust
sudo: required
os:
- linux
- osx
rust:
- stable
services:
- docker
before_deploy: "./build-release myapp ${TRAVIS_TAG}-${TRAVIS_OS_NAME}"
deploy:
  provider: releases
  api_key:
    secure: "..."
  file_glob: true
  file: "myapp-${TRAVIS_TAG}-${TRAVIS_OS_NAME}.*"
  skip_cleanup: true
  on:
    rust: stable
    tags: true

Next, copy build-release into your project and run chmod +x build-release.

Finally, add a Dockerfile to perform the actual build:

FROM ekidd/rust-musl-builder

# We need to add the source code to the image because `rust-musl-builder`
# assumes a UID of 1000, but TravisCI has switched to 2000.
ADD --chown=rust:rust . ./

CMD cargo build --release

When you push a new tag to your project, build-release will automatically build new Linux binaries using rust-musl-builder, and new Mac binaries with Cargo, and it will upload both to the GitHub releases page for your repository.

For a working example, see faradayio/cage.

Making tiny Docker images with Alpine Linux and Rust binaries

Docker now supports multistage builds, which make it easy to build your Rust application with rust-musl-builder and deploy it using Alpine Linux. For a working example, see examples/using-diesel/Dockerfile.

Adding more C libraries

If you're using Rust crates which require specific C libraries to be installed, you can create a Dockerfile based on this one, and use musl-gcc to compile the libraries you need. For an example, see examples/adding-a-library/Dockerfile. This usually involves a bit of experimentation for each new library, but it seems to work well for most simple, standalone libraries.

If you need an especially common library, please feel free to submit a pull request adding it to the main Dockerfile! We'd like to support popular Rust crates out of the box.

Development notes

After modifying the image, run ./test-image to make sure that everything works.

Other ways to build portable Rust binaries

If for some reason this image doesn't meet your needs, there's a variety of other people working on similar projects:

Download Details:
Author: emk
Source Code: https://github.com/emk/rust-musl-builder
License: View license

#rust  #rustlang  #deployment #docker 


Discharge: Easily deploy static websites to Amazon S3

A simple, easy way to deploy static websites to Amazon S3


Features

  • Very little understanding of AWS required
  • Interactive UI for configuring deployment
  • Step-by-step list of what’s happening
  • Support for clean URLs (no .html extensions)
  • Support for subdomains
  • Use an AWS Profile (named credentials) to authenticate with AWS
  • CDN (CloudFront) and HTTPS/TLS support

Installation

Install it globally:

$ npm install --global @static/discharge

Or add it to your application’s package.json:

$ npm install --save-dev @static/discharge

Usage

Authentication

Credentials in file

Configuring AWS credentials can be a bit confusing. After getting your Access Key ID and Secret Access Key from AWS, you should store them in a file at ~/.aws/credentials. It should look something like this:

[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Replace the example keys with your own.

Credentials in environment

Alternatively, if you prefer environment variables or you are running Discharge in an automated environment like a continuous integration/deployment server you can omit the aws_profile configuration option explained later and set environment variables instead.

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Replace the example keys with your own.

Configure

Configuration is done via a .discharge.json file located at the root of your application. You can run discharge init to get an interactive UI that will help you generate the configuration file, or you can write it yourself from scratch. It will look something like this:

{
  "domain": "anti-pattern.com",
  "build_command": "bundle exec middleman build",
  "upload_directory": "build",
  "index_key": "index.html",
  "error_key": "404.html",
  "cache": 3600,
  "aws_profile": "website-deployment",
  "aws_region": "us-west-1",
  "cdn": true,
  "dns_configured": false
}

Those are most of the configuration options but a complete list is next.

Configuration options

There are no defaults—all configuration options are explicit and must be provided unless marked as optional.

domain String

The domain name of your website. This will be used as the name of the S3 bucket your website will be uploaded to.

build_command String

The command that will be executed in the shell to build your static website.

upload_directory String

The name of the directory that the build_command generated with the static files in it. This is the directory that will be uploaded to S3.

index_key String

The key of the document to respond with at the root of the website. index.html is almost certainly what you want to use. For example, if https://example.com is requested, https://example.com/index.html will be returned.

error_key String

The key of the document to respond with if the website endpoint responds with a 404 Not Found. For example, 404.html is pretty common.

cache Number (optional when cache_control is set)

The number of seconds a browser should cache the files of your website for. This is a simplified version of the HTTP Cache-Control header. If you set it to 0 the Cache-Control will be set to "no-cache, no-store, must-revalidate". If you set it to a positive number, say, 3600, the Cache-Control will be set to "public, max-age=3600".
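
To check what a deployed page is actually serving, you can inspect the header directly (the domain below is a placeholder):

curl -sI https://example.com | grep -i cache-control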

Be careful about setting too high a cache length. If you do, when a browser caches it, if you then update the content, that browser will not get the updated content unless the user specifically hard-refreshes the page.

When cdn is enabled, the s-maxage directive is included and set to a very high number (one month). It is recommended you set cache to a very low number (e.g. five minutes). The CDN will use the s-maxage directive and the browser will use the max-age directive. This works because when you deploy, the CDN’s cache is automatically expired. For more information see the distribute command.

If you need finer-grained control over the Cache-Control header, use the cache_control configuration option.

cache_control String (optional)

A Cache-Control directive as described in the HTTP documentation. This is for more advanced, finer-grained control of caching. If you don’t need that, use the cache configuration option.

The s-maxage directive added to cache when cdn is enabled is not added here—you have to do it yourself. Caveat emptor.

redirects Array<Object> (optional)

prefix_match String

The URL path prefix to match on. The redirects are matched in order, so if you have two paths with similar parts, like some/page and some, make sure you put the more specific path first.

destination String

The path to redirect to if the prefix_match matches.

AWS does not allow the prefix_match and destination to start with a forward slash (/some/page). You can include them in the configuration for your convenience, but the forward slashes will be invisibly removed when configuring the bucket.

If you need finer-grained control over the routing rules, use the routing_rules configuration option.

routing_rules Array<Object> (optional)

If the redirects configuration is not enough, you can declare more complex routing rules. There are some horrible AWS docs that explain the available options and here’s an example of the syntax from the AWS JavaScript docs.

[
  {
    Redirect: { /* required */
      HostName: "STRING",
      HttpRedirectCode: "STRING",
      Protocol: "http" || "https",
      ReplaceKeyPrefixWith: "STRING",
      ReplaceKeyWith: "STRING"
    },
    Condition: {
      HttpErrorCodeReturnedEquals: "STRING",
      KeyPrefixEquals: "STRING"
    }
  },
  /* more items */
]

The unusual property casing is intentional—the entire configuration will be passed directly through in the HTTP request.

cdn Boolean

Set this to true if you want to use a CDN and HTTPS/TLS. Setting up the CDN does not happen automatically when deploying. After deploying, run discharge distribute to set up the CDN. Once the CDN is set up, future deploys will expire the CDN’s cache.

For more information see the cache configuration or the distribute command.

aws_profile String (optional)

The AWS profile you’ve specified in a credentials file at ~/.aws/credentials.

If you only have one set of credentials then specify “default”.

If you want to create a new AWS user with specific permissions/policies for deployment, you can add another profile in the credentials file and specify the custom profile you’ve added.

If you prefer environment variables or you are running Discharge in an automated environment like a continuous integration/deployment server you can omit this configuration option.

aws_region String

The Amazon S3 region you want to create your website (bucket) in.

dns_configured Boolean

If you run discharge init this will be set to false automatically. Then when you run discharge deploy it will show the record you need to add to your DNS configuration. The deploy command will then automatically set this value to true, assuming you have properly created the DNS record.

Deploy

After you’ve finished configuring you can run discharge deploy to deploy. Deploying is a series of steps that are idempotent—that is, they are safe to run over and over again, and if you haven’t changed anything, then the outcome should always be the same.

If you change your website configuration (cache, redirects, etc.) it will be updated. If you change your website content, a diff will be done to figure out what needs to change. New files will be added, changed files will be updated, and deleted files will be removed. The synchronization is one way—that is, if you remove a file from S3 it will just be re-uploaded the next time you deploy.

Clean URLs

Clean URLs are when the .html extensions are dropped from URLs for aesthetic or functional reasons. The .html extensions are now commonly considered superfluous. If you have a file named /projects.html it’s now understood and generally preferred that the URL domain.com/projects would serve that file.

When you deploy, two copies of each HTML file will be uploaded: one with the .html extension and one without. So a file some-page.html will be uploaded as some-page.html and as some-page, which will allow it to be served from https://example.com/some-page.html, with the extension, or from https://example.com/some-page, without the extension. You are free to use whichever URL style you prefer!
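
A sketch of the same idea using the AWS CLI directly (the bucket name is a placeholder; Discharge handles this for you):

aws s3 cp some-page.html s3://example-bucket/some-page.html --content-type text/html
aws s3 cp some-page.html s3://example-bucket/some-page --content-type text/html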

Distribute

After you’ve finished deploying you can run discharge distribute to distribute your website via a CDN (content delivery network). The command will create a TLS certificate, ensure it’s verified, create a distribution, and ensure it’s deployed. Almost no configuration necessary[1]. This step is completely optional, but if you have a high-traffic website it’s highly recommended, and if you want to secure your website with HTTPS/TLS then you have to do it[2].

A CDN is a caching layer. It can significantly speed up requests for users located geographically farther from where your website is deployed, and sometimes even for users nearby it. In brief, the way a CDN works is you point your DNS to the CDN. When a request comes in, the CDN relays the request to your origin (in this case S3) then takes the response and caches it according to the Cache-Control header in the response. Future requests will only hit the CDN and not your origin, until either the CDN’s cache expires or it’s expired early.

The Cache-Control header can specify two different cache lengths, one for the CDN and one for the browser. Because static sites are… static, the only times they change are when deployed, so it’s safe to set a very high cache length for the CDN, a low cache length for the browser, and then expire the CDN’s cache early when deploying.

[1]: CDNs can be configured in a lot of different, complex ways. The goal was to abstract away all of that—choose sane defaults and require no configuration. I think this will work for the vast majority of people, but if there’s a specific reason you need more flexibility let me know, and if it’s widely-needed we can add it.

[2]: While CDNs can be configured without TLS, given that TLS certificates are free and we want the entire web to be encrypted, I can’t see any reason to support not using TLS.

.io domains

Verifying the TLS certificate is done via email. AWS will look up the contact information in the WHOIS database for your domain and then send a verification email to those contacts and to the five common system email addresses (admin@, administrator@, hostmaster@, postmaster@, and webmaster@ your domain).

Inexplicably, the .io domain registrar is the only registrar that does not return contact information from the WHOIS database. That means you have to have one of the five common system email addresses set up on a .io domain or you will not receive the TLS certificate verification email.

Subdomains

You can use any domain, subdomain, or combination you like. You just need to configure your DNS appropriately.

If you want to use a naked domain (domain.com), because S3 and CloudFront expose a special URL rather than an IP address, your DNS provider will need to support ALIAS records; not all do.

If you want to use a subdomain like www.domain.com or blog.domain.com, create a CNAME record for it. The TLS/HTTPS certificate is created for the root domain and all subdomains via a wildcard.

If you want to use both a naked domain and a subdomain, create an ALIAS and a CNAME record.

If you want to use only a naked domain or a subdomain, but redirect one to the other (like redirect www.domain.com to domain.com), then the easiest way to do that is to add a redirect at the DNS-level. It’s not technically a part of the DNS specification so not all DNS providers have it, but the vast majority do. If yours does not, you can either switch to a DNS provider that does or manually create an S3 bucket that does the redirect and create an ALIAS or CNAME record pointing to it.

Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/brandonweiss/discharge.

Author: Brandonweiss
Source Code: https://github.com/brandonweiss/discharge 
License: MIT License

#node #aws #deployment 


Deploy A Cryptocurrency Trading Bot in The Cloud [GCP] using Python

If you find this useful or interesting, please consider subscribing and hitting the like button. Thanks in advance :-)

Disclaimer: This video is not investment advice and is for educational and entertainment purposes only! Cryptocurrency and automated trading carry a high amount of risk, which might result in a total loss of your invested capital.

00:00 - 01:45 Introduction / Disclaimer
01:45 - 03:39 Comparison between old and new streaming code
03:39 - 09:18 Handling the Websocket stream
09:18 - 14:16 Deploying the Tradingstream in GCP
14:16 - 16:51 Comparison between old and new trading code
16:51 - 20:50 Detailed walkthrough the new trading code
20:50 - 24:40 Deploy the Trading Code in GCP
24:40 - 26:47 Trading / Discussing Trades

#python  #crypto  #binance  #deployment 
