Install / Configure Docker and Docker Compose using Ansible

What is ansible-docker?

It is an Ansible role to:

  • Install Docker (editions, channels and version pinning are all supported)
  • Install Docker Compose v1 and Docker Compose v2 (version pinning is supported)
  • Install the docker PIP package so Ansible's docker_* modules work
  • Manage Docker registry login credentials
  • Configure 1 or more users to run Docker without needing root access
  • Configure the Docker daemon's options and environment variables
  • Configure a cron job to run Docker clean up commands

Why would you want to use this role?

If you're like me, you probably love Docker. This role provides everything you need to get going with a production ready Docker host.

By the way, if you don't know what Docker is, or are looking to become an expert with it then check out Dive into Docker: The Complete Docker Course for Developers.

Supported platforms

  • Ubuntu 20.04 LTS (Focal Fossa)
  • Ubuntu 22.04 LTS (Jammy Jellyfish)
  • Debian 10 (Buster)
  • Debian 11 (Bullseye)



Quick start

The philosophy for all of my roles is to make it easy to get going, but provide a way to customize nearly everything.

What's configured by default?

The latest Docker CE, Docker Compose v1 and Docker Compose v2 will be installed, Docker disk clean up will happen once a week and Docker container logs will be sent to journald.

Example playbook

---

# docker.yml

- name: Example
  hosts: "all"
  become: true

  roles:
    - role: "nickjj.docker"
      tags: ["docker"]

Usage: ansible-playbook docker.yml

Installation

$ ansible-galaxy install nickjj.docker
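
All of the role variables documented below can be overridden at the inventory, group_vars or playbook level. As a hedged sketch (the file path and the "deploy" user are illustrative):

# group_vars/all.yml
docker__edition: "ce"
docker__channel: ["stable"]
docker__users: ["deploy"]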

Default role variables

Installing Docker

Edition

Do you want to use "ce" (community edition) or "ee" (enterprise edition)?

docker__edition: "ce"

Channel

Do you want to use the "stable", "edge", "testing" or "nightly" channels? You can add more than one (order matters).

docker__channel: ["stable"]

Version

  • When set to "", the current latest version of Docker will be installed
  • When set to a specific version, that version of Docker will be installed and pinned
docker__version: ""

# For example, pin it to 20.10.
docker__version: "20.10"

# For example, pin it to a more precise version of 20.10.
docker__version: "20.10.17"

Pins are set with * at the end of the package version so you will end up getting minor and security patches unless you pin an exact version.

Upgrade strategy

  • When set to "present", running this role in the future won't install newer versions (if available)
  • When set to "latest", running this role in the future will install newer versions (if available)
docker__state: "present"
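
For example, a hedged sketch that pins Docker to the 20.10 series while letting future runs pick up newer 20.10.x builds (the values are illustrative):

docker__version: "20.10"
docker__state: "latest"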

Downgrade strategy

The easiest way to downgrade would be to uninstall the Docker package manually and then run this role afterwards while pinning whatever specific Docker version you want.

# An ad-hoc Ansible command to stop and remove the Docker CE package on all hosts.
ansible all -m systemd -a "name=docker-ce state=stopped" \
  -m apt -a "name=docker-ce autoremove=true purge=true state=absent" -b

Installing Docker Compose v2

Docker Compose v2 is installed with apt using the official docker-compose-plugin package that Docker manages.

Version

  • When set to "", the current latest version of Docker Compose v2 will be installed
  • When set to a specific version, that version of Docker Compose v2 will be installed and pinned
docker__compose_v2_version: ""

# For example, pin it to 2.6.
docker__compose_v2_version: "2.6"

# For example, pin it to a more precise version of 2.6.
docker__compose_v2_version: "2.6.0"

Upgrade strategy

It'll re-use the docker__state variable explained above in the Docker section with the same rules.

Downgrade strategy

As with Docker itself, the easiest way to downgrade Docker Compose v2 is to manually remove the package with the command below and then re-run this role while pinning the specific Docker Compose v2 version you want.

# An ad-hoc Ansible command to remove the Docker Compose Plugin package on all hosts.
ansible all -m apt -a "name=docker-compose-plugin autoremove=true purge=true state=absent" -b

Installing Docker Compose v1

Docker Compose v1 will get PIP installed inside of a Virtualenv. If you plan to use Docker Compose v2 instead, it is easy to skip installing v1, although technically both can be installed together since v1 is accessed with docker-compose and v2 is accessed with docker compose (notice the lack of a hyphen).

In any case, the details are covered in a later section of this README.

Version

  • When set to "", the current latest version of Docker Compose v1 will be installed
  • When set to a specific version, that version of Docker Compose v1 will be installed and pinned
docker__compose_version: ""

# For example, pin it to 1.29.
docker__compose_version: "1.29"

# For example, pin it to a more precise version of 1.29.
docker__compose_version: "1.29.2"

Upgrade and downgrade strategies are explained in a later section of this README.

Configuring users to run Docker without root

A list of users to be added to the docker group.

Keep in mind these users need to already exist; this role will not create them. If you want to create users, check out my user role.

This role does not configure User Namespaces or any other security features by default. If the user you add here has SSH access to your server then you're effectively giving them root access to the server since they can run Docker without sudo and volume mount in any path on your file system.

In a controlled environment this is safe, but like anything security related it's worth knowing this up front. You can enable User Namespaces and any other options with the docker__daemon_json variable which is explained later.

# Try to use the sudo user by default, but fall back to root.
docker__users: ["{{ ansible_env.SUDO_USER | d('root') }}"]

# For example, if the user you want to set is different than the sudo user.
docker__users: ["admin"]

Configuring Docker registry logins

Login to 1 or more Docker registries (such as the Docker Hub).

# Your login credentials will end up in this user's home directory.
docker__login_become_user: "{{ docker__users | first | d('root') }}"
# 0 or more registries to log into.
docker__registries:
  - #registry_url: "https://index.docker.io/v1/"
    username: "your_docker_hub_username"
    password: "your_docker_hub_password"
    #email: "your_docker_hub@emailaddress.com"
    #reauthorize: false
    #config_path: "$HOME/.docker/config.json"
    #state: "present"
docker__registries: []

Properties prefixed with * are required.

  • registry_url defaults to https://index.docker.io/v1/
  • *username is your Docker registry username
  • *password is your Docker registry password
  • email defaults to not being used (not all registries use it)
  • reauthorize defaults to false, when true it updates your credentials
  • config_path defaults to your docker__login_become_user's $HOME directory
  • state defaults to "present", when "absent" the login will be removed
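
As a concrete sketch, logging into Docker Hub plus a hypothetical private registry could look like the following (the usernames, vault variables and private registry URL are all placeholders):

docker__registries:
  - username: "your_docker_hub_username"
    password: "{{ vault_docker_hub_password }}"
  - registry_url: "https://registry.example.com"
    username: "deploy"
    password: "{{ vault_private_registry_password }}"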

Configuring the Docker daemon options (json)

Default Docker daemon options as they would appear in /etc/docker/daemon.json.

docker__default_daemon_json: |
  "log-driver": "journald",
  "features": {
    "buildkit": true
  }

# Add your own additional daemon options without overriding the default options.
# It follows the same format as the default options, and don't worry about
# starting it off with a comma. The template will add the comma if needed.
docker__daemon_json: ""
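
For example, a sketch that layers two extra options on top of the defaults (both keys are standard dockerd daemon.json options; enabling user namespace remapping is one way to address the security note from the users section):

docker__daemon_json: |
  "userns-remap": "default",
  "live-restore": true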

Configuring the Docker daemon options (flags)

Flags that are set when starting the Docker daemon cannot be changed in the daemon.json file. By default Docker sets -H unix:// which means that option cannot be changed with the json options.

Add or change the starting Docker daemon flags by supplying them exactly how they would appear on the command line.

# Each command line flag should be its own item in the list.
#
# Using a Docker version prior to 18.09?
#   You must set `-H fd://` instead of `-H unix://`.
docker__daemon_flags:
  - "-H unix://"

If you don't supply some type of -H flag here, Docker will fail to start.
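
For example, a sketch that keeps the required unix socket and also listens on a local TCP socket (the TCP listener is unauthenticated, so only bind it to localhost unless you add TLS):

docker__daemon_flags:
  - "-H unix://"
  - "-H tcp://127.0.0.1:2375"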

Configuring the Docker daemon environment variables

docker__daemon_environment: []

# For example, here's how to set a couple of proxy environment variables.
docker__daemon_environment:
  - "HTTP_PROXY=http://proxy.example.com:80"
  - "HTTPS_PROXY=https://proxy.example.com:443"

Configuring advanced systemd directives

This role lets the Docker package manage its own systemd unit file and adjusts things like the Docker daemon flags and environment variables by using the systemd override pattern.

If you know what you're doing, you can override or add to any of Docker's systemd directives by setting this variable. Anything you place in this string will be written to /etc/systemd/system/docker.service.d/custom.conf as is.

docker__systemd_override: ""
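
For example, a minimal sketch that raises the daemon's open file limit (the directive is standard systemd; the value is illustrative):

docker__systemd_override: |
  [Service]
  LimitNOFILE=1048576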

Configuring Docker related cron jobs

By default this will safely clean up disk space used by Docker every Sunday at midnight.

# `a` removes unused images (useful in production).
# `f` forces it to happen without prompting you to agree.
docker__cron_jobs_prune_flags: "af"

# Control the schedule of the docker system prune.
docker__cron_jobs_prune_schedule: ["0", "0", "*", "*", "0"]

docker__cron_jobs:
  - name: "Docker disk clean up"
    job: "docker system prune -{{ docker__cron_jobs_prune_flags }} > /dev/null 2>&1"
    schedule: "{{ docker__cron_jobs_prune_schedule }}"
    cron_file: "docker-disk-clean-up"
    #user: "{{ (docker__users | first) | d('root') }}"
    #state: "present"

Properties prefixed with * are required.

  • *name is the cron job's description
  • *job is the command to run in the cron job
  • *schedule is in the standard cron format (the default is every Sunday at midnight)
  • *cron_file writes a cron file to /etc/cron.d instead of a user's individual crontab
  • user defaults to the first docker__users user or root if that's not available
  • state defaults to "present", when "absent" the cron file will be removed
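
For example, a sketch that runs the clean up every day at 3am instead of weekly (same five field cron format: minute, hour, day of month, month, day of week):

docker__cron_jobs_prune_schedule: ["0", "3", "*", "*", "*"]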

Configuring the APT package manager

Docker requires a few dependencies to be installed for it to work. You shouldn't have to edit any of these variables.

# List of packages to be installed.
docker__package_dependencies:
  - "apt-transport-https"
  - "ca-certificates"
  - "cron"
  - "gnupg2"
  - "software-properties-common"

# Ansible identifies CPU architectures differently than Docker.
docker__architecture_map:
  "x86_64": "amd64"
  "aarch64": "arm64"
  "aarch": "arm64"
  "armhf": "armhf"
  "armv7l": "armhf"

# The Docker GPG key id used to sign the Docker package.
docker__apt_key_id: "9DC858229FC7DD38854AE2D88D81803C0EBFCD88"

# The Docker GPG key server address.
docker__apt_key_url: "https://download.docker.com/linux/{{ ansible_distribution | lower }}/gpg"

# The Docker upstream APT repository.
docker__apt_repository: >
  deb [arch={{ docker__architecture_map[ansible_architecture] }}]
  https://download.docker.com/linux/{{ ansible_distribution | lower }}
  {{ ansible_distribution_release }} {{ docker__channel | join(' ') }}

Installing Python packages with Virtualenv and PIP

Configuring Virtualenv

Rather than polluting your server's system Python, all PIP packages are installed into a Virtualenv of your choosing.

docker__pip_virtualenv: "/usr/local/lib/docker/virtualenv"

Installing PIP and its dependencies

This role installs PIP because Docker Compose v1 is installed with the docker-compose PIP package and Ansible's docker_* modules use the docker PIP package.

docker__pip_dependencies:
  - "gcc"
  - "python3-setuptools"
  - "python3-dev"
  - "python3-pip"
  - "virtualenv"

Installing PIP packages

docker__default_pip_packages:
  - name: "docker"
    state: "{{ docker__pip_docker_state }}"
  - name: "docker-compose"
    version: "{{ docker__compose_version }}"
    path: "/usr/local/bin/docker-compose"
    src: "{{ docker__pip_virtualenv + '/bin/docker-compose' }}"
    state: "{{ docker__pip_docker_compose_state }}"

# Add your own PIP packages with the same properties as above.
docker__pip_packages: []

Properties prefixed with * are required.

  • *name is the package name
  • version is the package version to be installed (defaults to "", which installs the latest)
  • path is the destination path of the symlink
  • src is the source path to be symlinked
  • state defaults to "present", other values can be "forcereinstall" or "absent"
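
For example, a sketch that installs one extra PIP package into the role's Virtualenv (the package name and version are purely illustrative):

docker__pip_packages:
  - name: "boto3"
    version: "1.24.0"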

PIP package state

  • When set to "present", the package will be installed but not updated on future runs
  • When set to "forcereinstall", the package will always be (re)installed and updated on future runs
  • When set to "absent", the package will be removed
docker__pip_docker_state: "present"
docker__pip_docker_compose_state: "present"

Skipping the installation of Docker Compose v1

You can set docker__pip_docker_compose_state: "absent" in your inventory. That's it!
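
For example, a sketch of what that could look like in a hypothetical group_vars file:

# group_vars/all.yml
docker__pip_docker_compose_state: "absent"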

Honestly, I think this will become the default behavior in the future. Since Docker Compose v2 is still fairly new, I wanted to ease into using v2. There's also no harm in having both installed together; you can pick which one to use.

Working with Ansible's docker_* modules

This role uses docker_login to log into a Docker registry, but you may also use the other docker_* modules in your own roles. They won't work unless you instruct Ansible to use this role's Virtualenv.

At either the inventory, playbook or task level you'll need to set ansible_python_interpreter: "/usr/bin/env python3-docker". This works because this role creates a proxy script from the Virtualenv's Python binary to python3-docker.

You can look at this role's docker_login task as an example on how to do it at the task level.
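
For example, a hedged sketch of a task that pulls an image through this role's Virtualenv (it assumes the community.docker collection is installed; the image name is illustrative):

- name: "Pull the nginx image"
  community.docker.docker_image:
    name: "nginx:1.23"
    source: "pull"
  vars:
    ansible_python_interpreter: "/usr/bin/env python3-docker"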


Download Details:

Author: nickjj
Source Code: https://github.com/nickjj/ansible-docker

License: MIT license


ESLint Shareable Config for React/JSX Support in JavaScript

eslint-config-standard-react

An ESLint Shareable Config for React/JSX support in JavaScript Standard Style

Install

This module is for advanced users. You probably want to use standard instead :)

npm install eslint-config-standard-react

Usage

Shareable configs are designed to work with the extends feature of .eslintrc files. You can learn more about Shareable Configs on the official ESLint website.

This Shareable Config adds React and JSX to the baseline JavaScript Standard Style rules provided in eslint-config-standard.

Here's how to install everything you need:

npm install --save-dev babel-eslint eslint-config-standard eslint-config-standard-jsx eslint-config-standard-react eslint-plugin-promise eslint-plugin-import eslint-plugin-node eslint-plugin-react

Then, add this to your .eslintrc file:

{
  "parser": "babel-eslint",
  "extends": ["standard", "standard-jsx", "standard-react"]
}

Note: We omitted the eslint-config- prefix since it is automatically assumed by ESLint.

You can override settings from the shareable config by adding them directly into your .eslintrc file.
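
If you prefer ESLint's YAML config format, the same setup with one overridden rule could look like this sketch (the rule and value are purely illustrative):

# .eslintrc.yml
parser: babel-eslint
extends:
  - standard
  - standard-jsx
  - standard-react
rules:
  react/jsx-indent:
    - error
    - 4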

Looking for something easier than this?

The easiest way to use JavaScript Standard Style to check your code is to use the standard package. This comes with a global Node command line program (standard) that you can run or add to your npm test script to quickly check your style.

Badge

Use this in one of your projects? Include one of these badges in your readme to let people know that your code is using the standard style.

js-standard-style

[![js-standard-style](https://cdn.rawgit.com/standard/standard/master/badge.svg)](https://github.com/standard/standard)

js-standard-style

[![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg)](https://github.com/standard/standard)

Learn more

For the full listing of rules, editor plugins, FAQs, and more, visit the main JavaScript Standard Style repo.

Download Details:

Author: Standard
Source Code: https://github.com/standard/eslint-config-standard-react 
License: MIT license


Conda Replicate: Manage Local Mirrored anaconda Channels

What is this?

conda-replicate is a command line tool for creating and updating local mirrored anaconda channels.

Notable features

  • Uses the standard match specification syntax to identify packages
  • Resolves all necessary dependencies of specified packages
  • Allows for channel evolution via direct updates or transferable patches
  • Synchronizes upstream package hotfixes with local channels

Does this violate Anaconda's terms-of-service?

Disclaimer: I am an analyst, not a lawyer. The Anaconda terms-of-service expressly forbids mirroring of the default anaconda repository on repo.anaconda.com. However, as explained in a post on the conda-forge blog, this does not apply to conda-forge or any other channel hosted on anaconda.org. Therefore, conda-replicate uses conda-forge as its default upstream channel. You are of course welcome to specify another channel, but please be respectful of Anaconda's terms-of-service and do not mirror the default anaconda repository.

Installation

Due to dependencies on modern versions of conda (for searching) and conda-build (for indexing), conda-replicate is currently only available on conda-forge:

conda install conda-replicate --channel conda-forge --override-channels

Usage

  1. Creating a local channel
  2. Updating an existing local channel

1. Creating a local channel

Suppose that you want to mirror all of the conda-forge python 3.9 micro releases (3.9.1, 3.9.2, ...) in a local channel called my-custom-channel. This can be accomplished by simply running the update sub-command:

> conda-replicate update "python >=3.9,<3.10" --target ./my-custom-channel

Because the number of command line arguments can quickly get out of hand, it is recommended that you use a configuration file:

# config.yml
channel: conda-forge
requirements:
  - python >=3.9,<3.10

With this configuration file saved as config.yml, one can re-run the above command:

> conda-replicate update --config ./config.yml --target ./my-custom-channel

Once either of these commands has finished you will have a fully accessible local channel that conda can use to install packages. For example you can do the following:

> conda create -n conda-replicate-test --channel ./my-custom-channel --override-channels -y

> conda activate conda-replicate-test

> python --version
Python 3.9.13

> conda list
# packages in environment at /path/to/Anaconda3/envs/conda-local-test-env:
#
# Name                    Version                   Build  Channel
bzip2                     1.0.8                h8ffe710_4    file:///path/to/my-custom-channel
ca-certificates           2022.6.15            h5b45459_0    file:///path/to/my-custom-channel
libffi                    3.4.2                h8ffe710_5    file:///path/to/my-custom-channel
libsqlite                 3.39.2               h8ffe710_1    file:///path/to/my-custom-channel
libzlib                   1.2.12               h8ffe710_2    file:///path/to/my-custom-channel
openssl                   3.0.5                h8ffe710_1    file:///path/to/my-custom-channel
pip                       22.2.2             pyhd8ed1ab_0    file:///path/to/my-custom-channel
python                    3.9.13       hcf16a7b_0_cpython    file:///path/to/my-custom-channel
python_abi                3.9                      2_cp39    file:///path/to/my-custom-channel
setuptools                65.3.0          py39hcbf5309_0    file:///path/to/my-custom-channel
sqlite                    3.39.2               h8ffe710_1    file:///path/to/my-custom-channel
tk                        8.6.12               h8ffe710_0    file:///path/to/my-custom-channel
tzdata                    2022c                h191b570_0    file:///path/to/my-custom-channel
ucrt                      10.0.20348.0         h57928b3_0    file:///path/to/my-custom-channel
vc                        14.2                 hb210afc_6    file:///path/to/my-custom-channel
vs2015_runtime            14.29.30037          h902a5da_6    file:///path/to/my-custom-channel
wheel                     0.37.1             pyhd8ed1ab_0    file:///path/to/my-custom-channel
xz                        5.2.6                h8d14728_0    file:///path/to/my-custom-channel

Notice that it appears our local channel has all of the direct and transitive dependencies for python 3.9.13. In fact, it has the direct and transitive dependencies for all of the micro versions of python 3.9. We can see a summary of these dependencies by using the query sub-command, which will query conda-forge and determine what packages are needed to satisfy the python 3.9 specification.

> conda-replicate query --config ./config.yml

  Packages to add    (26)   Number   Size [MB]
 ──────────────────────────────────────────────
  python                    35          659.94
  pypy3.9                   16          469.51
  openssl                   19          147.41
  setuptools                111         141.61
  pip                       38           48.08
  vs2015_runtime            15           32.38
  sqlite                    20           24.89
  tk                        3            11.67
  ca-certificates           30            5.73
  zlib                      17            2.64
  certifi                   15            2.28
  tzdata                    15            1.91
  pyparsing                 24            1.58
  xz                        3             1.35
  ucrt                      1             1.23
  bzip2                     5             0.76
  expat                     2             0.74
  libsqlite                 1             0.65
  libzlib                   6             0.40
  packaging                 8             0.28
  wheel                     9             0.27
  libffi                    6             0.25
  vc                        14            0.17
  wincertstore              6             0.09
  six                       3             0.04
  python_abi                3             0.01
  Total                     425        1555.88

Note that the query sub-command is most commonly used when a target is included in the configuration file (or on the command line via --target). When a target is specified, the query sub-command will calculate results relative to the given target channel. This also applies to other conda-replicate sub-commands such as update and patch. We will make use of this when we update our local channel below, but for now, we want to examine the complete, non-relative, results of query.

As you can see the original update installed quite a few packages, and they take up quite a bit of space! This result may prompt a few questions.

How are dependencies determined?

conda-replicate uses the conda.api to recursively examine the dependencies of user-supplied "root" specifications (like python>=3.9,<3.10 given above) and constructs a directed dependency graph. After this graph is completed, unsatisfied nodes (specifications that have no connected packages) are pruned. Additionally, nodes that have no possible connecting path to at least one of the root specifications are pruned as well. What is left are packages that satisfy either a root specification, a direct dependency of a root specification, or a transitive dependency further down the graph. Note that if a root specification is unsatisfied an UnsatisfiedRequirementsError exception is raised.

As a quick aside, you can use the conda search --info command to look at the dependencies of individual conda packages (where ⋮ indicates hidden output):

> conda search python==3.9.13 --info --channel conda-forge  --override-channels

⋮
python 3.9.13 hcf16a7b_0_cpython
--------------------------------
⋮
dependencies:
  - bzip2 >=1.0.8,<2.0a0
  - libffi >=3.4.2,<3.5.0a0
  - libzlib >=1.2.11,<1.3.0a0
  - openssl >=3.0.3,<4.0a0
  - sqlite >=3.38.5,<4.0a0
  - tk >=8.6.12,<8.7.0a0
  - tzdata
  - vc >=14.1,<15
  - vs2015_runtime >=14.16.27033
  - xz >=5.2.5,<5.3.0a0
  - pip

> conda search pip==22.2.2 -c conda-forge  --override-channels --info

⋮
pip 22.2.2 pyhd8ed1ab_0
-----------------------
⋮
dependencies:
  - python >=3.7
  - setuptools
  - wheel

Why are there so many "extra" packages?

Predominantly, broad specifications are the usual culprits for "extra" packages. Specifically, let's look at the following:

  • 35 different packages of python. This can be traced back to our root specification of python>=3.9,<3.10. This specification includes not only all of the micro versions, but all of the conda-forge builds for those packages as well.
  • 111 packages of setuptools and 38 of pip. Python has a dependency on pip which in turn has a dependency on setuptools (both seen in the aside above). These specifications do not include version numbers and therefore match all packages of setuptools and pip.
  • 16 packages of pypy3.9. In this case some packages depend on python_abi 3.9-2. There is a special build of this package that depends on the pypy3.9 interpreter. Therefore, the relevant pypy packages (and their dependencies) are included in our local channel.

Can we exclude these "extra" packages?

Yes, by using exclusions in the configuration file (or --exclude on the command line). Let's assume that you are repeating the process of creating my-custom-channel from above. However, instead of jumping right to the update sub-command you do the following:

  1. Run the query sub-command in summary mode (the default mode used above) to see the overall package distribution
> conda-replicate query --config ./config.yml

2.   If we find some unexpected packages we can re-run query in list mode to zero in on the individual versions of those packages. As you can see below there is a wide range of package versions for python, pip, setuptools and pypy3.9.

> conda-replicate query --config ./config.yml --output list

Packages to add:
⋮
pip-20.0.2-py_2.tar.bz2
pip-20.1-pyh9f0ad1d_0.tar.bz2
pip-20.1.1-py_1.tar.bz2
⋮
pip-22.2-pyhd8ed1ab_0.tar.bz2
pip-22.2.1-pyhd8ed1ab_0.tar.bz2
pip-22.2.2-pyhd8ed1ab_0.tar.bz2
⋮
pypy3.9-7.3.8-h1738a25_0.tar.bz2
pypy3.9-7.3.8-h1738a25_1.tar.bz2
pypy3.9-7.3.8-h1738a25_2.tar.bz2
pypy3.9-7.3.8-hc3b0203_0.tar.bz2
pypy3.9-7.3.8-hc3b0203_1.tar.bz2
pypy3.9-7.3.8-hc3b0203_2.tar.bz2
pypy3.9-7.3.9-h1738a25_0.tar.bz2
pypy3.9-7.3.9-h1738a25_1.tar.bz2
pypy3.9-7.3.9-h1738a25_2.tar.bz2
pypy3.9-7.3.9-h1738a25_3.tar.bz2
pypy3.9-7.3.9-h1738a25_4.tar.bz2
pypy3.9-7.3.9-hc3b0203_0.tar.bz2
pypy3.9-7.3.9-hc3b0203_1.tar.bz2
pypy3.9-7.3.9-hc3b0203_2.tar.bz2
pypy3.9-7.3.9-hc3b0203_3.tar.bz2
pypy3.9-7.3.9-hc3b0203_4.tar.bz2
⋮
python-3.9.0-h408a966_4_cpython.tar.bz2
python-3.9.1-h7840368_0_cpython.tar.bz2
⋮
python-3.9.12-hcf16a7b_1_cpython.tar.bz2
python-3.9.13-h9a09f29_0_cpython.tar.bz2
⋮
setuptools-49.6.0-py39h467e6f4_2.tar.bz2
setuptools-49.6.0-py39hcbf5309_3.tar.bz2
setuptools-57.4.0-py39h0d475fb_1.tar.bz2
⋮
setuptools-65.1.1-py39hcbf5309_0.tar.bz2
setuptools-65.2.0-py39h0d475fb_0.tar.bz2
setuptools-65.3.0-py39hcbf5309_0.tar.bz2
⋮

3.   Having identified the version ranges of these packages we can refine our call to the update sub-command by tightening our root specification and making use of exclusions in the configuration file. The entire process is updatable, so we don't need to lose sleep over our ranges right now:

# config.yml
channel: conda-forge
requirements:
  - python >=3.9.8,<3.10  # updated line
exclusions:
  - setuptools <=60.0     # new line
  - pip <=21.0            # new line
  - pypy3.9               # new line

> conda-replicate query --config ./config.yml

 Packages to add    (21)   Number   Size [MB]
 ──────────────────────────────────────────────
 python                    14          258.60
 openssl                   14          117.20
 setuptools                53           69.28
 vs2015_runtime            15           32.38
 pip                       22           30.10
 sqlite                    10           12.51
 tk                        3            11.67
 ca-certificates           30            5.73
 tzdata                    15            1.91
 pyparsing                 24            1.58
 xz                        3             1.35
 ucrt                      1             1.23
 bzip2                     5             0.76
 libsqlite                 1             0.65
 libzlib                   6             0.40
 packaging                 8             0.28
 wheel                     9             0.27
 libffi                    6             0.25
 vc                        14            0.17
 six                       3             0.04
 python_abi                2             0.01
 Total                     230         546.38

This brings the number of packages and overall size down to a more reasonable level.

4.   Finally we can re-run the update sub-command (as we did above):

> conda-replicate update --config ./config.yml --target ./my-custom-channel

It should be mentioned that sometimes the reason why a package was included requires a more detailed dependency investigation. In those cases, calls to conda search --info, conda-replicate query --output json, or as a last resort conda-replicate query --debug, are very useful.

2. Updating an existing local channel

Once a local channel has been created it can be updated at any time. Updates perform the following actions:

  • Add, delete, or revoke packages in response to changes in our specifications or the upstream channel
  • Synchronize local package hotfixes with those in the upstream channel
  • Refresh the package index in response to package and/or hotfix changes (via conda_build.api)

There are two ways that local channels can be updated: either directly or through patches. Let's examine both options, starting from my-custom-channel in the previous section. We ended up with a configuration file that looked like the following:

# config.yml
channel: conda-forge
requirements:
  - python >=3.9.8,<3.10
exclusions:
  - setuptools <=60.0
  - pip <=21.0
  - pypy3.9

Now, let's assume that after creating my-custom-channel we want to further tighten our python and setuptools requirements, and add a new requirement for pydantic (note that this example will only simulate changes to our specifications, not changes to the upstream channel). We would also like to include the target field in the configuration file:

# config.yml
channel: conda-forge
target: ./my-custom-channel   # new line
requirements:
  - python >=3.9.10,<3.10     # updated line
  - pydantic                  # new line
exclusions:
  - setuptools <=62.0         # updated line
  - pip <=21.0
  - pypy3.9

Remembering the lessons from the last section, we first run the query sub-command. Because our new configuration file defines a target, we will see the results of the query relative to my-custom-channel. This effectively describes what packages will be added and removed when we run the update:

> conda-replicate query --config ./config.yml

  Packages to add    (4)   Number   Size [MB]
 ─────────────────────────────────────────────
  pydantic                 17           11.87
  typing_extensions        10            0.28
  typing-extensions        10            0.08
  dataclasses              3             0.02
  Total                    40           12.25


  Packages to remove (2)   Number   Size [MB]
 ─────────────────────────────────────────────
  python                   2            36.44
  setuptools               28           33.54
  Total                    30           69.98

How do we perform a direct update?

At this point performing the direct update is simple:

> conda-replicate update --config ./config.yml

How do we perform a patched update?

Patched updates are accomplished by using two different sub-commands: patch and merge. The first of these, patch, works similarly to update, in that it will calculate the packages to add or remove relative to our target. It will then download the packages and hotfixes into a separate patch directory (controlled by the --parent and --name command line options). It is important to note that the package index of the patch directory is not updated and therefore cannot be used by conda to install packages!

> conda-replicate patch --config ./config.yml --parent ./patch-archive --name patch_20220824

This patch directory can then be merged into an existing channel using the merge sub-command. The merging process not only copies the packages and modified hotfixes from the patch directory, but it also updates the package index. Note that the packages that patch determined should be removed are passed to merge via the hotfixes.

> conda-replicate merge ./patch-archive/patch_20220824 my-air-gapped-channel

The patch and merge commands are particularly well suited for updating air-gapped systems; however, there are some things to consider:

  1. You must be able to transfer files to the air-gapped system from a network facing system (via a bridge or manual drive).
  2. You need to maintain a parallel channel on your network facing system that is used to generate the patches.
  3. The very first transfer to the air-gapped system must be an indexed conda channel. This means that you need to use the update sub-command to create a channel on your network facing system and then transfer the entire channel to the air-gapped system. All subsequent transfers can be updated via the patch and merge sub-commands.
  4. Your configuration needs to include conda-replicate as a requirement (see the sketch below). If not, you will not be able to install conda-replicate on the air-gapped system, which means you cannot run the merge sub-command.
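
For point 4, a sketch of what adding conda-replicate to the configuration from above could look like:

# config.yml
channel: conda-forge
target: ./my-custom-channel
requirements:
  - python >=3.9.10,<3.10
  - pydantic
  - conda-replicate
exclusions:
  - setuptools <=62.0
  - pip <=21.0
  - pypy3.9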

Commands

The following commands are available in conda-replicate:

query

 Usage: conda-replicate query [OPTIONS] [REQUIREMENTS]...

 Search an upstream channel for the specified package REQUIREMENTS and report results.

  • Resulting packages are reported to the user in the specified output form (see --output)
  • Include both direct and transitive dependencies of required packages

 Package requirement notes:

  • Requirements are constructed using the anaconda package query syntax
  • Unsatisfied requirements will raise an error by default (see --no-validate)
  • Requirements specified on the command line augment those specified in a configuration file

┌─ Options ────────────────────────────────────────────────────────────────────────────────────────┐
│                                                                                                  │
│  --channel   -c   TEXT     Upstream anaconda channel. Can be specified using the canonical       │
│                            channel name on anaconda.org (conda-forge), a fully qualified URL     │
│                            (https://conda.anaconda.org/conda-forge/), or a local directory       │
│                            path.                                                                 │
│                            [default: conda-forge]                                                │
│                                                                                                  │
│  --target    -t   TEXT     Target anaconda channel. When specified, this channel will act as a   │
│                            baseline for the package search process - only package differences    │
│                            (additions or deletions) will be reported to the user.                │
│                                                                                                  │
│  --exclude        TEXT     Packages excluded from the search process. Specified using the        │
│                            anaconda package query syntax. Multiple options may be passed at one  │
│                            time.                                                                 │
│                                                                                                  │
│  --dispose        TEXT     Packages that are used in the search process but not included in the  │
│                            final results. Specified using the anaconda package query syntax.     │
│                            Multiple options may be passed at one time.                           │
│                                                                                                  │
│  --subdir         SUBDIR   Selected platform sub-directories. Multiple options may be passed at  │
│                            one time. Allowed values: {linux-32, linux-64, linux-aarch64,         │
│                            linux-armv6l, linux-armv7l, linux-ppc64, linux-ppc64le, linux-s390x,  │
│                            noarch, osx-64, osx-arm64, win-32, win-64, zos-z}.                    │
│                            [default: win-64, noarch]                                             │
│                                                                                                  │
│  --output         OUTPUT   Specifies the format of the search results. Allowed values: {table,   │
│                            list, json}.                                                          │
│                                                                                                  │
│  --config         FILE     Path to the yaml configuration file.                                  │
│                                                                                                  │
│  --quiet                   Quite mode. suppress all superfluous output.                          │
│                                                                                                  │
│  --debug     -d            Enable debugging logs. Can be repeated to increase log level          │
│                                                                                                  │
│  --help                    Show this message and exit.                                           │
│                                                                                                  │
└──────────────────────────────────────────────────────────────────────────────────────────────────┘

update

 Usage: conda-replicate update [OPTIONS] [REQUIREMENTS]...

 Update a local channel based on specified upstream package REQUIREMENTS.

  • Packages are downloaded or removed from the local channel prior to re-indexing
  • Includes both direct and transitive dependencies of required packages
  • Includes update to the platform specific patch instructions (hotfixes)

 Package requirement notes:

  • Requirements are constructed using the anaconda package query syntax
  • Unsatisfied requirements will raise an error by default (see --no-validate)
  • Requirements specified on the command line augment those specified in a configuration file

┌─ Options ────────────────────────────────────────────────────────────────────────────────────────┐
│                                                                                                  │
│  *   --target    -t   TEXT     Local anaconda channel where the update will occur. If this       │
│                                local channel already exists it will act as a baseline for the    │
│                                package search process - only package differences (additions or   │
│                                deletions) will be updated.                                       │
│                                [required]                                                        │
│                                                                                                  │
│      --channel   -c   TEXT     Upstream anaconda channel. Can be specified using the canonical   │
│                                channel name on anaconda.org (conda-forge), a fully qualified     │
│                                URL (https://conda.anaconda.org/conda-forge/), or a local         │
│                                directory path.                                                   │
│                                [default: conda-forge]                                            │
│                                                                                                  │
│      --exclude        TEXT     Packages excluded from the search process. Specified using the    │
│                                anaconda package query syntax. Multiple options may be passed at  │
│                                one time.                                                         │
│                                                                                                  │
│      --dispose        TEXT     Packages that are used in the search process but not included in  │
│                                the final results. Specified using the anaconda package query     │
│                                syntax. Multiple options may be passed at one time.               │
│                                                                                                  │
│      --subdir         SUBDIR   Selected platform sub-directories. Multiple options may be        │
│                                passed at one time. Allowed values: {linux-32, linux-64,          │
│                                linux-aarch64, linux-armv6l, linux-armv7l, linux-ppc64,           │
│                                linux-ppc64le, linux-s390x, noarch, osx-64, osx-arm64, win-32,    │
│                                win-64, zos-z}.                                                   │
│                                [default: win-64, noarch]                                         │
│                                                                                                  │
│      --config         FILE     Path to the yaml configuration file.                              │
│                                                                                                  │
│      --quiet                   Quite mode. suppress all superfluous output.                      │
│                                                                                                  │
│      --debug     -d            Enable debugging logs. Can be repeated to increase log level      │
│                                                                                                  │
│      --help                    Show this message and exit.                                       │
│                                                                                                  │
└──────────────────────────────────────────────────────────────────────────────────────────────────┘

patch

 Usage: conda-replicate patch [OPTIONS] [REQUIREMENTS]...

 Create a patch from an upstream channel based on specified package REQUIREMENTS.

  • Packages are downloaded to a local patch directory (see --name and --parent)
  • Patches can be merged into existing local channels (see merge sub-command)
  • Includes both direct and transitive dependencies of required packages
  • Includes update to the platform specific patch instructions (hotfixes)

 Package requirement notes:

  • Requirements are constructed using the anaconda package query syntax
  • Unsatisfied requirements will raise an error by default (see --no-validate)
  • Requirements specified on the command line augment those specified in a configuration file

┌─ Options ────────────────────────────────────────────────────────────────────────────────────────┐
│                                                                                                  │
│  --target    -t   TEXT     Target anaconda channel. When specified, this channel will act as a   │
│                            baseline for the package search process - only package differences    │
│                            (additions or deletions) will be included in the patch.               │
│                                                                                                  │
│  --name           TEXT     Name of the patch directory. [patch_%Y%m%d_%H%M%S]                    │
│                                                                                                  │
│  --parent         PATH     Parent directory of the patch. [current directory]                    │
│                                                                                                  │
│  --channel   -c   TEXT     Upstream anaconda channel. Can be specified using the canonical       │
│                            channel name on anaconda.org (conda-forge), a fully qualified URL     │
│                            (https://conda.anaconda.org/conda-forge/), or a local directory       │
│                            path.                                                                 │
│                            [default: conda-forge]                                                │
│                                                                                                  │
│  --exclude        TEXT     Packages excluded from the search process. Specified using the        │
│                            anaconda package query syntax. Multiple options may be passed at one  │
│                            time.                                                                 │
│                                                                                                  │
│  --dispose        TEXT     Packages that are used in the search process but not included in the  │
│                            final results. Specified using the anaconda package query syntax.     │
│                            Multiple options may be passed at one time.                           │
│                                                                                                  │
│  --subdir         SUBDIR   Selected platform sub-directories. Multiple options may be passed at  │
│                            one time. Allowed values: {linux-32, linux-64, linux-aarch64,         │
│                            linux-armv6l, linux-armv7l, linux-ppc64, linux-ppc64le, linux-s390x,  │
│                            noarch, osx-64, osx-arm64, win-32, win-64, zos-z}.                    │
│                            [default: win-64, noarch]                                             │
│                                                                                                  │
│  --config         FILE     Path to the yaml configuration file.                                  │
│                                                                                                  │
│  --quiet                   Quite mode. suppress all superfluous output.                          │
│                                                                                                  │
│  --debug     -d            Enable debugging logs. Can be repeated to increase log level          │
│                                                                                                  │
│  --help                    Show this message and exit.                                           │
│                                                                                                  │
└──────────────────────────────────────────────────────────────────────────────────────────────────┘

merge

 Usage: conda-replicate merge [OPTIONS] PATCH CHANNEL

 Merge a PATCH into a local CHANNEL and update the local package index.

┌─ Options ────────────────────────────────────────────────────────────────────────────────────────┐
│                                                                                                  │
│  --quiet           Quite mode. suppress all superfluous output.                                  │
│                                                                                                  │
│  --debug   -d      Enable debugging logs. Can be repeated to increase log level                  │
│                                                                                                  │
│  --help            Show this message and exit.                                                   │
│                                                                                                  │
└──────────────────────────────────────────────────────────────────────────────────────────────────┘

index

 Usage: conda-replicate index [OPTIONS] CHANNEL

 Update the package index of a local CHANNEL.

┌─ Options ────────────────────────────────────────────────────────────────────────────────────────┐
│                                                                                                  │
│  --quiet           Quite mode. suppress all superfluous output.                                  │
│                                                                                                  │
│  --debug   -d      Enable debugging logs. Can be repeated to increase log level                  │
│                                                                                                  │
│  --help            Show this message and exit.                                                   │
│                                                                                                  │
└──────────────────────────────────────────────────────────────────────────────────────────────────┘

Configuration File

The YAML configuration file can be used to specify any of the following:

  • channel (string)
  • target (string)
  • requirements (list)
  • exclusions (list)
  • disposables (list)
  • subdirs (list)
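
Pulling these together, a sketch of a configuration file that uses every field (the values are illustrative and follow the examples from earlier in this post):

# config.yml
channel: conda-forge
target: ./my-custom-channel
requirements:
  - python >=3.9.10,<3.10
exclusions:
  - pypy3.9
disposables:
  - some-build-only-package    # placeholder name
subdirs:
  - win-64
  - noarch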

Download details:

Author: cswartzvi
Source code: https://github.com/cswartzvi/conda-replicate 
License: View license


Instagram Ruby: A Ruby Wrapper for The Instagram REST and Search APIs


This project is not actively maintained. Proceed at your own risk!


The Instagram Ruby Gem

A Ruby wrapper for the Instagram REST and Search APIs

Installation

gem install instagram

Instagram REST and Search APIs

Our developer site documents all the Instagram REST and Search APIs.

Blog

The Developer Blog features news and important announcements about the Instagram Platform. You will also find tutorials and best practices to help you build great platform integrations. Make sure to subscribe to the RSS feed so you don't miss out on new posts: http://developers.instagram.com.

Community

The Stack Overflow community is a great place to ask API related questions or if you need help with your code. Make sure to tag your questions with the Instagram tag to get fast answers from other fellow developers and members of the Instagram team.

Does your project or organization use this gem?

Add it to the apps wiki!

Sample Application

require "sinatra"
require "instagram"

enable :sessions

CALLBACK_URL = "http://localhost:4567/oauth/callback"

Instagram.configure do |config|
  config.client_id = "YOUR_CLIENT_ID"
  config.client_secret = "YOUR_CLIENT_SECRET"
  # For secured endpoints only
  #config.client_ips = '<Comma separated list of IPs>'
end

get "/" do
  '<a href="/oauth/connect">Connect with Instagram</a>'
end

get "/oauth/connect" do
  redirect Instagram.authorize_url(:redirect_uri => CALLBACK_URL)
end

get "/oauth/callback" do
  response = Instagram.get_access_token(params[:code], :redirect_uri => CALLBACK_URL)
  session[:access_token] = response.access_token
  redirect "/nav"
end

get "/nav" do
  html =
  """
    <h1>Ruby Instagram Gem Sample Application</h1>
    <ol>
      <li><a href='/user_recent_media'>User Recent Media</a> Calls user_recent_media - Get a list of a user's most recent media</li>
      <li><a href='/user_media_feed'>User Media Feed</a> Calls user_media_feed - Get the currently authenticated user's media feed uses pagination</li>
      <li><a href='/location_recent_media'>Location Recent Media</a> Calls location_recent_media - Get a list of recent media at a given location, in this case, the Instagram office</li>
      <li><a href='/media_search'>Media Search</a> Calls media_search - Get a list of media close to a given latitude and longitude</li>
      <li><a href='/media_popular'>Popular Media</a> Calls media_popular - Get a list of the overall most popular media items</li>
      <li><a href='/user_search'>User Search</a> Calls user_search - Search for users on instagram, by name or username</li>
      <li><a href='/location_search'>Location Search</a> Calls location_search - Search for a location by lat/lng</li>
      <li><a href='/location_search_4square'>Location Search - 4Square</a> Calls location_search - Search for a location by Foursquare ID (v2)</li>
      <li><a href='/tags'>Tags</a>Search for tags, view tag info and get media by tag</li>
      <li><a href='/limits'>View Rate Limit and Remaining API calls</a>View remaining and ratelimit info.</li>
    </ol>
  """
  html
end

get "/user_recent_media" do
  client = Instagram.client(:access_token => session[:access_token])
  user = client.user
  html = "<h1>#{user.username}'s recent media</h1>"
  for media_item in client.user_recent_media
    html << "<div style='float:left;'><img src='#{media_item.images.thumbnail.url}'><br/> <a href='/media_like/#{media_item.id}'>Like</a>  <a href='/media_unlike/#{media_item.id}'>Un-Like</a>  <br/>LikesCount=#{media_item.likes[:count]}</div>"
  end
  html
end

get '/media_like/:id' do
  client = Instagram.client(:access_token => session[:access_token])
  client.like_media("#{params[:id]}")
  redirect "/user_recent_media"
end

get '/media_unlike/:id' do
  client = Instagram.client(:access_token => session[:access_token])
  client.unlike_media("#{params[:id]}")
  redirect "/user_recent_media"
end

get "/user_media_feed" do
  client = Instagram.client(:access_token => session[:access_token])
  user = client.user
  html = "<h1>#{user.username}'s media feed</h1>"
  
  page_1 = client.user_media_feed(777)
  page_2_max_id = page_1.pagination.next_max_id
  page_2 = client.user_recent_media(777, :max_id => page_2_max_id ) unless page_2_max_id.nil?
  html << "<h2>Page 1</h2><br/>"
  for media_item in page_1
    html << "<img src='#{media_item.images.thumbnail.url}'>"
  end
  html << "<h2>Page 2</h2><br/>"
  for media_item in page_2
    html << "<img src='#{media_item.images.thumbnail.url}'>"
  end
  html
end

get "/location_recent_media" do
  client = Instagram.client(:access_token => session[:access_token])
  html = "<h1>Media from the Instagram Office</h1>"
  for media_item in client.location_recent_media(514276)
    html << "<img src='#{media_item.images.thumbnail.url}'>"
  end
  html
end

get "/media_search" do
  client = Instagram.client(:access_token => session[:access_token])
  html = "<h1>Get a list of media close to a given latitude and longitude</h1>"
  for media_item in client.media_search("37.7808851","-122.3948632")
    html << "<img src='#{media_item.images.thumbnail.url}'>"
  end
  html
end

get "/media_popular" do
  client = Instagram.client(:access_token => session[:access_token])
  html = "<h1>Get a list of the overall most popular media items</h1>"
  for media_item in client.media_popular
    html << "<img src='#{media_item.images.thumbnail.url}'>"
  end
  html
end

get "/user_search" do
  client = Instagram.client(:access_token => session[:access_token])
  html = "<h1>Search for users on instagram, by name or usernames</h1>"
  for user in client.user_search("instagram")
    html << "<li> <img src='#{user.profile_picture}'> #{user.username} #{user.full_name}</li>"
  end
  html
end

get "/location_search" do
  client = Instagram.client(:access_token => session[:access_token])
  html = "<h1>Search for a location by lat/lng with a radius of 5000m</h1>"
  for location in client.location_search("48.858844","2.294351","5000")
    html << "<li> #{location.name} <a href='https://www.google.com/maps/preview/@#{location.latitude},#{location.longitude},19z'>Map</a></li>"
  end
  html
end

get "/location_search_4square" do
  client = Instagram.client(:access_token => session[:access_token])
  html = "<h1>Search for a location by Foursquare ID (v2)</h1>"
  for location in client.location_search("3fd66200f964a520c5f11ee3")
    html << "<li> #{location.name} <a href='https://www.google.com/maps/preview/@#{location.latitude},#{location.longitude},19z'>Map</a></li>"
  end
  html
end

get "/tags" do
  client = Instagram.client(:access_token => session[:access_token])
  html = "<h1>Search for tags, get tag info and get media by tag</h1>"
  tags = client.tag_search('cat')
  html << "<h2>Tag Name = #{tags[0].name}. Media Count =  #{tags[0].media_count}. </h2><br/><br/>"
  for media_item in client.tag_recent_media(tags[0].name)
    html << "<img src='#{media_item.images.thumbnail.url}'>"
  end
  html
end

get "/limits" do
  client = Instagram.client(:access_token => session[:access_token])
  html = "<h1/>View API Rate Limit and calls remaining</h1>"
  response = client.utils_raw_response
  html << "Rate Limit = #{response.headers[:x_ratelimit_limit]}.  <br/>Calls Remaining = #{response.headers[:x_ratelimit_remaining]}"

  html
end

Contributing

In the spirit of free software, everyone is encouraged to help improve this project.

Here are some ways you can contribute:

  • by using alpha, beta, and prerelease versions
  • by reporting bugs
  • by suggesting new features
  • by writing or editing documentation
  • by writing specifications
  • by writing code (no patch is too small: fix typos, add comments, clean up inconsistent whitespace)
  • by refactoring code
  • by closing issues
  • by reviewing patches

Submitting an Issue

We use the GitHub issue tracker to track bugs and features. Before submitting a bug report or feature request, check to make sure it hasn't already been submitted. You can indicate support for an existing issue by voting it up. When submitting a bug report, please include a Gist that includes a stack trace and any details that may be necessary to reproduce the bug, including your gem version, Ruby version, and operating system. Ideally, a bug report should include a pull request with failing specs.

Submitting a Pull Request

  1. Fork the project.
  2. Create a topic branch.
  3. Implement your feature or bug fix.
  4. Add documentation for your feature or bug fix.
  5. Run rake doc:yard. If your changes are not 100% documented, go back to step 4.
  6. Add specs for your feature or bug fix.
  7. Run rake spec. If your changes are not 100% covered, go back to step 6.
  8. Commit and push your changes.
  9. Submit a pull request. Please do not include changes to the gemspec, version, or history file. (If you want to create your own version for some reason, please do so in a separate commit.)
  10. If you haven't already, complete the Contributor License Agreement ("CLA").

Contributor License Agreement ("CLA")


In order to accept your pull request, we need you to submit a CLA. You only need to do this once to work on any of Instagram's or Facebook's open source projects.

Complete your CLA here: https://code.facebook.com/cla

Copyright

Copyright (c) 2014, Facebook, Inc. All rights reserved. By contributing to Instagram Ruby Gem, you agree that your contributions will be licensed under its BSD license. See LICENSE for details.


Author: facebookarchive
Source code: https://github.com/facebookarchive/instagram-ruby-gem
License: View license

#ruby 

Instagram Ruby: A Ruby Wrapper for The Instagram REST and Search APIs

Chewy: An ODM (Object Document Mapper) on Ruby

Chewy

Chewy is an ODM (Object Document Mapper), built on top of the official Elasticsearch client.

Why Chewy?

In this section we'll cover why you might want to use Chewy instead of the official elasticsearch-ruby client gem.

Every index is observable by all the related models.

Most indexed models are related to other models, and sometimes it is necessary to denormalize this related data and store it in the same document. For example, you may need to index an array of tags together with an article. Chewy allows you to specify an updateable index for every model separately, so the corresponding articles will be reindexed on any tag update.

Bulk import everywhere.

Chewy utilizes the bulk ES API for full reindexing or index updates. It also uses atomic updates. All the changed objects are collected inside the atomic block and the index is updated once at the end with all the collected objects. See Chewy.strategy(:atomic) for more details.

Powerful querying DSL.

Chewy has an ActiveRecord-style query DSL. It is chainable, mergeable and lazy, so you can produce queries in the most efficient way. It also has object-oriented query and filter builders.

Support for ActiveRecord.

Installation

Add this line to your application's Gemfile:

gem 'chewy'

And then execute:

$ bundle

Or install it yourself as:

$ gem install chewy

Compatibility

Ruby

Chewy is compatible with MRI 2.6-3.0¹.

¹ Ruby 3 is only supported with Rails 6.1

Elasticsearch compatibility matrix

Chewy version   Elasticsearch version
7.2.x           7.x
7.1.x           7.x
7.0.x           6.8, 7.x
6.0.0           5.x, 6.x
5.x             5.x, limited support for 1.x & 2.x

Important: Chewy doesn't follow SemVer, so you should always check the release notes before upgrading. The major version is linked to the newest supported Elasticsearch and the minor version bumps may include breaking changes.

See our migration guide for detailed upgrade instructions between various Chewy versions.

Active Record

Active Record 5.2, 6.0 and 6.1 are supported by all Chewy versions.

Getting Started

Chewy provides functionality for Elasticsearch index handling, document import, mappings, index update strategies and a chainable query DSL.

Minimal client setting

Create config/initializers/chewy.rb with this line:

Chewy.settings = {host: 'localhost:9250'}

And run rails g chewy:install to generate chewy.yml:

# config/chewy.yml
# separate environment configs
test:
  host: 'localhost:9250'
  prefix: 'test'
development:
  host: 'localhost:9200'

Elasticsearch

Make sure you have Elasticsearch up and running. You can install it locally, but the easiest way is to use Docker:

$ docker run --rm --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.11.1

Index

Create app/chewy/users_index.rb with User Index:

class UsersIndex < Chewy::Index
  settings analysis: {
    analyzer: {
      email: {
        tokenizer: 'keyword',
        filter: ['lowercase']
      }
    }
  }

  index_scope User
  field :first_name
  field :last_name
  field :email, analyzer: 'email'
end

Model

Add User model, table and migrate it:

$ bundle exec rails g model User first_name last_name email
$ bundle exec rails db:migrate

Add update_index to app/models/user.rb:

class User < ApplicationRecord
  update_index('users') { self }
end

Example of data request

  1. Once a record is created (e.g. via the Rails console), it is imported into the users index too:
User.create(
  first_name: "test1",
  last_name: "test1",
  email: 'test1@example.com',
  # other fields
)
# UsersIndex Import (355.3ms) {:index=>1}
# => #<User id: 1, first_name: "test1", last_name: "test1", email: "test1@example.com", # other fields>
  2. A query could then be exposed in a given UsersController:
def search
  @users = UsersIndex.query(query_string: { fields: [:first_name, :last_name, :email, ...], query: search_params[:query], default_operator: 'and' })
  render json: @users.to_json, status: :ok
end

private

def search_params
  params.permit(:query, :page, :per)
end
  3. A request against http://localhost:3000/users/search?query=test1@example.com then returns a response like:
[
  {
    "attributes":{
      "id":"1",
      "first_name":"test1",
      "last_name":"test1",
      "email":"test1@example.com",
      ...
      "_score":0.9808291,
      "_explanation":null
    },
    "_data":{
      "_index":"users",
      "_type":"_doc",
      "_id":"1",
      "_score":0.9808291,
      "_source":{
        "first_name":"test1",
        "last_name":"test1",
        "email":"test1@example.com",
        ...
      }
    }
  }
]

Usage and configuration

Client settings

To configure the Chewy client you need to add chewy.rb file with Chewy.settings hash:

# config/initializers/chewy.rb
Chewy.settings = {host: 'localhost:9250'} # do not use environments

And add chewy.yml configuration file.

You can create chewy.yml manually or run rails g chewy:install to generate it:

# config/chewy.yml
# separate environment configs
test:
  host: 'localhost:9250'
  prefix: 'test'
development:
  host: 'localhost:9200'

The resulting config merges both hashes. Client options are passed as is to Elasticsearch::Transport::Client except for the :prefix, which is used internally by Chewy to create prefixed index names:

  Chewy.settings = {prefix: 'test'}
  UsersIndex.index_name # => 'test_users'

The logger may be set explicitly:

Chewy.logger = Logger.new(STDOUT)

See config.rb for more details.

AWS Elasticsearch

If you would like to use AWS's Elasticsearch with an IAM user policy, you will need to sign your requests for the es:* action by injecting the appropriate headers, which you can do by passing a proc to transport_options. You'll need an additional gem for the Faraday middleware: add gem 'faraday_middleware-aws-sigv4' to your Gemfile.

require 'faraday_middleware/aws_sigv4'

Chewy.settings = {
  host: 'http://my-es-instance-on-aws.us-east-1.es.amazonaws.com:80',
  port: 80, # 443 for https host
  transport_options: {
    headers: { content_type: 'application/json' },
    proc: -> (f) do
        f.request :aws_sigv4,
                  service: 'es',
                  region: 'us-east-1',
                  access_key_id: ENV['AWS_ACCESS_KEY'],
                  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
    end
  }
}

Index definition

  1. Create /app/chewy/users_index.rb
class UsersIndex < Chewy::Index

end
  2. Define the index scope (you can omit this part if you don't need to specify a scope, i.e. you use PORO objects for import, or options)
class UsersIndex < Chewy::Index
  index_scope User.active # or just the model instead of a scope: index_scope User
end
  3. Add some mappings
class UsersIndex < Chewy::Index
  index_scope User.active.includes(:country, :badges, :projects)
  field :first_name, :last_name # multiple fields without additional options
  field :email, analyzer: 'email' # Elasticsearch-related options
  field :country, value: ->(user) { user.country.name } # custom value proc
  field :badges, value: ->(user) { user.badges.map(&:name) } # passing array values to index
  field :projects do # the same block syntax for multi_field, if `:type` is specified
    field :title
    field :description # default data type is `text`
    # additional top-level objects passed to value proc:
    field :categories, value: ->(project, user) { project.categories.map(&:name) if user.active? }
  end
  field :rating, type: 'integer' # custom data type
  field :created, type: 'date', include_in_all: false,
    value: ->{ created_at } # value proc for source object context
end

See here for mapping definitions.

  4. Add some index-related settings. Analyzer repositories might be used as well. See Chewy::Index.settings docs for details:
class UsersIndex < Chewy::Index
  settings analysis: {
    analyzer: {
      email: {
        tokenizer: 'keyword',
        filter: ['lowercase']
      }
    }
  }

  index_scope User.active.includes(:country, :badges, :projects)
  root date_detection: false do
    template 'about_translations.*', type: 'text', analyzer: 'standard'

    field :first_name, :last_name
    field :email, analyzer: 'email'
    field :country, value: ->(user) { user.country.name }
    field :badges, value: ->(user) { user.badges.map(&:name) }
    field :projects do
      field :title
      field :description
    end
    field :about_translations, type: 'object' # pass object type explicitly if necessary
    field :rating, type: 'integer'
    field :created, type: 'date', include_in_all: false,
      value: ->{ created_at }
  end
end

See index settings here. See root object settings here.

See mapping.rb for more details.

  5. Add model-observing code
class User < ActiveRecord::Base
  update_index('users') { self } # specifying index and back-reference
                                      # for updating after user save or destroy
end

class Country < ActiveRecord::Base
  has_many :users

  update_index('users') { users } # return single object or collection
end

class Project < ActiveRecord::Base
  update_index('users') { user if user.active? } # you can return even `nil` from the back-reference
end

class Book < ActiveRecord::Base
  update_index(->(book) {"books_#{book.language}"}) { self } # dynamic index name with proc.
                                                             # For book with language == "en"
                                                             # this code will generate `books_en`
end

Also, you can use the second argument for method name passing:

update_index('users', :self)
update_index('users', :users)

In the case of a belongs_to association you may need to update both associated objects, previous and current:

class City < ActiveRecord::Base
  belongs_to :country

  update_index('cities') { self }
  update_index 'countries' do
    previous_changes['country_id'] || country
  end
end

Default import options

Every index has default_import_options configuration to specify, suddenly, default import options:

class ProductsIndex < Chewy::Index
  index_scope Post.includes(:tags)
  default_import_options batch_size: 100, bulk_size: 10.megabytes, refresh: false

  field :name
  field :tags, value: -> { tags.map(&:name) }
end

See import.rb for available options.

Multi (nested) and object field types

To define an objects field you can simply nest fields in the DSL:

field :projects do
  field :title
  field :description
end

This will automatically set the type of the root field to object. You may also specify type: 'object' explicitly.

To define a multi field you have to specify any type except for object or nested in the root field:

field :full_name, type: 'text', value: ->{ full_name.strip } do
  field :ordered, analyzer: 'ordered'
  field :untouched, type: 'keyword'
end

The value: option for internal fields will no longer be effective.

Geo Point fields

You can use Elasticsearch's geo mapping with the geo_point field type, allowing you to query, filter and order by latitude and longitude. You can use the following hash format:

field :coordinates, type: 'geo_point', value: ->{ {lat: latitude, lon: longitude} }

or by using nested fields:

field :coordinates, type: 'geo_point' do
  field :lat, value: ->{ latitude }
  field :lon, value: ->{ longitude }
end

See the section on Script fields for details on calculating distance in a search.

Join fields

You can use a join field to implement parent-child relationships between documents. It replaces the old parent_id-based parent-child mapping.

To use it, you need to pass relations and join (with type and id) options:

field :hierarchy_link, type: :join, relations: {question: %i[answer comment], answer: :vote, vote: :subvote}, join: {type: :comment_type, id: :commented_id}

assuming you have comment_type and commented_id fields in your model.

Note that when you reindex a parent, its children and grandchildren will be reindexed as well. This may require additional queries to the primary database and to Elasticsearch.

Also note that the join field doesn't support crutches (it should be a field directly defined on the model).

Crutches™ technology

Assume you are defining your index like this (product has_many categories through product_categories):

class ProductsIndex < Chewy::Index
  index_scope Product.includes(:categories)
  field :name
  field :category_names, value: ->(product) { product.categories.map(&:name) } # or shorter just -> { categories.map(&:name) }
end

Then the Chewy reindexing flow will look like the following pseudo-code:

Product.includes(:categories).find_in_batches(1000) do |batch|
  bulk_body = batch.map do |object|
    {name: object.name, category_names: object.categories.map(&:name)}.to_json
  end
  # here we are sending every batch of data to ES
  Chewy.client.bulk bulk_body
end

If you run into complicated cases where associations are not applicable, you can replace Rails associations with Chewy Crutches™ technology:

class ProductsIndex < Chewy::Index
  index_scope Product
  crutch :categories do |collection| # collection here is a current batch of products
    # data is fetched with a lightweight query without objects initialization
    data = ProductCategory.joins(:category).where(product_id: collection.map(&:id)).pluck(:product_id, 'categories.name')
    # then we have to convert fetched data to appropriate format
    # this will return our data in structure like:
    # {123 => ['sweets', 'juices'], 456 => ['meat']}
    data.each.with_object({}) { |(id, name), result| (result[id] ||= []).push(name) }
  end

  field :name
  # simply use crutch-fetched data as a value:
  field :category_names, value: ->(product, crutches) { crutches.categories[product.id] }
end

An example flow will look like this:

Product.includes(:categories).find_in_batches(1000) do |batch|
  crutches[:categories] = ProductCategory.joins(:category).where(product_id: batch.map(&:id)).pluck(:product_id, 'categories.name')
    .each.with_object({}) { |(id, name), result| (result[id] ||= []).push(name) }

  bulk_body = batch.map do |object|
    {name: object.name, category_names: crutches[:categories][object.id]}.to_json
  end
  Chewy.client.bulk bulk_body
end

So Chewy Crutches™ technology is able to increase your indexing performance in some cases up to a hundredfold or even more, depending on the complexity of your associations.

Witchcraft™ technology

One more experimental technology to increase import performance. As you know, Chewy defines a value proc for every imported field in the mapping, so at import time each of these procs is executed on the imported object to extract the resulting document. For performance it would be better to use one big proc that returns the whole document instead. So the basic idea of Witchcraft™ technology is to compile a single document-returning proc from the index definition.

index_scope Product
witchcraft!

field :title
field :tags, value: -> { tags.map(&:name) }
field :categories do
  field :name, value: -> (product, category) { category.name }
  field :type, value: -> (product, category, crutch) { crutch.types[category.name] }
end

The index definition above will be compiled to something close to:

-> (object, crutches) do
  {
    title: object.title,
    tags: object.tags.map(&:name),
    categories: object.categories.map do |object2|
      {
        name: object2.name,
        type: crutches.types[object2.name]
      }
    end
  }
end

And don't even ask how it is possible, it is witchcraft. Obviously, not every kind of definition can be compiled. There are some restrictions:

  1. Use reasonable formatting to make method_source be able to extract field value proc sources.
  2. Value procs with splat arguments are not supported right now.
  3. If you are generating fields dynamically, use a value proc with arguments; argumentless value procs are not supported yet:
[:first_name, :last_name].each do |name|
  field name, value: -> (o) { o.send(name) }
end

However, it is quite possible that your index definition will be supported by Witchcraft™ technology out of the box in most cases.

Raw Import

Another way to speed up import time is Raw Imports. This technology is only available in ActiveRecord adapter. Very often, ActiveRecord model instantiation is what consumes most of the CPU and RAM resources. Precious time is wasted on converting, say, timestamps from strings and then serializing them back to strings. Chewy can operate on raw hashes of data directly obtained from the database. All you need is to provide a way to convert that hash to a lightweight object that mimics the behaviour of the normal ActiveRecord object.

class LightweightProduct
  def initialize(attributes)
    @attributes = attributes
  end

  # Depending on the database, `created_at` might
  # be in different formats. In PostgreSQL, for example,
  # you might see the following format:
  #   "2016-03-22 16:23:22"
  #
  # Taking into account that Elastic expects something different,
  # one might do something like the following, just to avoid
  # unnecessary String -> DateTime -> String conversion.
  #
  #   "2016-03-22 16:23:22" -> "2016-03-22T16:23:22Z"
  def created_at
    @attributes['created_at'].tr(' ', 'T') << 'Z'
  end
end

index_scope Product
default_import_options raw_import: ->(hash) {
  LightweightProduct.new(hash)
}

field :created_at, 'datetime'

Also, you can pass :raw_import option to the import method explicitly.
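
For example, a minimal sketch (reusing the hypothetical LightweightProduct wrapper defined above) of passing the option at call time:

# one-off import that bypasses ActiveRecord instantiation
ProductsIndex.import(raw_import: ->(hash) { LightweightProduct.new(hash) })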

Index creation during import

By default, when you perform an import, Chewy checks whether the index exists and creates it if it is absent. You can turn this feature off to decrease the Elasticsearch hit count. To do so, set the skip_index_creation_on_import parameter to true in your config/chewy.yml.

Skip record fields during import

You can use ignore_blank: true to skip fields that return true for the .blank? method:

index_scope Country
field :id
field :cities, ignore_blank: true do
  field :id
  field :name
  field :surname, ignore_blank: true
  field :description
end

Default values for different types

By default ignore_blank is false on every type except geo_point.

Journaling

You can record all actions that were made to a separate journal index in Elasticsearch. When you create/update/destroy your documents, the action is saved in this special index. If you do something with a batch of documents (e.g. during an index reset) it is saved as one record, including the primary keys of each affected document. A typical journal record looks like this:

{
  "action": "index",
  "object_id": [1, 2, 3],
  "index_name": "...",
  "created_at": "<timestamp>"
}

This feature is turned off by default, but you can turn it on by setting the journal option to true in config/chewy.yml. You can also specify the journal index name. For example:

# config/chewy.yml
production:
  journal: true
  journal_name: my_super_journal

You can also provide this option while importing an index:

CityIndex.import journal: true

Or as a default import option for an index:

class CityIndex < Chewy::Index
  index_scope City
  default_import_options journal: true
end

You may be wondering why you need this. The answer is simple: to avoid losing data.

Imagine that you reset your index in a zero-downtime manner (i.e. to a separate index) while somebody keeps updating the data frequently (in the old index). All these actions will be written to the journal index, and you'll be able to apply them after the index reset using the Chewy::Journal interface.

Index manipulation

UsersIndex.delete # destroy index if it exists
UsersIndex.delete!

UsersIndex.create
UsersIndex.create! # use bang or non-bang methods

UsersIndex.purge
UsersIndex.purge! # deletes then creates index

UsersIndex.import # import with no arguments processes all the data specified in the index_scope definition
UsersIndex.import User.where('rating > 100') # or import specified users scope
UsersIndex.import User.where('rating > 100').to_a # or import specified users array
UsersIndex.import [1, 2, 42] # pass even ids for import, it will be handled in the most effective way
UsersIndex.import User.where('rating > 100'), update_fields: [:email] # if update fields are specified - it will update their values only with the `update` bulk action
UsersIndex.import! # raises an exception in case of any import errors

UsersIndex.reset! # purges index and imports default data for all types

If the passed user is #destroyed?, or satisfies a delete_if index_scope option, or the specified id does not exist in the database, import will perform a delete-from-index action for this object.

index_scope User, delete_if: :deleted_at
index_scope User, delete_if: -> { deleted_at }
index_scope User, delete_if: ->(user) { user.deleted_at }

See actions.rb for more details.

Index update strategies

Assume you've got the following code:

class City < ActiveRecord::Base
  update_index 'cities', :self
end

class CitiesIndex < Chewy::Index
  index_scope City
  field :name
end

If you do something like City.first.save! you'll get an UndefinedUpdateStrategy exception instead of the object being saved and the index updated. This exception forces you to choose an appropriate update strategy for the current context.

If you want to return to the pre-0.7.0 behavior - just set Chewy.root_strategy = :bypass.

:atomic

The main strategy here is :atomic. Assume you have to update a lot of records in the db.

Chewy.strategy(:atomic) do
  City.popular.map(&:do_some_update_action!)
end

Using this strategy delays the index update request until the end of the block. Updated records are aggregated and the index update happens with the bulk API. So this strategy is highly optimized.

:sidekiq

This does the same thing as :atomic, but asynchronously using Sidekiq. Patch Chewy::Strategy::Sidekiq::Worker to customize how index updates are performed.

Chewy.strategy(:sidekiq) do
  City.popular.map(&:do_some_update_action!)
end

The default queue name is chewy; you can customize it via the sidekiq.queue_name setting:

Chewy.settings[:sidekiq] = {queue: :low}

:lazy_sidekiq

This does the same thing as :sidekiq, but with lazy evaluation. Beware that it does not allow you to use any non-persisted record state for indexing or conditions, because the record will be re-fetched from the database asynchronously by Sidekiq. For destroyed records the strategy falls back to :sidekiq, since deleted records cannot be re-fetched from the database.

The purpose of this strategy is to improve the response time of code that updates indexes, as it defers not only the actual ES calls to a background job but also the evaluation of update_index callbacks (for created and updated objects). Like :sidekiq, the index update is asynchronous, so this strategy cannot be used when data and index synchronization is required.

Chewy.strategy(:lazy_sidekiq) do
  City.popular.map(&:do_some_update_action!)
end

The default queue name is chewy; you can customize it via the sidekiq.queue_name setting:

Chewy.settings[:sidekiq] = {queue: :low}

:active_job

This does the same thing as :atomic, but using ActiveJob. This will inherit the ActiveJob configuration settings, including the active_job.queue_adapter setting for the environment. Patch Chewy::Strategy::ActiveJob::Worker to customize how index updates are performed.

Chewy.strategy(:active_job) do
  City.popular.map(&:do_some_update_action!)
end

The default queue name is chewy; you can customize it via the active_job.queue_name setting:

Chewy.settings[:active_job] = {queue: :low}

:urgent

The following strategy is convenient if you are going to update documents in your index one by one.

Chewy.strategy(:urgent) do
  City.popular.map(&:do_some_update_action!)
end

This code will perform City.popular.count separate ES document update requests.

It is convenient for use in e.g. the Rails console with non-block notation:

> Chewy.strategy(:urgent)
> City.popular.map(&:do_some_update_action!)

:bypass

The bypass strategy simply silences index updates.
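
For completeness, a minimal sketch using the same block form as the other strategies:

Chewy.strategy(:bypass) do
  City.popular.map(&:do_some_update_action!) # no index update requests are issued
end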

Nesting

Strategies are designed to allow nesting, so it is possible to redefine the strategy for nested contexts.

Chewy.strategy(:atomic) do
  city1.do_update!
  Chewy.strategy(:urgent) do
    city2.do_update!
    city3.do_update!
    # there will be 2 update index requests for city2 and city3
  end
  city4.do_update!
  # city1 and city4 will be grouped in one index update request
end

Non-block notation

It is possible to nest strategies without blocks:

Chewy.strategy(:urgent)
city1.do_update! # index updated
Chewy.strategy(:bypass)
city2.do_update! # update bypassed
Chewy.strategy.pop
city3.do_update! # index updated again

Designing your own strategies

See strategy/base.rb for more details. See strategy/atomic.rb for an example.

Rails application strategies integration

There are a couple of predefined strategies for your Rails application. Initially, the Rails console uses the :urgent strategy by default, except in the sandbox case. When you are running sandbox it switches to the :bypass strategy to avoid polluting the index.

Migrations are wrapped with the :bypass strategy. Because the main behavior implies that indices are reset after migration, there is no need for extra index updates. Also indexing might be broken during migrations because of the outdated schema.

Controller actions are wrapped with the configurable value of Chewy.request_strategy and defaults to :atomic. This is done at the middleware level to reduce the number of index update requests inside actions.
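
As a sketch, assuming you would rather have controller actions enqueue index updates than perform them inline (the :sidekiq value here is only an example), the request strategy can be switched in an initializer:

# config/initializers/chewy.rb
Chewy.request_strategy = :sidekiq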

It is also a good idea to set up the :bypass strategy inside your test suite, import objects manually only when needed, and use Chewy.massacre to flush test ES indices before every example. This will allow you to minimize unnecessary ES requests and reduce overhead.

RSpec.configure do |config|
  config.before(:suite) do
    Chewy.strategy(:bypass)
  end
end

Elasticsearch client options

All connection options, except :prefix, are passed to Elasticsearch::Client.new (see chewy/lib/chewy.rb).

Here's the relevant Elasticsearch documentation on the subject: https://rubydoc.info/gems/elasticsearch-transport#setting-hosts
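
As an illustrative sketch, such options are simply added to the same settings hash; request_timeout and log below are standard elasticsearch-transport client options, not Chewy-specific ones:

Chewy.settings = {
  host: 'localhost:9200',
  prefix: 'production',  # handled by Chewy itself
  request_timeout: 30,   # passed through to the Elasticsearch client
  log: true              # passed through to the Elasticsearch client
}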

ActiveSupport::Notifications support

Chewy notifies about the following events:

search_query.chewy payload

  • payload[:index]: requested index class
  • payload[:request]: request hash

import_objects.chewy payload

payload[:index]: currently imported index name

payload[:import]: import stats, i.e. the total imported and deleted objects count:

{index: 30, delete: 5}

payload[:errors]: might not exist. Contains errors grouped by error message, with a list of object ids for each:

{index: {
  'error 1 text' => ['1', '2', '3'],
  'error 2 text' => ['4']
}, delete: {
  'delete error text' => ['10', '12']
}}
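
A small sketch of consuming these notifications with plain ActiveSupport::Notifications (the log message itself is just an example):

ActiveSupport::Notifications.subscribe('import_objects.chewy') do |_name, start, finish, _id, payload|
  duration = (finish - start).round(2) # seconds spent on the import
  Rails.logger.info "[chewy] import into #{payload[:index]} took #{duration}s: #{payload[:import]}"
end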

NewRelic integration

To integrate with NewRelic you may use the following example source (config/initializers/chewy.rb):

require 'new_relic/agent/instrumentation/evented_subscriber'

class ChewySubscriber < NewRelic::Agent::Instrumentation::EventedSubscriber
  def start(name, id, payload)
    event = ChewyEvent.new(name, Time.current, nil, id, payload)
    push_event(event)
  end

  def finish(_name, id, _payload)
    pop_event(id).finish
  end

  class ChewyEvent < NewRelic::Agent::Instrumentation::Event
    OPERATIONS = {
      'import_objects.chewy' => 'import',
      'search_query.chewy' => 'search',
      'delete_query.chewy' => 'delete'
    }.freeze

    def initialize(*args)
      super
      @segment = start_segment
    end

    def start_segment
      segment = NewRelic::Agent::Transaction::DatastoreSegment.new product, operation, collection, host, port
      if (txn = state.current_transaction)
        segment.transaction = txn
      end
      segment.notice_sql @payload[:request].to_s
      segment.start
      segment
    end

    def finish
      if (txn = state.current_transaction)
        txn.add_segment @segment
      end
      @segment.finish
    end

    private

    def state
      @state ||= NewRelic::Agent::TransactionState.tl_get
    end

    def product
      'Elasticsearch'
    end

    def operation
      OPERATIONS[name]
    end

    def collection
      payload.values_at(:type, :index)
             .reject { |value| value.try(:empty?) }
             .first
             .to_s
    end

    def host
      Chewy.client.transport.hosts.first[:host]
    end

    def port
      Chewy.client.transport.hosts.first[:port]
    end
  end
end

ActiveSupport::Notifications.subscribe(/.chewy$/, ChewySubscriber.new)

Search requests

Quick introduction.

Composing requests

The request DSL has the same chainable nature as AR. The main class is Chewy::Search::Request.

CitiesIndex.query(match: {name: 'London'})

The main methods of the request DSL are query, filter and post_filter; it is possible to pass plain query hashes or use elasticsearch-dsl.

CitiesIndex
  .filter(term: {name: 'Bangkok'})
  .query(match: {name: 'London'})
  .query.not(range: {population: {gt: 1_000_000}})

You can query a set of indexes at once:

CitiesIndex.indices(CountriesIndex).query(match: {name: 'Some'})

See https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html and https://github.com/elastic/elasticsearch-dsl-ruby for more details.

An important part of request manipulation is merging. There are 4 methods to perform it: merge, and, or, not. See Chewy::Search::QueryProxy for details. Also, the only and except methods help to remove unneeded parts of the request.

Every other request part is covered by a bunch of additional methods, see Chewy::Search::Request for details:

CitiesIndex.limit(10).offset(30).order(:name, {population: {order: :desc}})

Request DSL also provides additional scope actions, like delete_all, exists?, count, pluck, etc.
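
For illustration, a brief sketch of a few of these scope actions on the CitiesIndex used above (the field names are assumptions):

CitiesIndex.filter(term: {name: 'London'}).exists?     # => true / false
CitiesIndex.query(match: {name: 'London'}).count       # number of matching documents
CitiesIndex.limit(100).pluck(:name)                    # plucks the field from _source
CitiesIndex.filter(term: {name: 'Bangkok'}).delete_all # removes the matching documents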

Pagination

The request DSL supports pagination with Kaminari. The extension is enabled on initialization if Kaminari is available. See Chewy::Search and Chewy::Search::Pagination::Kaminari for details.
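
A short sketch, assuming Kaminari is available, of the usual page/per chaining:

CitiesIndex.query(match: {name: 'London'}).page(2).per(20)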

Named scopes

Chewy supports named scope functionality. There is no specialized DSL for defining named scopes; it is simply a matter of defining class methods.

See Chewy::Search::Scoping for details.
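
In other words, a named scope is just a class method that returns a request scope. A minimal sketch (the population field is an assumption):

class CitiesIndex < Chewy::Index
  index_scope City

  field :name
  field :population, type: 'integer'

  # a "named scope": a plain class method returning a chainable request
  def self.popular
    filter(range: {population: {gte: 1_000_000}})
  end
end

CitiesIndex.popular.order(:name).limit(10)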

Scroll API

The Elasticsearch scroll API is utilized by several methods: scroll_batches, scroll_hits, scroll_wrappers and scroll_objects.

See Chewy::Search::Scrolling for details.
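
For instance, a hedged sketch of iterating over a large result set in raw batches (the exact keyword arguments may differ between Chewy versions):

CitiesIndex.order(:name).scroll_batches(batch_size: 1000) do |hits|
  hits.each { |hit| puts hit['_source']['name'] } # each batch is an array of raw hits
end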

Loading objects

It is possible to load ORM/ODM source objects with the objects method. To provide additional loading options use the load method:

CitiesIndex.load(scope: -> { active }).to_a # to_a returns `Chewy::Index` wrappers.
CitiesIndex.load(scope: -> { active }).objects # An array of AR source objects.

See Chewy::Search::Loader for more details.

When it is necessary to iterate through both the wrappers and the source objects simultaneously, the object_hash method helps a lot:

scope = CitiesIndex.load(scope: -> { active })
scope.each do |wrapper|
  scope.object_hash[wrapper]
end

Rake tasks

For a Rails application, some index-maintaining rake tasks are defined.

chewy:reset

Performs zero-downtime reindexing as described here. So the rake task creates a new index with a unique suffix and then simply aliases it to the common index name. The previous index is deleted afterwards (see Chewy::Index.reset! for more details).

rake chewy:reset # resets all the existing indices
rake chewy:reset[users] # resets UsersIndex only
rake chewy:reset[users,cities] # resets UsersIndex and CitiesIndex
rake chewy:reset[-users,cities] # resets every index in the application except specified ones

chewy:upgrade

Performs reset exactly the same way as chewy:reset does, but only when the index specification (setting or mapping) was changed.

It works only when index specification is locked in Chewy::Stash::Specification index. The first run will reset all indexes and lock their specifications.

See Chewy::Stash::Specification and Chewy::Index::Specification for more details.

rake chewy:upgrade # upgrades all the existing indices
rake chewy:upgrade[users] # upgrades UsersIndex only
rake chewy:upgrade[users,cities] # upgrades UsersIndex and CitiesIndex
rake chewy:upgrade[-users,cities] # upgrades every index in the application except specified ones

chewy:update

It doesn't create indexes; it simply imports everything into the existing ones and fails if an index has not been created beforehand.

rake chewy:update # updates all the existing indices
rake chewy:update[users] # updates UsersIndex only
rake chewy:update[users,cities] # updates UsersIndex and CitiesIndex
rake chewy:update[-users,cities] # updates every index in the application except UsersIndex and CitiesIndex

chewy:sync

Provides a way to synchronize outdated indexes with the source quickly and without doing a full reset. By default field updated_at is used to find outdated records, but this could be customized by outdated_sync_field as described at Chewy::Index::Syncer.

Arguments are similar to the ones taken by chewy:update task.

See Chewy::Index::Syncer for more details.

rake chewy:sync # synchronizes all the existing indices
rake chewy:sync[users] # synchronizes UsersIndex only
rake chewy:sync[users,cities] # synchronizes UsersIndex and CitiesIndex
rake chewy:sync[-users,cities] # synchronizes every index in the application except UsersIndex and CitiesIndex

chewy:deploy

This rake task is especially useful during the production deploy. It is a combination of chewy:upgrade and chewy:sync and the latter is called only for the indexes that were not reset during the first stage.

It is not possible to specify any particular indexes for this task as it doesn't make much sense.

Right now the approach is: if some data has been updated but the index definition has not changed (i.e. no changes triggering the synchronization algorithm were made), it is much faster to perform a manual partial index update inside data migrations, or even manually after the deploy.

Also, there is always full reset alternative with rake chewy:reset.

Parallelizing rake tasks

Every task described above has its own parallel version. Every parallel rake task takes the number of processes to use as the first argument; the rest of the arguments are exactly the same as for the non-parallel task version.

https://github.com/grosser/parallel gem is required to use these tasks.

If the number of processes is not specified explicitly, the parallel gem tries to derive it automatically.

rake chewy:parallel:reset
rake chewy:parallel:upgrade[4]
rake chewy:parallel:update[4,cities]
rake chewy:parallel:sync[4,-users]
rake chewy:parallel:deploy[4] # performs parallel upgrade and parallel sync afterwards

chewy:journal

This namespace contains two tasks for journal manipulation: chewy:journal:apply and chewy:journal:clean. Both take a time as the first argument (optional for clean) and a list of indexes exactly like the tasks above. The time can be in any format parsable by ActiveSupport.

rake chewy:journal:apply["$(date -v-1H -u +%FT%TZ)"] # apply journaled changes for the past hour
rake chewy:journal:apply["$(date -v-1H -u +%FT%TZ)",users] # apply journaled changes for the past hour on UsersIndex only

RSpec integration

Just add require 'chewy/rspec' to your spec_helper.rb and you will get additional features:

  • update_index helper
  • mock_elasticsearch_response helper to mock an Elasticsearch response
  • mock_elasticsearch_response_sources helper to mock Elasticsearch response sources
  • build_query matcher to compare the request and an expected query (returns true/false)

To use mock_elasticsearch_response and mock_elasticsearch_response_sources helpers add include Chewy::Rspec::Helpers to your tests.

See chewy/rspec/ for more details.
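
As a small sketch of the update_index helper in a spec (the model, index and fields are illustrative, and a strategy that actually performs indexing is assumed):

it 'reindexes the user on save' do
  user = User.create!(first_name: 'test1', last_name: 'test1', email: 'test1@example.com')

  Chewy.strategy(:urgent) do
    expect { user.update!(first_name: 'test2') }.to update_index(UsersIndex).and_reindex(user)
  end
end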

Minitest integration

Add require 'chewy/minitest' to your test_helper.rb, and then, for tests in which you'd like indexing test hooks, include Chewy::Minitest::Helpers.

You can set the :bypass strategy for the test suite, handle index imports manually, and flush test indices using Chewy.massacre. This will help reduce unnecessary ES requests.

But if you need Chewy to index/update models regularly in your test suite, you can specify the :urgent strategy for document indexing. Add Chewy.strategy(:urgent) to test_helper.rb.

Also, you can use additional helpers:

  • mock_elasticsearch_response to mock an Elasticsearch response
  • mock_elasticsearch_response_sources to mock Elasticsearch response sources
  • assert_elasticsearch_query to compare the request and an expected query (returns true/false)

See chewy/minitest/ for more details.

DatabaseCleaner

If you use DatabaseCleaner in your tests with the transaction strategy, you may run into the problem that ActiveRecord models are not indexed automatically on save, even though you set up the callbacks with the update_index method. The issue arises because Chewy indexes data in after_commit callbacks by default, but after_commit callbacks are not run with DatabaseCleaner's transaction strategy. You can solve this by changing the Chewy.use_after_commit_callbacks option. Just add the following initializer to your Rails application:

#config/initializers/chewy.rb
Chewy.use_after_commit_callbacks = !Rails.env.test?

Contributing

  1. Fork it (http://github.com/toptal/chewy/fork)
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Implement your changes, cover it with specs and make sure old specs are passing
  4. Commit your changes (git commit -am 'Add some feature')
  5. Push to the branch (git push origin my-new-feature)
  6. Create new Pull Request

Use the following Rake tasks to control the Elasticsearch cluster while developing, if you prefer native Elasticsearch installation over the dockerized one:

rake elasticsearch:start # start Elasticsearch cluster on 9250 port for tests
rake elasticsearch:stop # stop Elasticsearch

Author: toptal
Source code: https://github.com/toptal/chewy
License: MIT license

#ruby 

Chewy: An ODM (Object Document Mapper) on Ruby
Rocio O'Keefe

Citizens_default_config: A New Citizens Package

TODO: Put a short description of the package here that helps potential users know whether this package might be useful for them.

Features

TODO: List what your package can do. Maybe include images, gifs, or videos.

Getting started

TODO: List prerequisites and provide or point to information on how to start using the package.

Usage

TODO: Include short and useful examples for package users. Add longer examples to /example folder.

const like = 'sample';

Additional information

TODO: Tell users more about the package: where to find more information, how to contribute to the package, how to file issues, what response they can expect from the package authors, and more.

Installing

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add citizens_default_config

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  citizens_default_config: ^0.0.1

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:citizens_default_config/citizens_default_config.dart';

Original article source at: https://pub.dev/packages/citizens_default_config 

#flutter #dart #config 

Citizens_default_config: A New Citizens Package
Sasha Roberts

Split: The Rack Based A/B Testing Framework with Ruby on Rails

Split

     📈 The Rack Based A/B testing framework https://libraries.io/rubygems/split

Split is a rack based A/B testing framework designed to work with Rails, Sinatra or any other rack based app.

Split is heavily inspired by the Abingo and Vanity Rails A/B testing plugins and Resque in its use of Redis.

Split is designed to be hacker friendly, allowing for maximum customisation and extensibility.

Install

Requirements

Split v4.0+ is currently tested with Ruby >= 2.5 and Rails >= 5.2.

If your project requires compatibility with Ruby 2.4.x or older Rails versions, you can try v3.0 or v0.8.0 (for Ruby 1.9.3).

Split uses Redis as a datastore.

Split only supports Redis 4.0 or greater.

If you're on OS X, Homebrew is the simplest way to install Redis:

brew install redis
redis-server /usr/local/etc/redis.conf

You now have a Redis daemon running on port 6379.

Setup

gem install split

Rails

Adding gem 'split' to your Gemfile will autoload it when Rails starts up; as long as you've configured Redis it will 'just work'.

Sinatra

To configure Sinatra with Split you need to enable sessions and mix in the helper methods. Add the following lines at the top of your Sinatra app:

require 'split'

class MySinatraApp < Sinatra::Base
  enable :sessions
  helpers Split::Helper

  get '/' do
    ...
  end
end

Usage

To begin your A/B test use the ab_test method, naming your experiment with the first argument and then the different alternatives which you wish to test on as the other arguments.

ab_test returns one of the alternatives; if a user has already seen that test they will get the same alternative as before, which you can use to split your code on.

It can be used to render different templates, show different text or any other case based logic.

ab_finished is used to mark the completion of an experiment, i.e. a conversion.

Example: View

<% ab_test(:login_button, "/images/button1.jpg", "/images/button2.jpg") do |button_file| %>
  <%= image_tag(button_file, alt: "Login!") %>
<% end %>

Example: Controller

def register_new_user
  # See what level of free points maximizes users' decision to buy replacement points.
  @starter_points = ab_test(:new_user_free_points, '100', '200', '300')
end

Example: Conversion tracking (in a controller!)

def buy_new_points
  # some business logic
  ab_finished(:new_user_free_points)
end

Example: Conversion tracking (in a view)

Thanks for signing up, dude! <% ab_finished(:signup_page_redesign) %>

You can find more examples, tutorials and guides on the wiki.

Statistical Validity

Split has two options for you to use to determine which alternative is the best.

The first option (default on the dashboard) uses a z test (n>30) for the difference between your control and alternative conversion rates to calculate statistical significance. This test will tell you whether an alternative is better or worse than your control, but it will not distinguish between which alternative is the best in an experiment with multiple alternatives. Split will only tell you if your experiment is 90%, 95%, or 99% significant, and this test only works if you have more than 30 participants and 5 conversions for each branch.
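
For reference, one common form of the two-proportion z statistic (Split's exact formula may differ in details such as pooling the variance) is:

z = \frac{p_A - p_C}{\sqrt{\frac{p_A (1 - p_A)}{n_A} + \frac{p_C (1 - p_C)}{n_C}}}

where p_A and p_C are the alternative and control conversion rates and n_A and n_C are the corresponding participant counts.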

As per this blog post on the pitfalls of A/B testing, it is highly recommended that you determine your requisite sample size for each branch before running the experiment. Otherwise, you'll have an increased rate of false positives (experiments which show a significant effect where really there is none).

Here is a sample size calculator for your convenience.

The second option uses simulations from a beta distribution to determine the probability that the given alternative is the winner compared to all other alternatives. You can view these probabilities by clicking on the drop-down menu labeled "Confidence." This option should be used when the experiment has more than just 1 control and 1 alternative. It can also be used for a simple, 2-alternative A/B test.

Calculating the beta-distribution simulations for a large number of experiments can be slow, so the results are cached. You can specify how often they should be recalculated (the default is once per day).

Split.configure do |config|
  config.winning_alternative_recalculation_interval = 3600 # 1 hour
end

Extras

Weighted alternatives

Perhaps you only want to show an alternative to 10% of your visitors because it is very experimental or not yet fully load tested.

To do this you can pass a weight with each alternative in the following ways:

ab_test(:homepage_design, {'Old' => 18}, {'New' => 2})

ab_test(:homepage_design, 'Old', {'New' => 1.0/9})

ab_test(:homepage_design, {'Old' => 9}, 'New')

This will only show the new alternative to visitors 1 in 10 times; the default weight for an alternative is 1.

Overriding alternatives

For development and testing, you may wish to force your app to always return an alternative. You can do this by passing it as a parameter in the url.

If you have an experiment called button_color with alternatives called red and blue used on your homepage, a url such as:

http://myawesomesite.com?ab_test[button_color]=red

will always have red buttons. This won't be stored in your session or count towards the results, unless you set the store_override configuration option.

In the event you want to disable all tests without having to know the individual experiment names, add a SPLIT_DISABLE query parameter.

http://myawesomesite.com?SPLIT_DISABLE=true

It is not required to send SPLIT_DISABLE=false to activate Split.

Rspec Helper

To aid testing with RSpec, write spec/support/split_helper.rb and call use_ab_test(alternatives_by_experiment) in your specs as instructed below:

# Create a file with these contents at 'spec/support/split_helper.rb'
# and ensure it is `require`d in your rails_helper.rb or spec_helper.rb
module SplitHelper

  # Force a specific experiment alternative to always be returned:
  #   use_ab_test(signup_form: "single_page")
  #
  # Force alternatives for multiple experiments:
  #   use_ab_test(signup_form: "single_page", pricing: "show_enterprise_prices")
  #
  def use_ab_test(alternatives_by_experiment)
    allow_any_instance_of(Split::Helper).to receive(:ab_test) do |_receiver, experiment, &block|
      variant = alternatives_by_experiment.fetch(experiment) { |key| raise "Unknown experiment '#{key}'" }
      block.call(variant) unless block.nil?
      variant
    end
  end
end

# Make the `use_ab_test` method available to all specs:
RSpec.configure do |config|
  config.include SplitHelper
end

Now you can call use_ab_test(alternatives_by_experiment) in your specs, for example:

it "registers using experimental signup" do
  use_ab_test experiment_name: "alternative_name"
  post "/signups"
  ...
end

Starting experiments manually

By default new A/B tests will be active right after deployment. In case you would like to start a new test a while after the deploy, you can do so by setting the start_manually configuration option to true.

With this option, tests won't start right after deploy but only after you press the Start button in the Split admin dashboard. If a test is deleted from the Split dashboard, it can only be started again by pressing the Start button once it has been re-initialized.
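
A minimal sketch of enabling this in your Split configuration:

Split.configure do |config|
  config.start_manually = true
end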

Reset after completion

When a user completes a test their session is reset so that they may start the test again in the future.

To stop this behaviour you can pass the following option to the ab_finished method:

ab_finished(:experiment_name, reset: false)

The user will then always see the alternative they started with.

Any old unfinished experiment key will be deleted from the user's data storage if the experiment has been removed, or if it is over and a winner has been chosen. This allows a user to enroll into any new experiment in cases when the allow_multiple_experiments config option is set to false.

Reset experiments manually

By default Split automatically resets the experiment whenever it detects the configuration for an experiment has changed (e.g. you call ab_test with different alternatives). You can prevent this by setting the option reset_manually to true.

You may want to do this when you want to change something, like the variants' names, the metadata about an experiment, etc. without resetting everything.
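
A minimal sketch of turning this on:

Split.configure do |config|
  config.reset_manually = true
end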

Multiple experiments at once

By default Split will avoid users participating in multiple experiments at once. This means you are less likely to skew results by adding in more variation to your tests.

To stop this behaviour and allow users to participate in multiple experiments at once set the allow_multiple_experiments config option to true like so:

Split.configure do |config|
  config.allow_multiple_experiments = true
end

This will allow the user to participate in any number of experiments and belong to any alternative in each experiment. This has the possible downside of a variation in one experiment influencing the outcome of another.

To address this, set the allow_multiple_experiments config option to 'control' like so:

Split.configure do |config|
  config.allow_multiple_experiments = 'control'
end

For this to work, each and every experiment you define must have an alternative named 'control'. This will allow the user to participate in multiple experiments as long as the user belongs to the alternative 'control' in each experiment. As soon as the user belongs to an alternative named something other than 'control' the user may not participate in any more experiments. Calling ab_test() will always return the first alternative without adding the user to that experiment.

Experiment Persistence

Split comes with three built-in persistence adapters for storing users and the alternatives they've been given for each experiment.

By default Split will store the tests for each user in the session.

You can optionally configure Split to use a cookie, Redis, or any custom adapter of your choosing.

Cookies

Split.configure do |config|
  config.persistence = :cookie
end

When using the cookie persistence, Split stores data into an anonymous tracking cookie named 'split', which expires in 1 year. To change that, set the persistence_cookie_length in the configuration (unit of time in seconds).

Split.configure do |config|
  config.persistence = :cookie
  config.persistence_cookie_length = 2592000 # 30 days
end

The data stored consists of the experiment name and the variants the user is in. Example: { "experiment_name" => "variant_a" }

Note: Using cookies depends on ActionDispatch::Cookies or any identical API

Redis

Using Redis will allow ab_users to persist across sessions or machines.

Split.configure do |config|
  config.persistence = Split::Persistence::RedisAdapter.with_config(lookup_by: -> (context) { context.current_user_id })
  # Equivalent
  # config.persistence = Split::Persistence::RedisAdapter.with_config(lookup_by: :current_user_id)
end

Options:

  • lookup_by: method to invoke per request for uniquely identifying ab_users (mandatory configuration)
  • namespace: separate namespace to store these persisted values (default "persistence")
  • expire_seconds: sets TTL for user key. (if a user is in multiple experiments most recent update will reset TTL for all their assignments)
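
Putting the optional settings from the list above together, a hedged sketch (the values are illustrative):

Split.configure do |config|
  config.persistence = Split::Persistence::RedisAdapter.with_config(
    lookup_by: :current_user_id,  # mandatory
    namespace: 'ab_users',        # optional, defaults to "persistence"
    expire_seconds: 2592000)      # optional per-user TTL (30 days)
end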

Dual Adapter

The Dual Adapter allows the use of different persistence adapters for logged-in and logged-out users. A common use case is to use Redis for logged-in users and Cookies for logged-out users.

cookie_adapter = Split::Persistence::CookieAdapter
redis_adapter = Split::Persistence::RedisAdapter.with_config(
    lookup_by: -> (context) { context.send(:current_user).try(:id) },
    expire_seconds: 2592000)

Split.configure do |config|
  config.persistence = Split::Persistence::DualAdapter.with_config(
      logged_in: -> (context) { !context.send(:current_user).try(:id).nil? },
      logged_in_adapter: redis_adapter,
      logged_out_adapter: cookie_adapter)
  config.persistence_cookie_length = 2592000 # 30 days
end

Custom Adapter

Your custom adapter needs to implement the same API as existing adapters. See Split::Persistence::CookieAdapter or Split::Persistence::SessionAdapter for a starting point.

Split.configure do |config|
  config.persistence = YourCustomAdapterClass
end

Trial Event Hooks

You can define methods that will be called at the same time as experiment alternative participation and goal completion.

For example:

Split.configure do |config|
  config.on_trial  = :log_trial # run on every trial
  config.on_trial_choose   = :log_trial_choose # run on trials with new users only
  config.on_trial_complete = :log_trial_complete
end

Set these attributes to a method name available in the same context as the ab_test method. These methods should accept one argument, a Trial instance.

def log_trial(trial)
  logger.info "experiment=%s alternative=%s user=%s" %
    [ trial.experiment.name, trial.alternative, current_user.id ]
end

def log_trial_choose(trial)
  logger.info "[new user] experiment=%s alternative=%s user=%s" %
    [ trial.experiment.name, trial.alternative, current_user.id ]
end

def log_trial_complete(trial)
  logger.info "experiment=%s alternative=%s user=%s complete=true" %
    [ trial.experiment.name, trial.alternative, current_user.id ]
end

Views

If you are running ab_test from a view, you must define your event hook callback as a helper_method in the controller:

helper_method :log_trial_choose

def log_trial_choose(trial)
  logger.info "experiment=%s alternative=%s user=%s" %
    [ trial.experiment.name, trial.alternative, current_user.id ]
end

Experiment Hooks

You can assign a proc that will be called when an experiment is reset or deleted. You can use these hooks to call methods within your application to keep data related to experiments in sync with Split.

For example:

Split.configure do |config|
  # after experiment reset or deleted
  config.on_experiment_reset  = -> (experiment) { } # do something on reset
  config.on_experiment_delete = -> (experiment) { } # do something else on delete
  # before experiment reset or deleted
  config.on_before_experiment_reset  = -> (experiment) { } # do something before reset
  config.on_before_experiment_delete = -> (experiment) { } # do something else before delete
  # after experiment winner had been set
  config.on_experiment_winner_choose = -> (experiment) { } # do something when the winner is chosen
end

Web Interface

Split comes with a Sinatra-based front end to get an overview of how your experiments are doing.

If you are running Rails 2: You can mount this inside your app using Rack::URLMap in your config.ru

require 'split/dashboard'

run Rack::URLMap.new \
  "/"       => Your::App.new,
  "/split" => Split::Dashboard.new

However, if you are using Rails 3 or higher: You can mount this inside your app routes by first adding this to the Gemfile:

gem 'split', require: 'split/dashboard'

Then add this to config/routes.rb:

mount Split::Dashboard, at: 'split'

You may want to password-protect that page; you can do so with Rack::Auth::Basic (in your Split initializer file):

# Rails apps or apps that already depend on activesupport
Split::Dashboard.use Rack::Auth::Basic do |username, password|
  # Protect against timing attacks:
  # - Use & (do not use &&) so that it doesn't short circuit.
  # - Use digests to stop length information leaking
  ActiveSupport::SecurityUtils.secure_compare(::Digest::SHA256.hexdigest(username), ::Digest::SHA256.hexdigest(ENV["SPLIT_USERNAME"])) &
    ActiveSupport::SecurityUtils.secure_compare(::Digest::SHA256.hexdigest(password), ::Digest::SHA256.hexdigest(ENV["SPLIT_PASSWORD"]))
end

# Apps without activesupport
Split::Dashboard.use Rack::Auth::Basic do |username, password|
  # Protect against timing attacks:
  # - Use & (do not use &&) so that it doesn't short circuit.
  # - Use digests to stop length information leaking
  Rack::Utils.secure_compare(::Digest::SHA256.hexdigest(username), ::Digest::SHA256.hexdigest(ENV["SPLIT_USERNAME"])) &
    Rack::Utils.secure_compare(::Digest::SHA256.hexdigest(password), ::Digest::SHA256.hexdigest(ENV["SPLIT_PASSWORD"]))
end

You can even use Devise or any other Warden-based authentication method to authorize users. Just replace mount Split::Dashboard, :at => 'split' in config/routes.rb with the following:

match "/split" => Split::Dashboard, anchor: false, via: [:get, :post, :delete], constraints: -> (request) do
  request.env['warden'].authenticated? # are we authenticated?
  request.env['warden'].authenticate! # authenticate if not already
  # or even check any other condition such as request.env['warden'].user.is_admin?
end

More information on this here

Screenshot

(Screenshot of the Split dashboard web interface.)

Configuration

You can override the default configuration options of Split like so:

Split.configure do |config|
  config.db_failover = true # handle Redis errors gracefully
  config.db_failover_on_db_error = -> (error) { Rails.logger.error(error.message) }
  config.allow_multiple_experiments = true
  config.enabled = true
  config.persistence = Split::Persistence::SessionAdapter
  #config.start_manually = false ## new test will have to be started manually from the admin panel. default false
  #config.reset_manually = false ## if true, it never resets the experiment data, even if the configuration changes
  config.include_rails_helper = true
  config.redis = "redis://custom.redis.url:6380"
end

Split looks for the Redis host in the REDIS_URL environment variable and defaults to redis://localhost:6379 if it is not specified in the configure block.

On platforms like Heroku, Split will use the value of REDIS_PROVIDER to determine which env variable key to use when retrieving the host config. This defaults to REDIS_URL.

Filtering

In most scenarios you don't want A/B testing enabled for web spiders, robots or special groups of users. Split provides functionality to filter these out based on a predefined, extensible list of bots, a list of IP addresses, or custom exclusion logic.

Split.configure do |config|
  # bot config
  config.robot_regex = /my_custom_robot_regex/ # or
  config.bots['newbot'] = "Description for bot with 'newbot' user agent, which will be added to config.robot_regex for exclusion"

  # IP config
  config.ignore_ip_addresses << '81.19.48.130' # or regex: /81\.19\.48\.[0-9]+/

  # or provide your own filter functionality, the default is proc{ |request| is_robot? || is_ignored_ip_address? || is_preview? }
  config.ignore_filter = -> (request) { CustomExcludeLogic.excludes?(request) }
end

Experiment configuration

Instead of providing the experiment options inline, you can store them in a hash. This hash can control your experiment's alternatives, weights, algorithm and if the experiment resets once finished:

Split.configure do |config|
  config.experiments = {
    my_first_experiment: {
      alternatives: ["a", "b"],
      resettable: false
    },
    :my_second_experiment => {
      algorithm: 'Split::Algorithms::Whiplash',
      alternatives: [
        { name: "a", percent: 67 },
        { name: "b", percent: 33 }
      ]
    }
  }
end

You can also store your experiments in a YAML file:

Split.configure do |config|
  config.experiments = YAML.load_file "config/experiments.yml"
end

You can then define the YAML file like:

my_first_experiment:
  alternatives:
    - a
    - b
my_second_experiment:
  alternatives:
    - name: a
      percent: 67
    - name: b
      percent: 33
  resettable: false

This simplifies the calls from your code:

ab_test(:my_first_experiment)

and:

ab_finished(:my_first_experiment)

You can also add meta data for each experiment, which is very useful when you need more than an alternative name to change behaviour:

Split.configure do |config|
  config.experiments = {
    my_first_experiment: {
      alternatives: ["a", "b"],
      metadata: {
        "a" => {"text" => "Have a fantastic day"},
        "b" => {"text" => "Don't get hit by a bus"}
      }
    }
  }
end

or, as YAML:

my_first_experiment:
  alternatives:
    - a
    - b
  metadata:
    a:
      text: "Have a fantastic day"
    b:
      text: "Don't get hit by a bus"

This allows for some advanced experiment configuration using methods like:

trial.alternative.name # => "a"

trial.metadata['text'] # => "Have a fantastic day"

or in views:

<% ab_test("my_first_experiment") do |alternative, meta| %>
  <%= alternative %>
  <small><%= meta['text'] %></small>
<% end %>

The keys used in metadata should be Strings.

Metrics

You might wish to track generic metrics, such as conversions, and use those to complete multiple different experiments without adding more to your code. You can use the configuration hash to do this, thanks to the :metric option.

Split.configure do |config|
  config.experiments = {
    my_first_experiment: {
      alternatives: ["a", "b"],
      metric: :my_metric
    }
  }
end

Your code may then track a completion using the metric instead of the experiment name:

ab_finished(:my_metric)

You can also create a new metric by instantiating and saving a new Metric object.

metric = Split::Metric.new(name: :my_metric)
metric.save

Goals

You might wish to allow an experiment to have multiple, distinguishable goals. The API to define goals for an experiment is this:

ab_test({link_color: ["purchase", "refund"]}, "red", "blue")

or you can define them in a configuration file:

Split.configure do |config|
  config.experiments = {
    link_color: {
      alternatives: ["red", "blue"],
      goals: ["purchase", "refund"]
    }
  }
end

To complete a goal conversion, you do it like:

ab_finished(link_color: "purchase")

Note that if you pass additional options, that should be a separate hash:

ab_finished({ link_color: "purchase" }, reset: false)

NOTE: This does not mean that a single experiment can complete more than one goal.

Once you finish one of the goals, the test is considered to be completed, and finishing the other goal will no longer register. (Assuming the test runs with reset: false.)

Good Example: Test if listing Plan A first results in more conversions to Plan A (goal: "plana_conversion") or Plan B (goal: "planb_conversion").

Bad Example: Test if button color increases conversion rate through multiple steps of a funnel. THIS WILL NOT WORK.

Bad Example: Test both how button color affects signup and how it affects login, at the same time. THIS WILL NOT WORK.
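
For the good example above, a hedged sketch of how the experiment and its goals might be configured (the experiment name plan_ordering and its alternative names are illustrative, not taken from the Split docs):

Split.configure do |config|
  config.experiments = {
    plan_ordering: {
      alternatives: ["plan_a_first", "plan_b_first"],
      goals: ["plana_conversion", "planb_conversion"]
    }
  }
end

# Later, when the visitor converts to one of the plans:
ab_finished(plan_ordering: "plana_conversion")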

Combined Experiments

If you want to test how button color affects signup and how it affects login at the same time, use combined experiments. Configure like so:

Split.configuration.experiments = {
  :button_color_experiment => {
    :alternatives => ["blue", "green"],
    :combined_experiments => ["button_color_on_signup", "button_color_on_login"]
  }
}

Starting the combined test starts all combined experiments:

ab_combined_test(:button_color_experiment)

Finish each combined test as normal:

ab_finished(:button_color_on_login)
ab_finished(:button_color_on_signup)

Additional configuration:

  • Be sure to enable allow_multiple_experiments (see the sketch below)
  • In Sinatra, include the CombinedExperimentsHelper:

helpers Split::CombinedExperimentsHelper
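
Enabling multiple experiments is a one-line change, using the allow_multiple_experiments option shown in the Configuration section above:

Split.configure do |config|
  config.allow_multiple_experiments = true
end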

DB failover solution

Because Redis has no automatic failover mechanism, you can switch on the db_failover config option so that ab_test and ab_finished will not crash in the case of a database failure; ab_test always delivers alternative A (the first one) in that case.

It's also possible to set a db_failover_on_db_error callback (a proc), for example to log these errors via Rails.logger.
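
A minimal sketch combining both options (the same settings also appear in the Configuration example above):

Split.configure do |config|
  config.db_failover = true # serve the first alternative instead of raising when Redis is unavailable
  config.db_failover_on_db_error = -> (error) { Rails.logger.error(error.message) }
end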

Redis

You may want to change the Redis host and port Split connects to, or set various other options at startup.

Split has a redis setter which can be given a string or a Redis object. This means if you're already using Redis in your app, Split can re-use the existing connection.

String: Split.redis = 'redis://localhost:6379'

Redis: Split.redis = $redis

For our rails app we have a config/initializers/split.rb file where we load config/split.yml by hand and set the Redis information appropriately.

Here's our config/split.yml:

development: redis://localhost:6379
test: redis://localhost:6379
staging: redis://redis1.example.com:6379
fi: redis://localhost:6379
production: redis://redis1.example.com:6379

And our initializer:

split_config = YAML.load_file(Rails.root.join('config', 'split.yml'))
Split.redis = split_config[Rails.env]

Redis Caching (v4.0+)

In some high-volume usage scenarios, Redis load can be incurred by repeated fetches for fairly static data. Enabling caching will reduce this load.

Split.configuration.cache = true

This currently caches:

  • Split::ExperimentCatalog.find
  • Split::Experiment.start_time
  • Split::Experiment.winner

Namespaces

If you're running multiple, separate instances of Split you may want to namespace the keyspaces so they do not overlap. This is not unlike the approach taken by many memcached clients.

This feature can be provided by the redis-namespace library. To configure Split to use Redis::Namespace, do the following:

  1. Add redis-namespace to your Gemfile:

gem 'redis-namespace'

  2. Configure Split.redis to use a Redis::Namespace instance (possibly in an initializer):

redis = Redis.new(url: ENV['REDIS_URL']) # or whatever config you want
Split.redis = Redis::Namespace.new(:your_namespace, redis: redis)

Outside of a Web Session

Split provides the Helper module to facilitate running experiments inside web sessions.

Alternatively, you can access the underlying Metric, Trial, Experiment and Alternative objects to conduct experiments that are not tied to a web session.

# create a new experiment
experiment = Split::ExperimentCatalog.find_or_create('color', 'red', 'blue')
# create a new trial
trial = Split::Trial.new(:experiment => experiment)
# run trial
trial.choose!
# get the result, returns either red or blue
trial.alternative.name

# if the goal has been achieved, increment the successful completions for this alternative.
if goal_achieved?
  trial.complete!
end

Algorithms

By default, Split ships with Split::Algorithms::WeightedSample, which randomly selects from the possible alternatives for a traditional A/B test. It is possible to specify static weights to favor certain alternatives.

Split::Algorithms::Whiplash is an implementation of a multi-armed bandit algorithm. This algorithm will automatically weight the alternatives based on their relative performance, choosing the better-performing ones more often as trials are completed.

Split::Algorithms::BlockRandomization is an algorithm that ensures equal participation across all alternatives. This algorithm will choose the alternative with the fewest participants. In the event of multiple minimum participant alternatives (i.e. starting a new "Block") the algorithm will choose a random alternative from those minimum participant alternatives.

Users may also write their own algorithms. The default algorithm may be specified globally in the configuration file, or on a per experiment basis using the experiments hash of the configuration file.

To change the algorithm globally for all experiments, use the following in your initializer:

Split.configure do |config|
  config.algorithm = Split::Algorithms::Whiplash
end
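
To override the algorithm for a single experiment instead, set the algorithm key in the experiments hash (the experiment name below is illustrative; the key itself is shown in the Experiment configuration section above):

Split.configure do |config|
  config.experiments = {
    my_experiment: {
      alternatives: ["a", "b", "c"],
      algorithm: 'Split::Algorithms::BlockRandomization'
    }
  }
end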

Extensions

Screencast

Ryan Bates has produced an excellent 10 minute screencast about Split on the RailsCasts site: A/B Testing with Split

Blogposts

Backers

Support us with a monthly donation and help us continue our activities. [Become a backer]

Sponsors

Become a sponsor and get your logo on our README on Github with a link to your site. [Become a sponsor]

Contribute

Please do! Over 70 different people have contributed to the project; you can see them all here: https://github.com/splitrb/split/graphs/contributors.

Development

The source code is hosted at GitHub.

Report issues and feature requests on GitHub Issues.

You can find a discussion forum on Google Groups.

Tests

Run the tests like this:

# Start a Redis server in another tab.
redis-server

bundle
rake spec

A Note on Patches and Pull Requests

  • Fork the project.
  • Make your feature addition or bug fix.
  • Add tests for it. This is important so I don't break it in a future version unintentionally.
  • Add documentation if necessary.
  • Commit. Do not mess with the rakefile, version, or history. (If you want to have your own version, that is fine. But bump the version in a commit by itself, which I can ignore when I pull.)
  • Send a pull request. Bonus points for topic branches.

Code of Conduct

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.


Author: splitrb
Source code: https://github.com/splitrb/split
License: MIT license

#ruby  #ruby-on-rails #redis 


Asset Sync: Synchronises Assets Between Rails and S3

Asset Sync

Synchronises Assets between Rails and S3.

Asset Sync is built to run with the new Rails Asset Pipeline feature introduced in Rails 3.1. After you run bundle exec rake assets:precompile your assets will be synchronised to your S3 bucket, optionally deleting unused files and only uploading the files it needs to.

This was initially built and is intended to work on Heroku but can work on any platform.

Upgrading?

Upgraded from 1.x? Read UPGRADING.md

Installation

Since 2.x, Asset Sync depends on the gem fog-core instead of fog, because fog pulls in many unused storage-provider gems as dependencies.

Asset Sync has no idea which provider will be used, so you are responsible for bundling the right gem for your provider.

In your Gemfile:

gem "asset_sync"
gem "fog-aws"

Or, to use Azure Blob storage, configure it like this:

gem "asset_sync"
gem "gitlab-fog-azure-rm"

# This gem seems unmaintained
# gem "fog-azure-rm"

To use Backblaze B2, add these:

gem "asset_sync"
gem "fog-backblaze"

Extended Installation (Faster sync with turbosprockets)

If you are using Rails 3.2.x, it's possible to improve assets:precompile time; the main source of slowness is the compilation of non-digest assets.

turbo-sprockets-rails3 solves this by only compiling digest assets, thus cutting compile time in half.

NOTE: It will be deprecated in Rails 4 as sprockets-rails has been extracted out of Rails and will only compile digest assets by default.
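
A minimal sketch of wiring it in, assuming the gem is named turbo-sprockets-rails3 and your Rails 3.2.x app keeps asset gems in the assets group (check that project's README for current instructions):

# Gemfile (Rails 3.2.x only)
group :assets do
  gem 'turbo-sprockets-rails3'
end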

Configuration

Rails

Configure config/environments/production.rb to use Amazon S3 as the asset host and ensure precompiling is enabled.

  #config/environments/production.rb
  config.action_controller.asset_host = "//#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com"

Or, to use Google Cloud Storage, configure it like this:

  #config/environments/production.rb
  config.action_controller.asset_host = "//#{ENV['FOG_DIRECTORY']}.storage.googleapis.com"

Or, to use Azure Blob storage, configure it like this:

  #config/environments/production.rb
  config.action_controller.asset_host = "//#{ENV['AZURE_STORAGE_ACCOUNT_NAME']}.blob.core.windows.net/#{ENV['FOG_DIRECTORY']}"

Or, to use Backblaze B2, configure it like this:

  #config/environments/production.rb
  config.action_controller.asset_host = "//f000.backblazeb2.com/file/#{ENV['FOG_DIRECTORY']}"

On HTTPS: the exclusion of any protocol in the asset host declaration above will allow browsers to choose the transport mechanism on the fly. So if your application is available under both HTTP and HTTPS the assets will be served to match.

The only caveat with this is that your S3 bucket name must not contain any periods: mydomain.com.s3.amazonaws.com, for example, would not work under HTTPS because Amazon's SSL certificate would treat the bucket name not as a subdomain of s3.amazonaws.com but as a multi-level subdomain. To avoid this, don't use a period in your bucket name, or switch to the other style of S3 URL:

  config.action_controller.asset_host = "//s3.amazonaws.com/#{ENV['FOG_DIRECTORY']}"

Or, to use Google Cloud Storage, configure it like this:

  config.action_controller.asset_host = "//storage.googleapis.com/#{ENV['FOG_DIRECTORY']}"

Or, to use Azure Blob storage, configure it like this:

  #config/environments/production.rb
  config.action_controller.asset_host = "//#{ENV['AZURE_STORAGE_ACCOUNT_NAME']}.blob.core.windows.net/#{ENV['FOG_DIRECTORY']}"

On a non-default S3 bucket region: if your bucket is set to a region other than the default US Standard (us-east-1), you must use the first style of URL (//#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com) or Amazon will return a 301 Moved Permanently when assets are requested. Note the caveat above about bucket names and periods.

If you wish to have your assets sync to a sub-folder of your bucket instead of into the root, add the following to your production.rb file:

  # store assets in a 'folder' instead of bucket root
  config.assets.prefix = "/production/assets"

Also, ensure the following are defined (in production.rb or application.rb); a combined sketch of these settings follows this list:

  • config.assets.digest is set to true.
  • config.assets.enabled is set to true.

Additionally, if you depend on any configuration that is set up in your initializers you will need to ensure that:

  • config.assets.initialize_on_precompile is set to true
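
Putting those flags together, a minimal sketch for production.rb (the last line is only needed if your initializers must run during precompilation):

  #config/environments/production.rb
  config.assets.digest = true
  config.assets.enabled = true
  # Only if you depend on configuration set up in your initializers:
  config.assets.initialize_on_precompile = true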

AssetSync

AssetSync supports the following methods of configuration.

Using the Built-in Initializer is the default method and is supposed to be used with environment variables. It's the recommended approach for deployments on Heroku.

If you need more control over configuration you will want to use a custom rails initializer.

Configuration using a YAML file (a common strategy for Capistrano deployments) is also supported.

The recommended way to configure asset_sync is with environment variables; however, it's up to you, and it will work fine if you hard-code them too. The main reason environment variables are recommended is so your access keys are not checked into version control.

Built-in Initializer (Environment Variables)

The Built-in Initializer will configure AssetSync based on the contents of your environment variables.

Add your configuration details to heroku

heroku config:add AWS_ACCESS_KEY_ID=xxxx
heroku config:add AWS_SECRET_ACCESS_KEY=xxxx
heroku config:add FOG_DIRECTORY=xxxx
heroku config:add FOG_PROVIDER=AWS
# and optionally:
heroku config:add FOG_REGION=eu-west-1
heroku config:add ASSET_SYNC_GZIP_COMPRESSION=true
heroku config:add ASSET_SYNC_MANIFEST=true
heroku config:add ASSET_SYNC_EXISTING_REMOTE_FILES=keep

Or add to a traditional unix system

export AWS_ACCESS_KEY_ID=xxxx
export AWS_SECRET_ACCESS_KEY=xxxx
export FOG_DIRECTORY=xxxx

Rackspace configuration is also supported

heroku config:add RACKSPACE_USERNAME=xxxx
heroku config:add RACKSPACE_API_KEY=xxxx
heroku config:add FOG_DIRECTORY=xxxx
heroku config:add FOG_PROVIDER=Rackspace

Google Cloud Storage configuration is supported as well. The preferred option is the GCS JSON API, which requires that you create an appropriate service account, generate the signatures and make them accessible to asset_sync at the prescribed location.

heroku config:add FOG_PROVIDER=Google
heroku config:add GOOGLE_PROJECT=xxxx
heroku config:add GOOGLE_JSON_KEY_LOCATION=xxxx
heroku config:add FOG_DIRECTORY=xxxx

If using the S3 API, the following config is required:

heroku config:add FOG_PROVIDER=Google
heroku config:add GOOGLE_STORAGE_ACCESS_KEY_ID=xxxx
heroku config:add GOOGLE_STORAGE_SECRET_ACCESS_KEY=xxxx
heroku config:add FOG_DIRECTORY=xxxx

The Built-in Initializer also sets the AssetSync default for existing_remote_files to keep.

Custom Rails Initializer (config/initializers/asset_sync.rb)

If you want to enable some of the advanced configuration options you will want to create your own initializer.

Run the included generator to create a starting point.

rails g asset_sync:install --provider=Rackspace
rails g asset_sync:install --provider=AWS
rails g asset_sync:install --provider=AzureRM
rails g asset_sync:install --provider=Backblaze

The generator will create a Rails initializer at config/initializers/asset_sync.rb.

AssetSync.configure do |config|
  config.fog_provider = 'AWS'
  config.fog_directory = ENV['FOG_DIRECTORY']
  config.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
  config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
  config.aws_session_token = ENV['AWS_SESSION_TOKEN'] if ENV.key?('AWS_SESSION_TOKEN')

  # Don't delete files from the store
  # config.existing_remote_files = 'keep'
  #
  # Increase upload performance by configuring your region
  # config.fog_region = 'eu-west-1'
  #
  # Set `public` option when uploading file depending on value,
  # Setting to "default" makes asset sync skip setting the option
  # Possible values: true, false, "default" (default: true)
  # config.fog_public = true
  #
  # Change AWS signature version. Default is 4
  # config.aws_signature_version = 4
  #
  # Change canned ACL of uploaded object. Default is unset. Will override fog_public if set.
  # Choose from: private | public-read | public-read-write | aws-exec-read |
  #              authenticated-read | bucket-owner-read | bucket-owner-full-control 
  # config.aws_acl = nil 
  #
  # Change host option in fog (only if you need to)
  # config.fog_host = 's3.amazonaws.com'
  #
  # Change port option in fog (only if you need to)
  # config.fog_port = "9000"
  #
  # Use http instead of https.
  # config.fog_scheme = 'http'
  #
  # Automatically replace files with their equivalent gzip compressed version
  # config.gzip_compression = true
  #
  # Use the Rails generated 'manifest.yml' file to produce the list of files to
  # upload instead of searching the assets directory.
  # config.manifest = true
  #
  # Upload the manifest file also.
  # config.include_manifest = false
  #
  # Upload files concurrently
  # config.concurrent_uploads = false
  #
  # Number of threads when concurrent_uploads is enabled
  # config.concurrent_uploads_max_threads = 10
  #
  # Path to cache file to skip scanning remote
  # config.remote_file_list_cache_file_path = './.asset_sync_remote_file_list_cache.json'
  #
  # Fail silently.  Useful for environments such as Heroku
  # config.fail_silently = true
  #
  # Log silently. Default is `true`. But you can set it to false if more logging message are preferred.
  # Logging messages are sent to `STDOUT` when `log_silently` is falsy
  # config.log_silently = true
  #
  # Allow custom assets to be cacheable. Note: The base filename will be matched
  # If you have an asset with name `app.0b1a4cd3.js`, only `app.0b1a4cd3` will need to be matched
  # only one of `cache_asset_regexp` or `cache_asset_regexps` is allowed.
  # config.cache_asset_regexp = /\.[a-f0-9]{8}$/i
  # config.cache_asset_regexps = [ /\.[a-f0-9]{8}$/i, /\.[a-f0-9]{20}$/i ]
end

YAML (config/asset_sync.yml)

Run the included generator to create a starting point.

rails g asset_sync:install --use-yml --provider=Rackspace
rails g asset_sync:install --use-yml --provider=AWS
rails g asset_sync:install --use-yml --provider=AzureRM
rails g asset_sync:install --use-yml --provider=Backblaze

The generator will create a YAML file at config/asset_sync.yml.

defaults: &defaults
  fog_provider: "AWS"
  fog_directory: "rails-app-assets"
  aws_access_key_id: "<%= ENV['AWS_ACCESS_KEY_ID'] %>"
  aws_secret_access_key: "<%= ENV['AWS_SECRET_ACCESS_KEY'] %>"

  # To use AWS reduced redundancy storage.
  # aws_reduced_redundancy: true
  #
  # You may need to specify what region your storage bucket is in
  # fog_region: "eu-west-1"
  #
  # Change AWS signature version. Default is 4
  # aws_signature_version: 4
  #
  # Change canned ACL of uploaded object. Default is unset. Will override fog_public if set.
  # Choose from: private | public-read | public-read-write | aws-exec-read |
  #              authenticated-read | bucket-owner-read | bucket-owner-full-control 
  # aws_acl: null
  #
  # Change host option in fog (only if you need to)
  # fog_host: "s3.amazonaws.com"
  #
  # Use http instead of https. Default should be "https" (at least for fog-aws)
  # fog_scheme: "http"

  existing_remote_files: keep # Existing pre-compiled assets on S3 will be kept
  # To delete existing remote files.
  # existing_remote_files: delete
  # To ignore existing remote files and overwrite.
  # existing_remote_files: ignore
  # Automatically replace files with their equivalent gzip compressed version
  # gzip_compression: true
  # Fail silently.  Useful for environments such as Heroku
  # fail_silently: true
  # Always upload. Useful if you want to overwrite specific remote assets regardless of their existence
  #  eg: Static files in public often reference non-fingerprinted application.css
  #  note: You will still need to expire them from the CDN's edge cache locations
  # always_upload: ['application.js', 'application.css', !ruby/regexp '/application-/\d{32}\.css/']
  # Ignored files. Useful if there are some files that are created dynamically on the server and you don't want to upload on deploy.
  # ignored_files: ['ignore_me.js', !ruby/regexp '/ignore_some/\d{32}\.css/']
  # Allow custom assets to be cacheable. Note: The base filename will be matched
  # If you have an asset with name "app.0b1a4cd3.js", only "app.0b1a4cd3" will need to be matched
  # cache_asset_regexps: ['cache_me.js', !ruby/regexp '/cache_some\.\d{8}\.css/']

development:
  <<: *defaults

test:
  <<: *defaults

production:
  <<: *defaults

Available Configuration Options

Most AssetSync configuration can be modified directly using environment variables with the Built-in initializer. e.g.

AssetSync.config.fog_provider == ENV['FOG_PROVIDER']

Simply upcase the Ruby attribute names to get the equivalent environment variable to set. The only exception to that rule is the internal AssetSync config variables, which must be prefixed with ASSET_SYNC_, e.g.

AssetSync.config.gzip_compression == ENV['ASSET_SYNC_GZIP_COMPRESSION']

AssetSync (optional)

  • existing_remote_files: ('keep', 'delete', 'ignore') what to do with previously precompiled files. default: 'keep'
  • gzip_compression: (true, false) when enabled, will automatically replace files that have a gzip compressed equivalent with the compressed version. default: 'false'
  • manifest: (true, false) when enabled, will use the manifest.yml generated by Rails to get the list of local files to upload. experimental. default: 'false'
  • include_manifest: (true, false) when enabled, will upload the manifest.yml generated by Rails. default: 'false'
  • concurrent_uploads: (true, false) when enabled, will upload the files in different Threads, this greatly improves the upload speed. default: 'false'
  • concurrent_uploads_max_threads: when concurrent_uploads is enabled, this determines the number of threads that will be created. default: 10
  • remote_file_list_cache_file_path: if present, use this path to cache remote file list to skip scanning remote default: nil
  • enabled: (true, false) when false, will disable asset sync. default: 'true' (enabled)
  • ignored_files: an array of files to ignore e.g. ['ignore_me.js', %r(ignore_some/\d{32}\.css)] Useful if there are some files that are created dynamically on the server and you don't want to upload on deploy default: []
  • cache_asset_regexps: an array of files to add cache headers e.g. ['cache_me.js', %r(cache_some\.\d{8}\.css)] Useful if there are some files that are added to sprockets assets list and need to be set as 'Cacheable' on uploaded server. Only rails compiled regexp is matched internally default: []

Config Method add_local_file_paths

Adding local files by providing a block:

AssetSync.configure do |config|
  # The block should return an array of file paths
  config.add_local_file_paths do
    # Any code that returns paths of local asset files to be uploaded
    # Like Webpacker
    public_root = Rails.root.join("public")
    Dir.chdir(public_root) do
      packs_dir = Webpacker.config.public_output_path.relative_path_from(public_root)
      Dir[File.join(packs_dir, '/**/**')]
    end
  end
end

The blocks are run when local files are being scanned and uploaded

Config Method file_ext_to_mime_type_overrides

It has been reported that mime-types 3.x returns application/ecmascript instead of application/javascript. Such a change of MIME type might cause some CDNs to disable asset compression, so this gem defines a default override mapping the js file extension to application/javascript.

To customize the overrides:

AssetSync.configure do |config|
  # Clear the default overrides
  config.file_ext_to_mime_type_overrides.clear

  # Add/Edit overrides
  # Will call `#to_s` for inputs
  config.file_ext_to_mime_type_overrides.add(:js, :"application/x-javascript")
end


Fog (Required)

  • fog_provider: your storage provider AWS (S3) or Rackspace (Cloud Files) or Google (Google Storage) or AzureRM (Azure Blob) or Backblaze (Backblaze B2)
  • fog_directory: your bucket name

Fog (Optional)

  • fog_region: the region your storage bucket is in e.g. eu-west-1 (AWS), ord (Rackspace), japanwest (Azure Blob)
  • fog_path_style: To use buckets with dot in names, check fog/fog#2381 (comment)

AWS

  • aws_access_key_id: your Amazon S3 access key
  • aws_secret_access_key: your Amazon S3 access secret
  • aws_acl: set canned ACL of uploaded object, will override fog_public if set

Rackspace

  • rackspace_username: your Rackspace username
  • rackspace_api_key: your Rackspace API Key.

Google Storage

When using the JSON API

  • google_project: your Google Cloud Project name where the Google Cloud Storage bucket resides
  • google_json_key_location: path to the location of the service account key. The service account key must be a JSON type key

When using the S3 API

  • google_storage_access_key_id: your Google Storage access key
  • google_storage_secret_access_key: your Google Storage access secret

Azure Blob

  • azure_storage_account_name: your Azure Blob access key
  • azure_storage_access_key: your Azure Blob access secret

Backblaze B2

  • b2_key_id: Your Backblaze B2 key ID
  • b2_key_token: Your Backblaze B2 key token
  • b2_bucket_id: Your Backblaze B2 bucket ID

Rackspace (Optional)

  • rackspace_auth_url: Rackspace auth URL, for Rackspace London use: https://lon.identity.api.rackspacecloud.com/v2.0
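
As an illustration of the non-AWS options above, here is a hedged sketch of a Backblaze B2 initializer; the option names come from the lists above and the environment variable names follow the upcasing convention described earlier, but they are otherwise assumptions:

AssetSync.configure do |config|
  config.fog_provider  = 'Backblaze'
  config.fog_directory = ENV['FOG_DIRECTORY']
  config.b2_key_id     = ENV['B2_KEY_ID']
  config.b2_key_token  = ENV['B2_KEY_TOKEN']
  config.b2_bucket_id  = ENV['B2_BUCKET_ID']
end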

Amazon S3 Multiple Region Support

If you are using anything other than the US buckets with S3 then you'll want to set the region. For example with an EU bucket you could set the following environment variable.

heroku config:add FOG_REGION=eu-west-1

Or via a custom initializer

AssetSync.configure do |config|
  # ...
  config.fog_region = 'eu-west-1'
end

Or via YAML

production:
  # ...
  fog_region: 'eu-west-1'

Amazon (AWS) IAM Users

Amazon has switched to the more secure IAM User security policy model. When generating a user & policy for asset_sync you must ensure the policy has the following permissions, or you'll see the error:

Expected(200) <=> Actual(403 Forbidden)

IAM User Policy example with the minimum required permissions (replace bucket_name with your bucket):

{
  "Statement": [
    {
      "Action": "s3:ListBucket",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket_name"
    },
    {
      "Action": "s3:PutObject*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket_name/*"
    }
  ]
}

If you want to use IAM roles you must set config.aws_iam_roles = true in your initializers.

AssetSync.configure do |config|
  # ...
  config.aws_iam_roles = true
end

Automatic gzip compression

With the gzip_compression option enabled, when uploading your assets, if a file has a gzip-compressed equivalent we will upload that compressed version in its place and set the correct headers for S3 to serve it. For example, if you have a file master.css and it was compressed to master.css.gz, we will upload the .gz file to S3 in place of the uncompressed file.

If the compressed file is actually larger than the uncompressed file we will ignore this rule and upload the standard uncompressed version.

Fail Silently

With the fail_silently option enabled, when running rake assets:precompile AssetSync will never throw an error due to missing configuration variables.

With the new user_env_compile feature of Heroku (see above), this is no longer required or recommended. It was added for the following reasons:

With Rails 3.1 on the Heroku cedar stack, the deployment process automatically runs rake assets:precompile. If you are using ENV-variable-style configuration, then due to the way Heroku compiles slugs the environment is not available at that point and asset_sync raises an error. This causes Heroku to install the rails31_enable_runtime_asset_compilation plugin, which is not necessary when using asset_sync and also massively slows down the first incoming requests to your app.

To prevent this part of the deploy from failing (asset_sync raising a config error) but carry on as normal, set fail_silently to true in your configuration and be sure to run heroku run rake assets:precompile after deploy.
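
For reference, the setting in a custom initializer looks like this (the same option appears commented out in the generated initializer above):

AssetSync.configure do |config|
  # ...
  config.fail_silently = true
end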

Rake Task

A rake task is included within the asset_sync gem to perform the sync:

  namespace :assets do
    desc "Synchronize assets to S3"
    task :sync => :environment do
      AssetSync.sync
    end
  end

If AssetSync.config.run_on_precompile is true (default), then assets will be uploaded to S3 automatically after the assets:precompile rake task is invoked:

  if Rake::Task.task_defined?("assets:precompile:nondigest")
    Rake::Task["assets:precompile:nondigest"].enhance do
      Rake::Task["assets:sync"].invoke if defined?(AssetSync) && AssetSync.config.run_on_precompile
    end
  else
    Rake::Task["assets:precompile"].enhance do
      Rake::Task["assets:sync"].invoke if defined?(AssetSync) && AssetSync.config.run_on_precompile
    end
  end

You can disable this behavior by setting AssetSync.config.run_on_precompile = false.

Sinatra/Rack Support

You can use the gem with any Rack application, but you must specify two additional options; prefix and public_path.

AssetSync.configure do |config|
  config.fog_provider = 'AWS'
  config.fog_directory = ENV['FOG_DIRECTORY']
  config.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
  config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
  config.prefix = 'assets'
  # Can be a `Pathname` or `String`
  # Will be converted into an `Pathname`
  # If relative, will be converted into an absolute path
  # via `::Rails.root` or `::Dir.pwd`
  config.public_path = Pathname('./public')
end

Then manually call AssetSync.sync at the end of your asset precompilation task.

namespace :assets do
  desc 'Precompile assets'
  task :precompile do
    target = Pathname('./public/assets')
    manifest = Sprockets::Manifest.new(sprockets, './public/assets/manifest.json')

    sprockets.each_logical_path do |logical_path|
      if (!File.extname(logical_path).in?(['.js', '.css']) || logical_path =~ /application\.(css|js)$/) && asset = sprockets.find_asset(logical_path)
        filename = target.join(logical_path)
        FileUtils.mkpath(filename.dirname)
        puts "Write asset: #{filename}"
        asset.write_to(filename)
        manifest.compile(logical_path)
      end
    end

    AssetSync.sync
  end
end

Webpacker (> 2.0) support

  1. Add webpacker files and disable run_on_precompile:
AssetSync.configure do |config|
  # Disable automatic run on precompile in order to attach to webpacker rake task
  config.run_on_precompile = false
  # The block should return an array of file paths
  config.add_local_file_paths do
    # Support webpacker assets
    public_root = Rails.root.join("public")
    Dir.chdir(public_root) do
      packs_dir = Webpacker.config.public_output_path.relative_path_from(public_root)
      Dir[File.join(packs_dir, '/**/**')]
    end
  end
end
  2. Add an asset_sync.rake in your lib/tasks directory that enhances the correct task; otherwise asset_sync runs before webpacker:compile does:
if defined?(AssetSync)
  Rake::Task['webpacker:compile'].enhance do
    Rake::Task["assets:sync"].invoke
  end
end

Caveat

By adding local files outside the normal Rails assets directory, the uploading part works; however, checking whether an asset was previously uploaded does not, because asset_sync only fetches the files in the assets directory of the remote bucket. This means extra time is spent uploading the same assets again on every precompilation.

Running the specs

Make sure you have a .env file with these details:

# for AWS provider
AWS_ACCESS_KEY_ID=<yourkeyid>
AWS_SECRET_ACCESS_KEY=<yoursecretkey>
FOG_DIRECTORY=<yourbucket>
FOG_REGION=<youbucketregion>

# for AzureRM provider
AZURE_STORAGE_ACCOUNT_NAME=<youraccountname>
AZURE_STORAGE_ACCESS_KEY=<youraccesskey>
FOG_DIRECTORY=<yourcontainer>
FOG_REGION=<yourcontainerregion>

Make sure the bucket has read/write permissions. Then, to run the tests:

foreman run rake

Todo

  1. Add some before and after filters for deleting and uploading
  2. Support more cloud storage providers
  3. Better test coverage
  4. Add rake tasks to clean old assets from a bucket

Credits

Inspired by:

License

MIT License. Copyright 2011-2013 Rumble Labs Ltd. rumblelabs.com


Author: AssetSync
Source code: https://github.com/AssetSync/asset_sync
License:

#ruby   #ruby-on-rails 

Nat Grady

Simple Data Persistence for Your Electron App Or Module

electron-store

Simple data persistence for your Electron app or module - Save and load user preferences, app state, cache, etc

Electron doesn't have a built-in way to persist user preferences and other data. This module handles that for you, so you can focus on building your app. The data is saved in a JSON file named config.json in app.getPath('userData').

You can use this module directly in both the main and renderer process. For use in the renderer process only, you need to call Store.initRenderer() in the main process, or create a new Store instance (new Store()) in the main process.

Install

$ npm install electron-store

Requires Electron 11 or later.

Usage

const Store = require('electron-store');

const store = new Store();

store.set('unicorn', '🦄');
console.log(store.get('unicorn'));
//=> '🦄'

// Use dot-notation to access nested properties
store.set('foo.bar', true);
console.log(store.get('foo'));
//=> {bar: true}

store.delete('unicorn');
console.log(store.get('unicorn'));
//=> undefined

API

Changes are written to disk atomically, so if the process crashes during a write, it will not corrupt the existing config.

Store(options?)

Returns a new instance.

options

Type: object

defaults

Type: object

Default values for the store items.

Note: The values in defaults will overwrite the default key in the schema option.

schema

type: object

JSON Schema to validate your config data.

Under the hood, the JSON Schema validator ajv is used to validate your config. We use JSON Schema draft-07 and support all validation keywords and formats.

You should define your schema as an object where each key is the name of your data's property and each value is a JSON schema used to validate that property. See more here.

Example:

const Store = require('electron-store');

const schema = {
	foo: {
		type: 'number',
		maximum: 100,
		minimum: 1,
		default: 50
	},
	bar: {
		type: 'string',
		format: 'url'
	}
};

const store = new Store({schema});

console.log(store.get('foo'));
//=> 50

store.set('foo', '1');
// [Error: Config schema violation: `foo` should be number]

Note: The default value will be overwritten by the defaults option if set.

migrations

Type: object

You can use migrations to perform operations to the store whenever a version is upgraded.

The migrations object should consist of a key-value pair of 'version': handler. The version can also be a semver range.

Example:

const Store = require('electron-store');

const store = new Store({
	migrations: {
		'0.0.1': store => {
			store.set('debugPhase', true);
		},
		'1.0.0': store => {
			store.delete('debugPhase');
			store.set('phase', '1.0.0');
		},
		'1.0.2': store => {
			store.set('phase', '1.0.2');
		},
		'>=2.0.0': store => {
			store.set('phase', '>=2.0.0');
		}
	}
});

name

Type: string
Default: 'config'

Name of the storage file (without extension).

This is useful if you want multiple storage files for your app. Or if you're making a reusable Electron module that persists some data, in which case you should not use the name config.

cwd

Type: string
Default: app.getPath('userData')

Storage file location. Don't specify this unless absolutely necessary! By default, it will pick the optimal location by adhering to system conventions. You are very likely to get this wrong and annoy users.

If a relative path, it's relative to the default cwd. For example, {cwd: 'unicorn'} would result in a storage file in ~/Library/Application Support/App Name/unicorn.

encryptionKey

Type: string | Buffer | TypedArray | DataView
Default: undefined

Note that this is not intended for security purposes, since the encryption key would be easily found inside a plain-text Node.js app.

Its main use is for obscurity. If a user looks through the config directory and finds the config file, since it's just a JSON file, they may be tempted to modify it. By providing an encryption key, the file will be obfuscated, which should hopefully deter any users from doing so.

When specified, the store will be encrypted using the aes-256-cbc encryption algorithm.

fileExtension

Type: string
Default: 'json'

Extension of the config file.

You would usually not need this, but could be useful if you want to interact with a file with a custom file extension that can be associated with your app. These might be simple save/export/preference files that are intended to be shareable or saved outside of the app.

clearInvalidConfig

Type: boolean
Default: false

The config is cleared if reading the config file causes a SyntaxError. This is a good behavior for unimportant data, as the config file is not intended to be hand-edited, so it usually means the config is corrupt and there's nothing the user can do about it anyway. However, if you let the user edit the config file directly, mistakes might happen and it could be more useful to throw an error when the config is invalid instead of clearing.

serialize

Type: Function
Default: value => JSON.stringify(value, null, '\t')

Function to serialize the config object to a UTF-8 string when writing the config file.

You would usually not need this, but it could be useful if you want to use a format other than JSON.

deserialize

Type: Function
Default: JSON.parse

Function to deserialize the config object from a UTF-8 string when reading the config file.

You would usually not need this, but it could be useful if you want to use a format other than JSON.

accessPropertiesByDotNotation

Type: boolean
Default: true

Accessing nested properties by dot notation. For example:

const Store = require('electron-store');

const store = new Store();

store.set({
	foo: {
		bar: {
			foobar: '🦄'
		}
	}
});

console.log(store.get('foo.bar.foobar'));
//=> '🦄'

Alternatively, you can set this option to false so the whole string would be treated as one key.

const store = new Store({accessPropertiesByDotNotation: false});

store.set({
	'foo.bar.foobar': '🦄'
});

console.log(store.get('foo.bar.foobar'));
//=> '🦄'

watch

Type: boolean
Default: false

Watch for any changes in the config file and call the callback for onDidChange or onDidAnyChange if set. This is useful if there are multiple processes changing the same config file, for example, if you want changes done in the main process to be reflected in a renderer process.

Instance

You can use dot-notation in a key to access nested properties.

The instance is iterable so you can use it directly in a for…of loop.

.set(key, value)

Set an item.

The value must be JSON serializable. Trying to set the type undefined, function, or symbol will result in a TypeError.

.set(object)

Set multiple items at once.

.get(key, defaultValue?)

Get an item or defaultValue if the item does not exist.

.reset(...keys)

Reset items to their default values, as defined by the defaults or schema option.

Use .clear() to reset all items.

.has(key)

Check if an item exists.

.delete(key)

Delete an item.

.clear()

Delete all items.

This resets known items to their default values, if defined by the defaults or schema option.

.onDidChange(key, callback)

callback: (newValue, oldValue) => {}

Watches the given key, calling callback on any changes.

When a key is first set oldValue will be undefined, and when a key is deleted newValue will be undefined.

Returns a function which you can use to unsubscribe:

const unsubscribe = store.onDidChange(key, callback);

unsubscribe();

.onDidAnyChange(callback)

callback: (newValue, oldValue) => {}

Watches the whole config object, calling callback on any changes.

oldValue and newValue will be the config object before and after the change, respectively. You must compare oldValue to newValue to find out what changed.

Returns a function which you can use to unsubscribe:

const unsubscribe = store.onDidAnyChange(callback);

unsubscribe();

.size

Get the item count.

.store

Get all the data as an object or replace the current data with an object:

const Store = require('electron-store');

const store = new Store();

store.store = {
	hello: 'world'
};

.path

Get the path to the storage file.

.openInEditor()

Open the storage file in the user's editor.

initRenderer()

Initializer to set up the required ipc communication channels for the module when a Store instance is not created in the main process and you are creating a Store instance in the Electron renderer process only.

In the main process:

const Store = require('electron-store');

Store.initRenderer();

And in the renderer process:

const Store = require('electron-store');

const store = new Store();

store.set('unicorn', '🦄');
console.log(store.get('unicorn'));
//=> '🦄'

FAQ

Advantages over window.localStorage

Can I use YAML or another serialization format?

The serialize and deserialize options can be used to customize the format of the config file, as long as the representation is compatible with utf8 encoding.

Example using YAML:

const Store = require('electron-store');
const yaml = require('js-yaml');

const store = new Store({
	fileExtension: 'yaml',
	serialize: yaml.safeDump,
	deserialize: yaml.safeLoad
});

How do I get store values in the renderer process when my store was initialized in the main process?

The store is not a singleton, so you will need to either initialize the store in a file that is imported in both the main and renderer process, or you have to pass the values back and forth as messages. Electron provides a handy invoke/handle API that works well for accessing these values.

// In the main process:
ipcMain.handle('getStoreValue', (event, key) => {
	return store.get(key);
});

// In the renderer process:
const foo = await ipcRenderer.invoke('getStoreValue', 'foo');

Can I use it for large amounts of data?

This package is not a database. It simply uses a JSON file that is read/written on every change. Prefer using it for smaller amounts of data like user settings, value caching, state, etc.

If you need to store large blobs of data, I recommend saving it to disk and to use this package to store the path to the file instead.

Related

Author: Sindresorhus
Source Code: https://github.com/sindresorhus/electron-store 
License: MIT license

#electron #config 

Rocio O'Keefe

Coder0211: Config Files for My GitHub Profile

Coder0211

This is a package with functions and widgets to make app development faster and more convenient; it is currently developed by a single developer.

Support

1. Base screen

[BaseScreen] is a base class for all screens in the app.

  • It provides some useful methods for screens.
  • Every screen should extend this class.
  • Example:
class MyHomePage extends BaseScreen {
     const MyHomePage({Key? key}) : super(key: key);
     @override
     State<MyHomePage> createState() => _MyHomePageState();
}

[BaseScreenState] is a base class for all screen states in the app.

  • It provides the [store].
  • AutomaticKeepAliveClientMixin is used to keep the screen alive when the user navigates to another screen.
  • It also provides the [initState] method to initialize the [store] instance.
  • It also provides the [build] method to build the screen.
  • Every screen state should extend this class.
  • [BaseScreenState] is a stateful widget.
  • Example:
class _MyHomePageState extends BaseScreenState<MyHomePage, MainStore> {
     @override
     Widget build(BuildContext context) {
       super.build(context);
       return Scaffold(
         appBar: AppBar(),
         body: Container()
      );
    }
 }

2. Store

State managements

/// Clean before updating:
///    flutter packages pub run build_runner watch --delete-conflicting-outputs

part 'example_store.g.dart';

class ExampleStore = _ExampleStore with _$ExampleStore;

abstract class _ExampleStore with Store, BaseStoreMixin {
  @override
  void onInit() {}

  @override
  void onDispose() {}

  @override
  Future<void> onWidgetBuildDone() async {}

  @override
  void resetValue() {}

  //... Some values and actions
}

3. BaseAPI

[BaseDataAPI] - Base Class for handling API

[fetchData] is fetch data from API

  • Param [url] is url of API without domain
  • Param [params] is params of API with key and value
  • Param [body] is body of API with key and value
  • Param [headers] is headers of API with key and value
  • Return [BaseDataAPI] is object of BaseDataAPI with object and apiStatus
 return BaseDataAPI(object: response.data, apiStatus:ApiStatus.SUCCEEDED);
  • Example:
@action
  Future<void> getData() async {
    BaseAPI _api = BaseAPI();
    BaseAPI.domain = 'https://example.com';
    await _api.fetchData('/data', method: ApiMethod.GET).then((value) {
      switch (value.apiStatus) {
        case ApiStatus.SUCCEEDED:
          printLogSusscess('SUCCEEDED');
          // Handle success response here
          break;
        case ApiStatus.INTERNET_UNAVAILABLE:
          printLogYellow('INTERNET_UNAVAILABLE');
          BaseUtils.showToast('INTERNET UNAVAILABLE', bgColor: Colors.red);
          break;
        default:
          printLogError('FAILED');
          // Handle failed response here
          break;
      }
    });
  }

4. BaseSharedPreferences

[BaseSharedPreferences] is a base class for all shared preferences

    String value = '';
    if(await BaseSharedPreferences.containKey('KEY')){
        value = await BaseSharedPreferences.getStringValue('KEY');
    }

5. BaseNavigation

[push] Push a route to the navigator

  • Param [context] The context to push the route to
  • Param [routeName] The routeName to push
  • Param [clearStack] Clear the stack before pushing the
  • Example:
BaseNavigation.push(context, routeName: '/', clearStack: true);

[getArgs] Get the arguments from the current route

  • Param [context] The context to get the arguments
  • Param [key] The key to get the arguments
  • Example:
BaseNavigation.getArgs(context, key: 'id');

6. TextEX

  • USING : <String>.<Name()>
  • Example:
'Hello'.d1()
or
S.current.splash_screen_title.d1(color: AppColors.whiteText)

7. DoubleEX

  • USING : <Double>.<Name()>
  • Example:
10.0.r(context)

Installing

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add coder0211

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  coder0211: ^0.0.57

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:coder0211/coder0211.dart';

example/example.dart

import 'package:coder0211/coder0211.dart';
import 'package:flutter/material.dart';
import 'package:provider/provider.dart';
import 'package:flutter_mobx/flutter_mobx.dart';
import 'package:mobx/mobx.dart';

part 'example_store.g.dart';

void main() {
  runApp(const MyApp());
}

/// Clean before updating:
///    flutter packages pub run build_runner watch --delete-conflicting-outputs
class ExampleStore = _ExampleStore with _$ExampleStore;

abstract class _ExampleStore with Store, BaseStoreMixin {
  @override
  void onInit() {}

  @override
  void onDispose() {}

  @override
  Future<void> onWidgetBuildDone() async {}

  @override
  void resetValue() {}

  @observable
  int _ic = 0;

  int get ic => _ic;

  set ic(int ic) {
    _ic = ic;
  }

  List<int> lists = [];

  @observable
  ObservableList<int> listsInt = ObservableList<int>();

  @action
  void ice() {
    ic++;
  }

  @action
  Future<void> getData() async {
    BaseAPI _api = BaseAPI();
    BaseAPI.domain = 'https://example.com';
    await _api.fetchData('/data', method: ApiMethod.GET).then((value) {
      switch (value.apiStatus) {
        case ApiStatus.SUCCEEDED:
          printLogSusscess('SUCCEEDED');
          // Handle success response here
          break;
        case ApiStatus.INTERNET_UNAVAILABLE:
          printLogYellow('INTERNET_UNAVAILABLE');
          BaseUtils.showToast('INTERNET UNAVAILABLE', bgColor: Colors.red);
          break;
        default:
          printLogError('FAILED');
          // Handle failed response here
          break;
      }
    });
  }
}

class MyApp extends StatelessWidget {
  const MyApp({Key? key}) : super(key: key);
  @override
  Widget build(BuildContext context) {
    return MultiProvider(
      providers: [Provider<ExampleStore>(create: (context) => ExampleStore())],
      child: const MaterialApp(
        home: Example(),
      ),
    );
  }
}

class Example extends BaseScreen {
  const Example({Key? key}) : super(key: key);

  @override
  State<Example> createState() => _ExampleState();
}

class _ExampleState extends BaseScreenState<Example, ExampleStore> {
  @override
  Widget build(BuildContext context) {
    super.build(context);
    return Scaffold(
      appBar: AppBar(),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Container(
              width: 1.0.w(context),
            ),
            Observer(builder: (_) {
              return Text(
                '${store.ic}',
                style: Theme.of(context).textTheme.headline4,
              );
            }),
            'Hello'.d1()
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: () {
          store.ice();
        },
        tooltip: 'Increment',
        child: const Icon(Icons.add),
      ),
    );
  }
}

Author: Coder0211
Source Code: https://github.com/coder0211/coder0211 
License: MIT license

#flutter #dart #config 


Yaml-config: Yaml Configuration Support for CakePHP 3

Yaml for CakePHP 3

Implements most of the YAML 1.2 specification for CakePHP 3, using the Symfony Yaml Component to parse config files.

Requirements

The 3.0 branch has the following requirements:

  • CakePHP 3.0.0 or greater.

Installation

  • Install the plugin with Composer from your CakePHP project's ROOT directory (where the composer.json file is located):
php composer.phar require chobo1210/yaml "dev-master"

OR add these lines to your composer.json:

"require": {
  "chobo1210/Yaml": "dev-master"
}

And run php composer.phar update

Then add these lines to your config/bootstrap.php:

use Yaml\Configure\Engine\YamlConfig;

try {
  Configure::config('yaml', new YamlConfig());
  Configure::load('your_file', 'yaml');
} catch (\Exception $e) {
  die('Unable to load config/your_file.yml.');
}

Your file must be in the config/ directory. Replace your_file with the name of your YAML file, without the extension.
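
Once the file is loaded, its values are read through CakePHP's Configure class like any other configuration. A minimal sketch (the key names assume the example YAML shown further below):

use Cake\Core\Configure;

// Top-level key from the YAML file, e.g. `debug: true`
$debug = Configure::read('debug');

// Nested keys use dot notation, e.g. the Datasources block from the example below
$defaultDatasource = Configure::read('Datasources.default');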

Usage

If you want to use the plugin's shell to convert your current config, add this line to config/bootstrap.php:

Plugin::load('Yaml');

Then, in your console or terminal, type:

bin/cake convert

Optionally, you can pass the name of your YAML file (without the extension); the default is app.yml:

bin/cake convert your_file_name

The file will be created in your config/ directory.

Example

debug: true

Asset:
  timestamp: true

# Default Database Connection informations
default_configuration: &default_configuration
  className: Cake\Database\Connection
  driver: Cake\Database\Driver\Mysql
  persistent: false
  host: localhost
  login: root
  password: secret
  prefix: false
  encoding: utf8
  timezone: +01:00
  cacheMetadata: true
  quoteIdentifiers: false  



# Datasources
Datasources:
  # Default datasource
  default: 
    <<: *default_configuration
    database: my_database
  # PHPUnit tests datasource
  test:
    <<: *default_configuration
    database: my_database_test



# Email Configuration
EmailTransport:
  default:
      className: Mail
      host: localhost
      port: 1025
      timeout: 30
      # username: user
      # password: secret
      client: null
      tls: null
Email:
  default:
      transport: default
      from: contact@localhost.dz
      charset: utf-8
      headerCharset: utf-8   

Author: Guemidiborhane
Source Code: https://github.com/guemidiborhane/yaml-config 
License: 

#php #yaml #config #cakephp 

Stack Config Plugin for Serverless

A serverless plugin to manage configurations for a micro-service stack.

Features

outputs - Downloads this service's outputs to a file at /PROJECT_ROOT/.serverless/stack-outputs.json and updates the config file in S3.

outputs download - Downloads the existing, combined stack config file from S3.

Install

npm install --save serverless-plugin-stack-config

Usage

Add the plugin to your serverless.yml like the following:

NOTE: The script and backup properties are optional.

serverless.yml:

provider:
...

plugins:
  - serverless-plugin-stack-config

custom:
  stack-config:
    script: scripts/transform.js
    backup:
      s3:
        key: config/stack-config.json
        bucket: ${self:service}-${opt:env}
        shallow: true

functions:
...
resources:
...

Configure the Stack Output

You can now supply a script to transform the stack outputs before they are saved to a file.

For example, you could rename outputs or create new ones from the values received.

// scripts/transform.js

module.exports = async function transform(serverless, stackOutputs) {
  // rename
  stackOutputs.TrackingServiceEndpoint = stackOutputs.ServiceEndpoint;

  // delete
  delete stackOutputs.ServerlessDeploymentBucketName;
  delete stackOutputs.ServiceEndpoint;

  // return updated stack
  return stackOutputs;
}

Example shell commands:

serverless outputs --stage dev --region eu-west-1

serverless outputs download --stage dev --region eu-west-1
# with save directory location
serverless outputs download --stage dev --region eu-west-1 --path .

Limitations

If you deploy several applications at the same time, some data loss could occur if multiple stacks update the config file in S3 concurrently.

Author: Rawphp
Source Code: https://github.com/rawphp/serverless-plugin-stack-config 
License: MIT license

#serverless #stack #config 

Dot: My Dotfiles and Setup Scripts

Gib's Dotfiles

Contains everything I use to set up a new machine (except ssh and gpg keys).

How to set up a new machine

N.B. Until I add better control over ordering, it is necessary to clone the wrk_dotfile_dir before running up for the first time.

curl --create-dirs -Lo ~/bin/up https://github.com/gibfahn/up-rs/releases/latest/download/up-darwin
chmod +x ~/bin/up
~/bin/up run -bf gibfahn/dot

Then see manual.md.

Manual setup

If you just want to update your dotfile symlinks, you can run:

./link

Dotfiles are pretty personal, so feel free to adapt this repo as you wish. If you make a change feel free to send a Pull Request, you might fix something for me!

Adding a new file to your dotfiles

As long as it goes in $HOME, just put it in the same relative directory inside ./dotfiles/ (so ~/.bashrc becomes dotfiles/.bashrc). If you rerun ./link, it should get symlinked into the right place.
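
For example, tracking a new file might look something like this (the path is purely illustrative):

# Hypothetical example: start tracking ~/.config/git/config
mkdir -p dotfiles/.config/git
cp ~/.config/git/config dotfiles/.config/git/config
./link   # should now symlink ~/.config/git/config into the repo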

Author: Gibfahn
Source Code: https://github.com/gibfahn/dot 
License: 

#node #nodejs #config #macos #linux #dotfiles 

Serverless Import Config Plugin

Split your serverless.yaml config file into smaller modules and import them.

By using this plugin you can build your serverless config from smaller parts separated by functionalities. Imported config is merged, so all keys are supported and lists are concatenated (without duplicates).

It works by importing YAML files by path or node module, which is especially useful in multi-package repositories.
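
As a rough illustration of the merge behaviour (the file names and keys below are made up), importing a fragment merges its keys into the main config and concatenates its lists:

# orders/serverless.yml -- an imported fragment (hypothetical)
plugins:
  - serverless-offline
custom:
  ordersTableName: orders

# serverless.yml -- the main config
plugins:
  - serverless-import-config-plugin
custom:
  import:
    - ./orders/serverless.yml

# Effective config after import: custom.ordersTableName is merged in, and the
# plugins list contains both serverless-import-config-plugin and serverless-offline
# (duplicates would be dropped).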

Installation

Install with npm:

npm install --save-dev serverless-import-config-plugin

And then add the plugin to your serverless.yml file:

plugins:
  - serverless-import-config-plugin

Usage

Specify config files to import in custom.import list:

custom:
  import:
    - ./path/to/serverless.yml # path to YAML file with serverless config
    - ./path/to/dir # directory where serverless.yml can be found
    - module-name # node module where serverless.yml can be found
    - '@myproject/users-api' # monorepo package with serverless.yml config file
    - module-name/custom-serverless.yml # path to custom config file of a node module

custom.import can also be a string when only one file needs to be imported:

custom:
  import: '@myproject/users-api'

Relative paths

All function handler paths are automatically prefixed by the imported config directory.

functions:
  postOrder:
    handler: functions/postOrder.handler # relative to the imported config

For other fields you need to use the ${dirname} variable manually. ${dirname} points to the directory of the imported config file.

custom:
  webpack:
    webpackConfig: ${dirname}/webpack.config.js

Customizable boilerplates

In case you want to customize the imported config in a more dynamic way, provide it as a JavaScript file (serverless.js).

module.exports = ({ name, schema }) => ({
  provider: {
    iamRoleStatements: [
      // ...
    ],
  },
  // ...
})

You can pass arguments to the imported file using module and inputs fields:

custom:
  import:
    - module: '@myproject/aws-dynamodb' # can be also a path to js file
      inputs:
        name: custom-table
        schema:
          # ...

Author: KrysKruk
Source Code: https://github.com/KrysKruk/serverless-import-config-plugin 
License: MIT license

#serverless #plugin #config 

RK-grpc: gRPC Related Entry

rk-grpc  

Middlewares & bootstrapper designed for gRPC and grpc-gateway

This belongs to the rk-boot family. We suggest using this lib via rk-boot.

image

Architecture

image

Supported bootstrap

| Bootstrap | Description |
| --- | --- |
| YAML based | Start gRPC and grpc-gateway microservice from YAML |
| Code based | Start gRPC and grpc-gateway microservice from code |

Supported instances

All instances can be configured via YAML or code.

Users can enable any of them as needed! No mandatory binding!

| Instance | Description |
| --- | --- |
| gRPC | gRPC defined with protocol buffer. |
| gRPC proxy | Proxy gRPC request to another gRPC server. |
| grpc-gateway | grpc-gateway service with same port. |
| grpc-gateway options | Well defined grpc-gateway options. |
| Config | Configure spf13/viper as config instance and reference it from YAML |
| Logger | Configure uber-go/zap logger configuration and reference it from YAML |
| Event | Configure logging of RPC with rk-query and reference it from YAML |
| Cert | Fetch TLS/SSL certificates from remote datastore like ETCD and start microservice. |
| Prometheus | Start prometheus client at client side and push metrics to pushgateway as needed. |
| Swagger | Builtin swagger UI handler. |
| Docs | Builtin RapiDoc instance which can be used to replace swagger and RK TV. |
| CommonService | List of common APIs. |
| StaticFileHandler | A web UI that lists files which can be downloaded from the server; currently supports local and embed.FS sources. |
| PProf | PProf web UI. |

Supported middlewares

All middlewares can be configured via YAML or code.

Users can enable any of them as needed! No mandatory binding!

| Middleware | Description |
| --- | --- |
| Metrics | Collect RPC metrics and export to prometheus client. |
| Log | Log every RPC request as an event with rk-query. |
| Trace | Collect RPC traces and export them to stdout, file or jaeger with open-telemetry/opentelemetry-go. |
| Panic | Recover from panic for RPC requests and log it. |
| Meta | Send microservice metadata as header to client. |
| Auth | Support [Basic Auth] and [API Key] authorization types. |
| RateLimit | Limiting RPC rate globally or per path. |
| Timeout | Timing out request by configuration. |
| CORS | Server side CORS validation. |
| JWT | Server side JWT validation. |
| Secure | Server side secure validation. |
| CSRF | Server side CSRF validation. |

Installation

go get github.com/rookie-ninja/rk-grpc/v2

Quick Start

In the example below, we will start a microservice with the following functionality and middlewares enabled via YAML.

  • gRPC and grpc-gateway server
  • gRPC server reflection
  • Swagger UI
  • CommonService
  • Docs
  • Prometheus Metrics (middleware)
  • Logging (middleware)
  • Meta (middleware)

Please refer to the example at example/boot/simple.

1. Prepare .proto files

  • api/v1/greeter.proto
syntax = "proto3";

package api.v1;

option go_package = "api/v1/greeter";

service Greeter {
  rpc Greeter (GreeterRequest) returns (GreeterResponse) {}
}

message GreeterRequest {
  bytes msg = 1;
}

message GreeterResponse {}
  • api/v1/gw_mapping.yaml
type: google.api.Service
config_version: 3

# Please refer google.api.Http in https://github.com/googleapis/googleapis/blob/master/google/api/http.proto file for details.
http:
  rules:
    - selector: api.v1.Greeter.Greeter
      get: /v1/greeter
  • buf.yaml
version: v1beta1
name: github.com/rk-dev/rk-boot
build:
  roots:
    - api
  • buf.gen.yaml
version: v1beta1
plugins:
  # protoc-gen-go needs to be installed, generate go files based on proto files
  - name: go
    out: api/gen
    opt:
     - paths=source_relative
  # protoc-gen-go-grpc needs to be installed, generate grpc go files based on proto files
  - name: go-grpc
    out: api/gen
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false
  # protoc-gen-grpc-gateway needs to be installed, generate grpc-gateway go files based on proto files
  - name: grpc-gateway
    out: api/gen
    opt:
      - paths=source_relative
      - grpc_api_configuration=api/v1/gw_mapping.yaml
  # protoc-gen-openapiv2 needs to be installed, generate swagger config files based on proto files
  - name: openapiv2
    out: api/gen
    opt:
      - grpc_api_configuration=api/v1/gw_mapping.yaml

2. Generate .pb.go files with buf

$ buf generate --path api/v1
  • directory hierarchy
.
├── api
│   ├── gen
│   │   └── v1
│   │       ├── greeter.pb.go
│   │       ├── greeter.pb.gw.go
│   │       ├── greeter.swagger.json
│   │       └── greeter_grpc.pb.go
│   └── v1
│       ├── greeter.proto
│       └── gw_mapping.yaml
├── boot.yaml
├── buf.gen.yaml
├── buf.yaml
├── go.mod
├── go.sum
└── main.go

3. Create boot.yaml

---
grpc:
  - name: greeter                     # Required
    port: 8080                        # Required
    enabled: true                     # Required
    enableReflection: true            # Optional, default: false
    enableRkGwOption: true            # Optional, default: false
    commonService:
      enabled: true                   # Optional, default: false
    docs:
      enabled: true                   # Optional, default: false
    sw:
      enabled: true                   # Optional, default: false
    prom:
      enabled: true                   # Optional, default: false
    middleware:
      logging:
        enabled: true                 # Optional, default: false
      prom:
        enabled: true                 # Optional, default: false
      meta:
        enabled: true                 # Optional, default: false

4. Create main.go

// Copyright (c) 2021 rookie-ninja
//
// Use of this source code is governed by an Apache-style
// license that can be found in the LICENSE file.
package main

import (
  "context"
  "embed"
  _ "embed"
  "github.com/rookie-ninja/rk-entry/v2/entry"
  "github.com/rookie-ninja/rk-grpc/v2/boot"
  proto "github.com/rookie-ninja/rk-grpc/v2/example/boot/simple/api/gen/v1"
  "google.golang.org/grpc"
)

//go:embed boot.yaml
var boot []byte

//go:embed api/gen/v1
var docsFS embed.FS

//go:embed api/gen/v1
var staticFS embed.FS

func init() {
  rkentry.GlobalAppCtx.AddEmbedFS(rkentry.DocsEntryType, "greeter", &docsFS)
  rkentry.GlobalAppCtx.AddEmbedFS(rkentry.SWEntryType, "greeter", &docsFS)
  rkentry.GlobalAppCtx.AddEmbedFS(rkentry.StaticFileHandlerEntryType, "greeter", &staticFS)
}

func main() {
  // Bootstrap basic entries from boot config.
  rkentry.BootstrapPreloadEntryYAML(boot)

  // Bootstrap grpc entry from boot config
  res := rkgrpc.RegisterGrpcEntryYAML(boot)

  // Get GrpcEntry
  grpcEntry := res["greeter"].(*rkgrpc.GrpcEntry)
  // Register gRPC server
  grpcEntry.AddRegFuncGrpc(func(server *grpc.Server) {
    proto.RegisterGreeterServer(server, &GreeterServer{})
  })
  // Register grpc-gateway func
  grpcEntry.AddRegFuncGw(proto.RegisterGreeterHandlerFromEndpoint)

  // Bootstrap grpc entry
  grpcEntry.Bootstrap(context.Background())

  // Wait for shutdown signal
  rkentry.GlobalAppCtx.WaitForShutdownSig()

  // Interrupt grpc entry
  grpcEntry.Interrupt(context.Background())
}

// GreeterServer Implementation of GreeterServer.
type GreeterServer struct{}

// Greeter Handle Greeter method.
func (server *GreeterServer) Greeter(context.Context, *proto.GreeterRequest) (*proto.GreeterResponse, error) {
  return &proto.GreeterResponse{}, nil
}

5. Start server

$ go run main.go

6. Validation

6.1 gRPC & grpc-gateway server

Try to test the gRPC & grpc-gateway service with curl & grpcurl:

# Curl to common service
$ curl localhost:8080/rk/v1/ready
{"ready":true}

6.2 Swagger UI

Please refer to the sw section in the Full YAML.

By default, we can access the swagger UI at http://localhost:8080/sw

sw

6.3 Docs UI

Please refer to the docs section in the Full YAML.

By default, we can access the docs UI at http://localhost:8080/docs

docs

6.4 Prometheus Metrics

Please refer to the middleware.prom section in the Full YAML.

By default, we can access the prometheus client at http://localhost:8080/metrics

prom

6.5 Logging

Please refer to the middleware.logging section in the Full YAML.

By default, we enable the zap logger and event logger with encoding type [console]. Encoding types [json] and [flatten] are also supported.

2021-12-28T05:36:21.561+0800    INFO    boot/grpc_entry.go:1515 Bootstrap grpcEntry     {"eventId": "db2c977c-e0ff-4b21-bc0d-5966f1cad093", "entryName": "greeter"}
------------------------------------------------------------------------
endTime=2021-12-28T05:36:21.563575+08:00
startTime=2021-12-28T05:36:21.561362+08:00
elapsedNano=2213846
timezone=CST
ids={"eventId":"db2c977c-e0ff-4b21-bc0d-5966f1cad093"}
app={"appName":"rk","appVersion":"","entryName":"greeter","entryType":"GrpcEntry"}
env={"arch":"amd64","az":"*","domain":"*","hostname":"lark.local","localIP":"10.8.0.2","os":"darwin","realm":"*","region":"*"}
payloads={"commonServiceEnabled":true,"commonServicePathPrefix":"/rk/v1/","grpcPort":8080,"gwPort":8080,"promEnabled":true,"promPath":"/metrics","promPort":8080,"swEnabled":true,"swPath":"/sw/","tvEnabled":true,"tvPath":"/rk/v1/tv/"}
error={}
counters={}
pairs={}
timing={}
remoteAddr=localhost
operation=Bootstrap
resCode=OK
eventStatus=Ended
EOE

6.6 Meta

Please refer to the meta section in the Full YAML.

By default, we will send back some metadata to the client as headers.

$ curl -vs localhost:8080/rk/v1/ready
...
< HTTP/1.1 200 OK
< Content-Type: application/json
< X-Request-Id: 7e4f5ac5-3369-485f-89f7-55551cc4a9a1
< X-Rk-App-Name: rk
< X-Rk-App-Unix-Time: 2021-12-28T05:39:50.508328+08:00
< X-Rk-App-Version: 
< X-Rk-Received-Time: 2021-12-28T05:39:50.508328+08:00
< Date: Mon, 27 Dec 2021 21:39:50 GMT
...

6.7 Send request

We registered the /v1/greeter API in the grpc-gateway server, so let's validate it!

$ curl -vs localhost:8080/v1/greeter             
*   Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 8080 failed: Connection refused
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /v1/greeter HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.64.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< Content-Type: application/json
< X-Request-Id: 07b0fbf6-cebf-40ac-84a2-533bbd4b8958
< X-Rk-App-Name: rk
< X-Rk-App-Unix-Time: 2021-12-28T05:41:04.653652+08:00
< X-Rk-App-Version: 
< X-Rk-Received-Time: 2021-12-28T05:41:04.653652+08:00
< Date: Mon, 27 Dec 2021 21:41:04 GMT
< Content-Length: 2
< 
* Connection #0 to host localhost left intact
{}

We registered the api.v1.Greeter.Greeter API in the gRPC server, so let's validate it!

$ grpcurl -plaintext localhost:8080 api.v1.Greeter.Greeter 
{}

6.8 RPC logs

The logs below would be printed to stdout.

The first block of logs is from the grpc-gateway request.

The second block of logs is from the gRPC request.

------------------------------------------------------------------------
endTime=2021-12-28T05:45:52.986041+08:00
startTime=2021-12-28T05:45:52.985956+08:00
elapsedNano=85065
timezone=CST
ids={"eventId":"88362f69-7eda-4f03-bdbe-7ef667d06bac","requestId":"88362f69-7eda-4f03-bdbe-7ef667d06bac"}
app={"appName":"rk","appVersion":"","entryName":"greeter","entryType":"GrpcEntry"}
env={"arch":"amd64","az":"*","domain":"*","hostname":"lark.local","localIP":"10.8.0.2","os":"darwin","realm":"*","region":"*"}
payloads={"grpcMethod":"Greeter","grpcService":"api.v1.Greeter","grpcType":"unaryServer","gwMethod":"GET","gwPath":"/v1/greeter","gwScheme":"http","gwUserAgent":"curl/7.64.1"}
error={}
counters={}
pairs={}
timing={}
remoteAddr=127.0.0.1:61520
operation=/api.v1.Greeter/Greeter
resCode=OK
eventStatus=Ended
EOE
------------------------------------------------------------------------
endTime=2021-12-28T05:44:45.686734+08:00
startTime=2021-12-28T05:44:45.686592+08:00
elapsedNano=141716
timezone=CST
ids={"eventId":"7765862c-9e83-443a-a6e5-bb28f17f8ea0","requestId":"7765862c-9e83-443a-a6e5-bb28f17f8ea0"}
app={"appName":"rk","appVersion":"","entryName":"greeter","entryType":"GrpcEntry"}
env={"arch":"amd64","az":"*","domain":"*","hostname":"lark.local","localIP":"10.8.0.2","os":"darwin","realm":"*","region":"*"}
payloads={"grpcMethod":"Greeter","grpcService":"api.v1.Greeter","grpcType":"unaryServer","gwMethod":"","gwPath":"","gwScheme":"","gwUserAgent":""}
error={}
counters={}
pairs={}
timing={}
remoteAddr=127.0.0.1:57149
operation=/api.v1.Greeter/Greeter
resCode=OK
eventStatus=Ended
EOE

6.9 RPC prometheus metrics

The Prometheus client will automatically register into the grpc-gateway instance at /metrics.

Access http://localhost:8080/metrics

image

YAML options

Users can start multiple gRPC and grpc-gateway instances at the same time. Please make sure to use different ports and names.
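
For instance, assuming boot.yaml declares a second entry (here called greeter2, listening on a different port) next to greeter, a minimal sketch following the main.go shown above might look like this:

// main.go (sketch) -- assumes boot.yaml contains two grpc entries: "greeter" and "greeter2"
package main

import (
  "context"
  _ "embed"

  "github.com/rookie-ninja/rk-entry/v2/entry"
  "github.com/rookie-ninja/rk-grpc/v2/boot"
)

//go:embed boot.yaml
var boot []byte

func main() {
  rkentry.BootstrapPreloadEntryYAML(boot)

  // RegisterGrpcEntryYAML returns a map keyed by entry name
  res := rkgrpc.RegisterGrpcEntryYAML(boot)

  greeter := res["greeter"].(*rkgrpc.GrpcEntry)
  greeter2 := res["greeter2"].(*rkgrpc.GrpcEntry) // hypothetical second entry

  // Register gRPC services / gateway funcs on each entry as needed, then bootstrap both
  greeter.Bootstrap(context.Background())
  greeter2.Bootstrap(context.Background())

  rkentry.GlobalAppCtx.WaitForShutdownSig()

  greeter.Interrupt(context.Background())
  greeter2.Interrupt(context.Background())
}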

gRPC Service

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.name | Required, The name of gRPC server | string | N/A |
| grpc.enabled | Required, Enable gRPC entry | bool | false |
| grpc.port | Required, The port of gRPC server | integer | nil, server won't start |
| grpc.description | Optional, Description of gRPC entry. | string | "" |
| grpc.enableReflection | Optional, Enable gRPC server reflection | boolean | false |
| grpc.enableRkGwOption | Optional, Enable RK style grpc-gateway server options | bool | false |
| grpc.noRecvMsgSizeLimit | Optional, Disable gRPC server side receive message size limit | bool | false |
| grpc.certEntry | Optional, Reference of certEntry declared in cert entry | string | "" |
| grpc.loggerEntry | Optional, Reference of loggerEntry declared in LoggerEntry | string | "" |
| grpc.eventEntry | Optional, Reference of eventEntry declared in event entry | string | "" |

gRPC gateway options

Please refer to the repository below for detailed explanations.

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.gwOption.marshal.multiline | Optional, Enable multiline in grpc-gateway marshaller | bool | false |
| grpc.gwOption.marshal.emitUnpopulated | Optional, Enable emitUnpopulated in grpc-gateway marshaller | bool | false |
| grpc.gwOption.marshal.indent | Optional, Set indent in grpc-gateway marshaller | string | " " |
| grpc.gwOption.marshal.allowPartial | Optional, Enable allowPartial in grpc-gateway marshaller | bool | false |
| grpc.gwOption.marshal.useProtoNames | Optional, Enable useProtoNames in grpc-gateway marshaller | bool | false |
| grpc.gwOption.marshal.useEnumNumbers | Optional, Enable useEnumNumbers in grpc-gateway marshaller | bool | false |
| grpc.gwOption.unmarshal.allowPartial | Optional, Enable allowPartial in grpc-gateway unmarshaler | bool | false |
| grpc.gwOption.unmarshal.discardUnknown | Optional, Enable discardUnknown in grpc-gateway unmarshaler | bool | false |

Common Service

| Path | Description |
| --- | --- |
| /rk/v1/gc | Trigger GC |
| /rk/v1/ready | Get application readiness status. |
| /rk/v1/alive | Get application aliveness status. |
| /rk/v1/info | Get application and process info. |

| name | description | type | default value |
| --- | --- | --- | --- |
| gin.commonService.enabled | Optional, Enable builtin common service | boolean | false |
| gin.commonService.pathPrefix | Optional, Provide path prefix | string | /rk/v1 |

Swagger

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.sw.enabled | Optional, Enable swagger service over gin server | boolean | false |
| grpc.sw.path | Optional, The path to access swagger service from web | string | /sw |
| grpc.sw.jsonPath | Optional, Where the swagger.json files are stored locally | string | "" |
| grpc.sw.headers | Optional, Headers would be sent to caller as scheme of [key:value] | []string | [] |

Docs (RapiDoc)

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.docs.enabled | Optional, Enable RapiDoc service over gin server | boolean | false |
| grpc.docs.path | Optional, The path to access docs service from web | string | /docs |
| grpc.docs.jsonPath | Optional, Where the swagger.json or open API files are stored locally | string | "" |
| grpc.docs.headers | Optional, Headers would be sent to caller as scheme of [key:value] | []string | [] |
| grpc.docs.style.theme | Optional, light and dark are supported options | string | light |
| grpc.docs.debug | Optional, Enable debugging mode in RapiDoc which can be used the same as Swagger UI | boolean | false |

Prom Client

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.prom.enabled | Optional, Enable prometheus | boolean | false |
| grpc.prom.path | Optional, Path of prometheus | string | /metrics |
| grpc.prom.pusher.enabled | Optional, Enable prometheus pusher | bool | false |
| grpc.prom.pusher.jobName | Optional, Job name would be attached as label while pushing to remote pushgateway | string | "" |
| grpc.prom.pusher.remoteAddress | Optional, PushGateWay address, could be form of http://x.x.x.x or x.x.x.x | string | "" |
| grpc.prom.pusher.intervalMs | Optional, Push interval in milliseconds | string | 1000 |
| grpc.prom.pusher.basicAuth | Optional, Basic auth used to interact with remote pushgateway, form of [user:pass] | string | "" |
| grpc.prom.pusher.certEntry | Optional, Reference of rkentry.CertEntry | string | "" |

Static file handler Service

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.static.enabled | Optional, Enable static file handler | boolean | false |
| grpc.static.path | Optional, path of static file handler | string | /static |
| grpc.static.sourceType | Required, local and pkger supported | string | "" |
| grpc.static.sourcePath | Required, full path of source directory | string | "" |
  • About embed.FS: the user has to set the embed.FS before calling Bootstrap(), as below:

//go:embed /*
var staticFS embed.FS

rkentry.GlobalAppCtx.AddEmbedFS(rkentry.StaticFileHandlerEntryType, "greeter", &staticFS)


Middlewares

| name                   | description                                            | type     | default value |
|------------------------|--------------------------------------------------------|----------|---------------|
| grpc.middleware.ignore | The paths of prefix that will be ignored by middleware | []string | []            |

Logging

| name                                      | description                                            | type     | default value |
|-------------------------------------------|--------------------------------------------------------|----------|---------------|
| grpc.middleware.logging.enabled           | Enable log middleware                                  | boolean  | false         |
| grpc.middleware.logging.ignore            | The paths of prefix that will be ignored by middleware | []string | []            |
| grpc.middleware.logging.loggerEncoding    | json or console or flatten                             | string   | console       |
| grpc.middleware.logging.loggerOutputPaths | Output paths                                           | []string | stdout        |
| grpc.middleware.logging.eventEncoding     | json or console or flatten                             | string   | console       |
| grpc.middleware.logging.eventOutputPaths  | Output paths                                           | []string | stdout        |

We will log two types of logs for every RPC call.

- Logger

Contains user printed logging with requestId or traceId.

- Event

Contains per-RPC metadata, response information, environment information, etc.

| Field       | Description                                                                                                                                                                                                                                                                                                           |
|-------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| endTime     | As name described                                                                                                                                                                                                                                                                                                     |
| startTime   | As name described                                                                                                                                                                                                                                                                                                     |
| elapsedNano | Elapsed time for RPC in nanoseconds                                                                                                                                                                                                                                                                                   |
| timezone    | As name described                                                                                                                                                                                                                                                                                                     |
| ids         | Contains three different ids(eventId, requestId and traceId). If meta interceptor was enabled or event.SetRequestId() was called by user, then requestId would be attached. eventId would be the same as requestId if meta interceptor was enabled. If trace interceptor was enabled, then traceId would be attached. |
| app         | Contains [appName, appVersion](https://github.com/rookie-ninja/rk-entry#appinfoentry), entryName, entryType.                                                                                                                                                                                                          |
| env         | Contains arch, az, domain, hostname, localIP, os, realm, region. realm, region, az, domain were retrieved from environment variable named as REALM, REGION, AZ and DOMAIN. "*" means empty environment variable.                                                                                                      |
| payloads    | Contains RPC related metadata                                                                                                                                                                                                                                                                                         |
| error       | Contains errors if occur                                                                                                                                                                                                                                                                                              |
| counters    | Set by calling event.SetCounter() by user.                                                                                                                                                                                                                                                                            |
| pairs       | Set by calling event.AddPair() by user.                                                                                                                                                                                                                                                                               |
| timing      | Set by calling event.StartTimer() and event.EndTimer() by user.                                                                                                                                                                                                                                                       |
| remoteAddr  | As name described                                                                                                                                                                                                                                                                                                     |
| operation   | RPC method name                                                                                                                                                                                                                                                                                                       |
| resCode     | Response code of RPC                                                                                                                                                                                                                                                                                                  |
| eventStatus | Ended or InProgress                                                                                                                                                                                                                                                                                                   |

- Example:

------------------------------------------------------------------------
endTime=2021-06-24T05:58:48.282193+08:00
startTime=2021-06-24T05:58:48.28204+08:00
elapsedNano=153005
timezone=CST
ids={"eventId":"573ce6a8-308b-4fc0-9255-33608b9e41d4","requestId":"573ce6a8-308b-4fc0-9255-33608b9e41d4"}
app={"appName":"rk-grpc","appVersion":"master-xxx","entryName":"greeter","entryType":"GrpcEntry"}
env={"arch":"amd64","az":"*","domain":"*","hostname":"lark.local","localIP":"10.8.0.6","os":"darwin","realm":"*","region":"*"}
payloads={"grpcMethod":"Healthy","grpcService":"rk.api.v1.RkCommonService","grpcType":"unaryServer","gwMethod":"GET","gwPath":"/rk/v1/healthy","gwScheme":"http","gwUserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36"}
error={}
counters={}
pairs={"healthy":"true"}
timing={}
remoteAddr=localhost:57135
operation=/rk.api.v1.RkCommonService/Healthy
resCode=OK
eventStatus=Ended
EOE

Prometheus

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.middleware.prom.enabled | Enable metrics middleware | boolean | false |
| grpc.middleware.prom.ignore | The paths of prefix that will be ignored by middleware | []string | [] |

Auth

Enables server side auth. codes.Unauthenticated would be returned to the client if the request is not authorized with the user defined credentials.

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.middleware.auth.enabled | Enable auth middleware | boolean | false |
| grpc.middleware.auth.ignore | The paths of prefix that will be ignored by middleware | []string | [] |
| grpc.middleware.auth.basic | Basic auth credentials as scheme of user:pass | []string | [] |
| grpc.middleware.auth.apiKey | API key auth | []string | [] |

Meta

Send application metadata as header to client.

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.middleware.meta.enabled | Enable meta middleware | boolean | false |
| grpc.middleware.meta.ignore | The paths of prefix that will be ignored by middleware | []string | [] |
| grpc.middleware.meta.prefix | Header key was formed as X-<prefix>-XXX | string | RK |

Trace

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.middleware.trace.enabled | Enable tracing middleware | boolean | false |
| grpc.middleware.trace.ignore | The paths of prefix that will be ignored by middleware | []string | [] |
| grpc.middleware.trace.exporter.file.enabled | Enable file exporter | boolean | false |
| grpc.middleware.trace.exporter.file.outputPath | Export tracing info to files | string | stdout |
| grpc.middleware.trace.exporter.jaeger.agent.enabled | Export tracing info to jaeger agent | boolean | false |
| grpc.middleware.trace.exporter.jaeger.agent.host | As name described | string | localhost |
| grpc.middleware.trace.exporter.jaeger.agent.port | As name described | int | 6831 |
| grpc.middleware.trace.exporter.jaeger.collector.enabled | Export tracing info to jaeger collector | boolean | false |
| grpc.middleware.trace.exporter.jaeger.collector.endpoint | As name described | string | http://localhost:16368/api/trace |
| grpc.middleware.trace.exporter.jaeger.collector.username | As name described | string | "" |
| grpc.middleware.trace.exporter.jaeger.collector.password | As name described | string | "" |

RateLimit

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.middleware.rateLimit.enabled | Enable rate limit middleware | boolean | false |
| grpc.middleware.rateLimit.ignore | The paths of prefix that will be ignored by middleware | []string | [] |
| grpc.middleware.rateLimit.algorithm | Provide algorithm, tokenBucket and leakyBucket are available options | string | tokenBucket |
| grpc.middleware.rateLimit.reqPerSec | Request per second globally | int | 0 |
| grpc.middleware.rateLimit.paths.path | Full path | string | "" |
| grpc.middleware.rateLimit.paths.reqPerSec | Request per second by full path | int | 0 |

Timeout

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.middleware.timeout.enabled | Enable timeout middleware | boolean | false |
| grpc.middleware.timeout.ignore | The paths of prefix that will be ignored by middleware | []string | [] |
| grpc.middleware.timeout.timeoutMs | Global timeout in milliseconds. | int | 5000 |
| grpc.middleware.timeout.paths.path | Full path | string | "" |
| grpc.middleware.timeout.paths.timeoutMs | Timeout in milliseconds by full path | int | 5000 |

CORS

Middleware for grpc-gateway.

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.middleware.cors.enabled | Enable cors middleware | boolean | false |
| grpc.middleware.cors.ignore | The paths of prefix that will be ignored by middleware | []string | [] |
| grpc.middleware.cors.allowOrigins | Provide allowed origins with wildcard enabled. | []string | * |
| grpc.middleware.cors.allowMethods | Provide allowed methods returned as response header of OPTIONS request. | []string | All http methods |
| grpc.middleware.cors.allowHeaders | Provide allowed headers returned as response header of OPTIONS request. | []string | Headers from request |
| grpc.middleware.cors.allowCredentials | Returned as response header of OPTIONS request. | bool | false |
| grpc.middleware.cors.exposeHeaders | Provide exposed headers returned as response header of OPTIONS request. | []string | "" |
| grpc.middleware.cors.maxAge | Provide max age returned as response header of OPTIONS request. | int | 0 |

JWT

rk-grpc uses github.com/golang-jwt/jwt/v4; please be aware of version compatibility.

In order to make the swagger UI and RK TV work under JWT without a JWT token, we need to ignore path prefixes as below.

jwt:
  ...
  ignore:
    - "/sw"

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.middleware.jwt.enabled | Optional, Enable JWT middleware | boolean | false |
| grpc.middleware.jwt.ignore | Optional, Provide ignoring path prefix. | []string | [] |
| grpc.middleware.jwt.signerEntry | Optional, Provide signerEntry name. | string | "" |
| grpc.middleware.jwt.symmetric.algorithm | Required if symmetric specified. One of HS256, HS384, HS512 | string | "" |
| grpc.middleware.jwt.symmetric.token | Optional, raw token for signing and verification | string | "" |
| grpc.middleware.jwt.symmetric.tokenPath | Optional, path of token file | string | "" |
| grpc.middleware.jwt.asymmetric.algorithm | Required if asymmetric specified. One of RS256, RS384, RS512, ES256, ES384, ES512 | string | "" |
| grpc.middleware.jwt.asymmetric.privateKey | Optional, raw private key file for signing | string | "" |
| grpc.middleware.jwt.asymmetric.privateKeyPath | Optional, private key file path for signing | string | "" |
| grpc.middleware.jwt.asymmetric.publicKey | Optional, raw public key file for verification | string | "" |
| grpc.middleware.jwt.asymmetric.publicKeyPath | Optional, public key file path for verification | string | "" |
| grpc.middleware.jwt.tokenLookup | Provide token lookup scheme, please see the description below. | string | "header:Authorization" |
| grpc.middleware.jwt.authScheme | Provide auth scheme. | string | Bearer |

The supported scheme of tokenLookup

// Optional. Default value "header:Authorization".
// Possible values:
// - "header:<name>"
// - "query:<name>"
// Multiple sources example:
// - "header: Authorization,cookie: myowncookie"

Secure

Middleware for grpc-gateway.

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.middleware.secure.enabled | Enable secure middleware | boolean | false |
| grpc.middleware.secure.ignore | Ignoring path prefix. | []string | [] |
| grpc.middleware.secure.xssProtection | X-XSS-Protection header value. | string | "1; mode=block" |
| grpc.middleware.secure.contentTypeNosniff | X-Content-Type-Options header value. | string | nosniff |
| grpc.middleware.secure.xFrameOptions | X-Frame-Options header value. | string | SAMEORIGIN |
| grpc.middleware.secure.hstsMaxAge | Strict-Transport-Security header value. | int | 0 |
| grpc.middleware.secure.hstsExcludeSubdomains | Excluding subdomains of HSTS. | bool | false |
| grpc.middleware.secure.hstsPreloadEnabled | Enabling HSTS preload. | bool | false |
| grpc.middleware.secure.contentSecurityPolicy | Content-Security-Policy header value. | string | "" |
| grpc.middleware.secure.cspReportOnly | Content-Security-Policy-Report-Only header value. | bool | false |
| grpc.middleware.secure.referrerPolicy | Referrer-Policy header value. | string | "" |

CSRF

Middleware for grpc-gateway.

| name | description | type | default value |
| --- | --- | --- | --- |
| grpc.middleware.csrf.enabled | Enable csrf middleware | boolean | false |
| grpc.middleware.csrf.ignore | Ignoring path prefix. | []string | [] |
| grpc.middleware.csrf.tokenLength | Provide the length of the generated token. | int | 32 |
| grpc.middleware.csrf.tokenLookup | Provide csrf token lookup rules, please see code comments for details. | string | "header:X-CSRF-Token" |
| grpc.middleware.csrf.cookieName | Provide name of the CSRF cookie. This cookie will store the CSRF token. | string | _csrf |
| grpc.middleware.csrf.cookieDomain | Domain of the CSRF cookie. | string | "" |
| grpc.middleware.csrf.cookiePath | Path of the CSRF cookie. | string | "" |
| grpc.middleware.csrf.cookieMaxAge | Provide max age (in seconds) of the CSRF cookie. | int | 86400 |
| grpc.middleware.csrf.cookieHttpOnly | Indicates if the CSRF cookie is HTTP only. | bool | false |
| grpc.middleware.csrf.cookieSameSite | Indicates SameSite mode of the CSRF cookie. Options: lax, strict, none, default | string | default |

Full YAML

---
#app:
#  name: my-app                                            # Optional, default: "rk-app"
#  version: "v1.0.0"                                       # Optional, default: "v0.0.0"
#  description: "this is description"                      # Optional, default: ""
#  keywords: ["rk", "golang"]                              # Optional, default: []
#  homeUrl: "http://example.com"                           # Optional, default: ""
#  docsUrl: ["http://example.com"]                         # Optional, default: []
#  maintainers: ["rk-dev"]                                 # Optional, default: []
#logger:
#  - name: my-logger                                       # Required
#    description: "Description of entry"                   # Optional
#    domain: "*"                                           # Optional, default: "*"
#    zap:                                                  # Optional
#      level: info                                         # Optional, default: info
#      development: true                                   # Optional, default: true
#      disableCaller: false                                # Optional, default: false
#      disableStacktrace: true                             # Optional, default: true
#      encoding: console                                   # Optional, default: console
#      outputPaths: ["stdout"]                             # Optional, default: [stdout]
#      errorOutputPaths: ["stderr"]                        # Optional, default: [stderr]
#      encoderConfig:                                      # Optional
#        timeKey: "ts"                                     # Optional, default: ts
#        levelKey: "level"                                 # Optional, default: level
#        nameKey: "logger"                                 # Optional, default: logger
#        callerKey: "caller"                               # Optional, default: caller
#        messageKey: "msg"                                 # Optional, default: msg
#        stacktraceKey: "stacktrace"                       # Optional, default: stacktrace
#        skipLineEnding: false                             # Optional, default: false
#        lineEnding: "\n"                                  # Optional, default: \n
#        consoleSeparator: "\t"                            # Optional, default: \t
#      sampling:                                           # Optional, default: nil
#        initial: 0                                        # Optional, default: 0
#        thereafter: 0                                     # Optional, default: 0
#      initialFields:                                      # Optional, default: empty map
#        key: value
#    lumberjack:                                           # Optional, default: nil
#      filename:
#      maxsize: 1024                                       # Optional, suggested: 1024 (MB)
#      maxage: 7                                           # Optional, suggested: 7 (day)
#      maxbackups: 3                                       # Optional, suggested: 3 (day)
#      localtime: true                                     # Optional, suggested: true
#      compress: true                                      # Optional, suggested: true
#    loki:
#      enabled: true                                       # Optional, default: false
#      addr: localhost:3100                                # Optional, default: localhost:3100
#      path: /loki/api/v1/push                             # Optional, default: /loki/api/v1/push
#      username: ""                                        # Optional, default: ""
#      password: ""                                        # Optional, default: ""
#      maxBatchWaitMs: 3000                                # Optional, default: 3000
#      maxBatchSize: 1000                                  # Optional, default: 1000
#      insecureSkipVerify: false                           # Optional, default: false
#      labels:                                             # Optional, default: empty map
#        my_label_key: my_label_value
#event:
#  - name: my-event                                        # Required
#    description: "Description of entry"                   # Optional
#    domain: "*"                                           # Optional, default: "*"
#    encoding: console                                     # Optional, default: console
#    outputPaths: ["stdout"]                               # Optional, default: [stdout]
#    lumberjack:                                           # Optional, default: nil
#      filename:
#      maxsize: 1024                                       # Optional, suggested: 1024 (MB)
#      maxage: 7                                           # Optional, suggested: 7 (day)
#      maxbackups: 3                                       # Optional, suggested: 3 (day)
#      localtime: true                                     # Optional, suggested: true
#      compress: true                                      # Optional, suggested: true
#    loki:
#      enabled: true                                       # Optional, default: false
#      addr: localhost:3100                                # Optional, default: localhost:3100
#      path: /loki/api/v1/push                             # Optional, default: /loki/api/v1/push
#      username: ""                                        # Optional, default: ""
#      password: ""                                        # Optional, default: ""
#      maxBatchWaitMs: 3000                                # Optional, default: 3000
#      maxBatchSize: 1000                                  # Optional, default: 1000
#      insecureSkipVerify: false                           # Optional, default: false
#      labels:                                             # Optional, default: empty map
#        my_label_key: my_label_value
#cert:
#  - name: my-cert                                         # Required
#    description: "Description of entry"                   # Optional, default: ""
#    domain: "*"                                           # Optional, default: "*"
#    caPath: "certs/ca.pem"                                # Optional, default: ""
#    certPemPath: "certs/server-cert.pem"                  # Optional, default: ""
#    keyPemPath: "certs/server-key.pem"                    # Optional, default: ""
#config:
#  - name: my-config                                       # Required
#    description: "Description of entry"                   # Optional, default: ""
#    domain: "*"                                           # Optional, default: "*"
##    path: "config/config.yaml"                            # Optional
#    envPrefix: ""                                         # Optional, default: ""
#    content:                                              # Optional, default: empty map
#      key: value
grpc:
  - name: greeter                                          # Required
    enabled: true                                          # Required
    port: 8080                                             # Required
#    description: "greeter server"                         # Optional, default: ""
#    enableReflection: true                                # Optional, default: false
#    enableRkGwOption: true                                # Optional, default: false
#    gwOption:                                             # Optional, default: nil
#      marshal:                                            # Optional, default: nil
#        multiline: false                                  # Optional, default: false
#        emitUnpopulated: false                            # Optional, default: false
#        indent: ""                                        # Optional, default: false
#        allowPartial: false                               # Optional, default: false
#        useProtoNames: false                              # Optional, default: false
#        useEnumNumbers: false                             # Optional, default: false
#      unmarshal:                                          # Optional, default: nil
#        allowPartial: false                               # Optional, default: false
#        discardUnknown: false                             # Optional, default: false
#    noRecvMsgSizeLimit: true                              # Optional, default: false
#    certEntry: my-cert                                    # Optional, default: "", reference of cert entry declared above
#    loggerEntry: my-logger                                # Optional, default: "", reference of logger entry declared above, STDOUT will be used if missing
#    eventEntry: my-event                                  # Optional, default: "", reference of event entry declared above, STDOUT will be used if missing
#    sw:
#      enabled: true                                       # Optional, default: false
#      path: "sw"                                          # Optional, default: "sw"
#      jsonPath: ""                                        # Optional
#      headers: ["sw:rk"]                                  # Optional, default: []
#    docs:
#      enabled: true                                       # Optional, default: false
#      path: "docs"                                        # Optional, default: "docs"
#      specPath: ""                                        # Optional
#      headers: ["sw:rk"]                                  # Optional, default: []
#      style:                                              # Optional
#        theme: "light"                                    # Optional, default: "light"
#      debug: false                                        # Optional, default: false
#    commonService:
#      enabled: true                                       # Optional, default: false
#    static:
#      enabled: true                                       # Optional, default: false
#      path: "/static"                                     # Optional, default: /static
#      sourceType: local                                   # Required, options: pkger, local
#      sourcePath: "."                                     # Required, full path of source directory
#    pprof:
#      enabled: true                                       # Optional, default: false
#      path: "/pprof"                                      # Optional, default: /pprof
#    prom:
#      enabled: true                                       # Optional, default: false
#      path: ""                                            # Optional, default: "metrics"
#      pusher:
#        enabled: false                                    # Optional, default: false
#        jobName: "greeter-pusher"                         # Required
#        remoteAddress: "localhost:9091"                   # Required
#        basicAuth: "user:pass"                            # Optional, default: ""
#        intervalMs: 10000                                 # Optional, default: 1000
#        certEntry: my-cert                                # Optional, default: "", reference of cert entry declared above
#    middleware:
#      ignore: [""]                                        # Optional, default: []
#      logging:
#        enabled: true                                     # Optional, default: false
#        ignore: [""]                                      # Optional, default: []
#        loggerEncoding: "console"                         # Optional, default: "console"
#        loggerOutputPaths: ["logs/app.log"]               # Optional, default: ["stdout"]
#        eventEncoding: "console"                          # Optional, default: "console"
#        eventOutputPaths: ["logs/event.log"]              # Optional, default: ["stdout"]
#      prom:
#        enabled: true                                     # Optional, default: false
#        ignore: [""]                                      # Optional, default: []
#      auth:
#        enabled: true                                     # Optional, default: false
#        ignore: [""]                                      # Optional, default: []
#        basic:
#          - "user:pass"                                   # Optional, default: []
#        apiKey:
#          - "keys"                                        # Optional, default: []
#      meta:
#        enabled: true                                     # Optional, default: false
#        ignore: [""]                                      # Optional, default: []
#        prefix: "rk"                                      # Optional, default: "rk"
#      trace:
#        enabled: true                                     # Optional, default: false
#        ignore: [""]                                      # Optional, default: []
#        exporter:                                         # Optional, default will create a stdout exporter
#          file:
#            enabled: true                                 # Optional, default: false
#            outputPath: "logs/trace.log"                  # Optional, default: stdout
#          jaeger:
#            agent:
#              enabled: false                              # Optional, default: false
#              host: ""                                    # Optional, default: localhost
#              port: 0                                     # Optional, default: 6831
#            collector:
#              enabled: true                               # Optional, default: false
#              endpoint: ""                                # Optional, default: http://localhost:14268/api/traces
#              username: ""                                # Optional, default: ""
#              password: ""                                # Optional, default: ""
#      rateLimit:
#        enabled: false                                    # Optional, default: false
#        ignore: [""]                                      # Optional, default: []
#        algorithm: "leakyBucket"                          # Optional, default: "tokenBucket"
#        reqPerSec: 100                                    # Optional, default: 1000000
#        paths:
#          - path: "/rk/v1/healthy"                        # Optional, default: ""
#            reqPerSec: 0                                  # Optional, default: 1000000
#      timeout:
#        enabled: false                                    # Optional, default: false
#        ignore: [""]                                      # Optional, default: []
#        timeoutMs: 5000                                   # Optional, default: 5000
#        paths:
#          - path: "/rk/v1/healthy"                        # Optional, default: ""
#            timeoutMs: 1000                               # Optional, default: 5000
#      jwt:
#        enabled: true                                     # Optional, default: false
#        ignore: [ "" ]                                    # Optional, default: []
#        signerEntry: ""                                   # Optional, default: ""
#        symmetric:                                        # Optional
#          algorithm: ""                                   # Required, default: ""
#          token: ""                                       # Optional, default: ""
#          tokenPath: ""                                   # Optional, default: ""
#        asymmetric:                                       # Optional
#          algorithm: ""                                   # Required, default: ""
#          privateKey: ""                                  # Optional, default: ""
#          privateKeyPath: ""                              # Optional, default: ""
#          publicKey: ""                                   # Optional, default: ""
#          publicKeyPath: ""                               # Optional, default: ""
#        tokenLookup: "header:<name>"                      # Optional, default: "header:Authorization"
#        authScheme: "Bearer"                              # Optional, default: "Bearer"
#      secure:
#        enabled: true                                     # Optional, default: false
#        ignore: [""]                                      # Optional, default: []
#        xssProtection: ""                                 # Optional, default: "1; mode=block"
#        contentTypeNosniff: ""                            # Optional, default: nosniff
#        xFrameOptions: ""                                 # Optional, default: SAMEORIGIN
#        hstsMaxAge: 0                                     # Optional, default: 0
#        hstsExcludeSubdomains: false                      # Optional, default: false
#        hstsPreloadEnabled: false                         # Optional, default: false
#        contentSecurityPolicy: ""                         # Optional, default: ""
#        cspReportOnly: false                              # Optional, default: false
#        referrerPolicy: ""                                # Optional, default: ""
#      csrf:
#        enabled: true                                     # Optional, default: false
#        ignore: [""]                                      # Optional, default: []
#        tokenLength: 32                                   # Optional, default: 32
#        tokenLookup: "header:X-CSRF-Token"                # Optional, default: "header:X-CSRF-Token"
#        cookieName: "_csrf"                               # Optional, default: _csrf
#        cookieDomain: ""                                  # Optional, default: ""
#        cookiePath: ""                                    # Optional, default: ""
#        cookieMaxAge: 86400                               # Optional, default: 86400
#        cookieHttpOnly: false                             # Optional, default: false
#        cookieSameSite: "default"                         # Optional, default: "default", options: lax, strict, none, default
#      gzip:
#        enabled: true                                     # Optional, default: false
#        ignore: [""]                                      # Optional, default: []
#        level: bestSpeed                                  # Optional, options: [noCompression, bestSpeed, bestCompression, defaultCompression, huffmanOnly]
#      cors:
#        enabled: true                                     # Optional, default: false
#        ignore: [""]                                      # Optional, default: []
#        allowOrigins:                                     # Optional, default: []
#          - "http://localhost:*"                          # Optional, default: *
#        allowCredentials: false                           # Optional, default: false
#        allowHeaders: []                                  # Optional, default: []
#        allowMethods: []                                  # Optional, default: []
#        exposeHeaders: []                                 # Optional, default: []
#        maxAge: 0                                         # Optional, default: 0

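For orientation, here is a minimal boot.yaml sketch that enables a few of the middlewares documented above, using only keys and values from that reference. The surrounding gin entry skeleton (entry name, port and enabled flag) is an assumption for illustration and may differ in your setup.

---
gin:
  - name: greeter                       # assumed entry name, illustration only
    port: 8080                          # assumed port, illustration only
    enabled: true                       # assumed entry flag, illustration only
    middleware:
      gzip:
        enabled: true
        level: bestSpeed                # options: noCompression, bestSpeed, bestCompression, defaultCompression, huffmanOnly
      cors:
        enabled: true
        allowOrigins:
          - "http://localhost:*"
      timeout:
        enabled: true
        timeoutMs: 5000                 # request timeout in milliseconds
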
Notice of V2

The master branch of this package is being upgraded and will be released as v2.x.x soon.

Major changes are listed below and will be updated with every commit.

Changes from v1.2.22 (last version) to v2 (new version):

  • TV is not supported because of a license issue; a new TV web UI will be released soon
  • Remote repository support for ConfigEntry and CertEntry was removed
  • The Swagger JSON file and boot.yaml file can be embedded into embed.FS and passed to rkentry
  • ZapLoggerEntry -> LoggerEntry
  • EventLoggerEntry -> EventEntry
  • LoggerEntry can be used as zap.Logger since all functions are inherited
  • PromEntry can be used as prometheus.Registry since all functions are inherited
  • The rk-common dependency was removed
  • Entries are organized by EntryType instead of EntryName, so users can reuse an entry name across different EntryTypes
  • grpc.interceptors -> gin.middleware in boot.yaml (see the before/after sketch below)
  • grpc.interceptors.loggingZap -> gin.middleware.logging in boot.yaml
  • grpc.interceptors.metricsProm -> gin.middleware.prom in boot.yaml
  • grpc.interceptors.tracingTelemetry -> gin.middleware.trace in boot.yaml
  • All middlewares now support the gin.middleware.xxx.ignorePrefix option in boot.yaml
  • Middlewares support gin.middleware.ignorePrefix in boot.yaml at global scope
  • LoggerEntry, EventEntry, ConfigEntry and CertEntry now support locale to distinguish between different environments
  • LoggerEntry, EventEntry and CertEntry can be referenced by the gin entry in boot.yaml
  • The Healthy API was replaced by Ready and Alive, which also accept a validation func from the user
  • DocsEntry was added to rk-entry
  • rk-entry supports utility functions for embed.FS
  • rk-entry was bumped up to v2

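To make the boot.yaml key renames above concrete, here is a minimal before/after sketch built only from the rename rows in the list above; the entry name and the enabled flags around the renamed keys are assumptions for illustration.

# v1.x boot.yaml (old keys)
grpc:
  - name: greeter                       # assumed entry name, illustration only
    interceptors:
      loggingZap:
        enabled: true                   # assumed flag, illustration only
      metricsProm:
        enabled: true
      tracingTelemetry:
        enabled: true

# v2.x boot.yaml (new keys)
gin:
  - name: greeter                       # assumed entry name, illustration only
    middleware:
      logging:
        enabled: true
      prom:
        enabled: true
      trace:
        enabled: true
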
Development Status: Stable

Build instruction

Simply run make all to validate your changes, or run the code in the example/ folder.

  • make all: runs unit tests, golangci-lint, doctoc and gofmt.
  • make buf

Test instruction

Run unit tests with the make test command.

The GitHub workflow will automatically run unit tests and golangci-lint for testing and lint validation.

Contributing

We encourage and support an active, healthy community of contributors, including you! Details are in the contribution guide and the code of conduct. The rk maintainers keep an eye on issues and pull requests, but you can also report any negative conduct to lark@rkdev.info.

Documentation

Author: Rookie-ninja
Source Code: https://github.com/rookie-ninja/rk-grpc 
License: Apache-2.0 license

