Ansible Nomad: Ansible Role for Nomad


This role was previously maintained by Brian Shumate and is now curated by @ansible-community/hashicorp-tools.


This Ansible role performs a basic Nomad installation, including the filesystem structure and example configuration.

It will also bootstrap a minimal cluster of 3 server nodes, and can do this in a development environment based on Vagrant and VirtualBox. See README_VAGRANT.md for more details about the Vagrant setup.

Requirements

This role requires an Arch Linux, Debian, RHEL, or Ubuntu distribution; the role is tested with the following specific software versions:

  • Ansible: 2.7.10
  • nomad: 0.12.1
  • Arch Linux
  • CentOS: 7
  • Debian: 8
  • RHEL: 7
  • Ubuntu: 16.04
  • unzip for unarchive module

Role Variables

The role defines most of its variables in defaults/main.yml:

nomad_debug

  • Nomad debug mode
  • Default value: no

nomad_skip_ensure_all_hosts

  • Allow running the role even if not all instances are connected
  • Default value: no

nomad_allow_purge_config

  • Allow purging obsolete configuration files. For example, remove server configuration if instance is no longer a server
  • Default value: no

nomad_version

  • Nomad version to install
  • Default value: 1.1.1

nomad_architecture_map

  • This variable does not need to be changed in most cases
  • Default value: Dictionary translating ansible_architecture to HashiCorp architecture naming convention

nomad_architecture

  • Host architecture
  • Default value: determined by {{ nomad_architecture_map[ansible_architecture] }}

nomad_pkg

  • Nomad package filename
  • Default value: nomad_{{ nomad_version }}_linux_{{ nomad_architecture }}.zip

nomad_zip_url

  • Nomad download URL
  • Default value: https://releases.hashicorp.com/nomad/{{ nomad_version }}/nomad_{{ nomad_version }}_linux_{{ nomad_architecture }}.zip

nomad_checksum_file_url

  • Nomad checksum file URL
  • Default value: https://releases.hashicorp.com/nomad/{{ nomad_version }}/nomad_{{ nomad_version }}_SHA256SUMS

nomad_bin_dir

  • Nomad binary installation path
  • Default value: /usr/local/bin

nomad_config_dir

  • Nomad configuration file path
  • Default value: /etc/nomad.d

nomad_data_dir

  • Nomad data path
  • Default value: /var/nomad

nomad_lockfile

  • Nomad lockfile path
  • Default value: /var/lock/subsys/nomad

nomad_run_dir

  • Nomad run path
  • Default value: /var/run/nomad

nomad_manage_user

  • Manage Nomad user?
  • Default value: yes

nomad_user

  • Nomad OS username
  • Default value: root

nomad_manage_group

  • Manage Nomad group?
  • Default value: no

nomad_group

  • Nomad OS group
  • Default value: bin

nomad_region

  • Default region
  • Default value: global

nomad_datacenter

  • Nomad datacenter label
  • Default value: dc1

nomad_log_level

  • Logging level
  • Default value: INFO

nomad_syslog_enable

  • Log to syslog
  • Default value: true

nomad_iface

  • Nomad network interface
  • Default value: {{ ansible_default_ipv4.interface }}

nomad_node_name

  • Nomad node name
  • Default value: {{ inventory_hostname_short }}

nomad_node_role

  • Nomad node role
  • options: client, server, both
  • Default value: client

nomad_leave_on_terminate

  • Send leave on termination
  • Default value: yes

nomad_leave_on_interrupt

  • Send leave on interrupt
  • Default value: no

nomad_disable_update_check

  • Disable update check
  • Default value: no

nomad_retry_max

  • Max retry join attempts
  • Default value: 0

nomad_retry_join

  • Enable retry join?
  • Default value: no

nomad_retry_interval

  • Retry join interval
  • Default value: 30s

nomad_rejoin_after_leave

  • Rejoin after leave?
  • Default value: no

nomad_enabled_schedulers

  • List of enabled schedulers
  • Default value: service, batch, system

nomad_num_schedulers

  • Number of schedulers
  • Default value: {{ ansible_processor_vcpus }}
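
For example, to pin the scheduler settings on a mixed-role node instead of relying on the defaults (the values here are illustrative, not recommendations):

nomad_enabled_schedulers:
  - service
  - batch
nomad_num_schedulers: 4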

nomad_node_gc_threshold

  • Node garbage collection threshold
  • Default value: 24h

nomad_job_gc_threshold

  • Job garbage collection threshold
  • Default value: 4h

nomad_eval_gc_threshold

  • Eval garbage collection threshold
  • Default value: 1h

nomad_deployment_gc_threshold

  • Deployment garbage collection threshold
  • Default value: 1h

nomad_encrypt_enable

  • Enable Gossip Encryption even if nomad_encrypt is not set
  • Default value: false

nomad_encrypt

  • Set the encryption key; should be the same across a cluster. If not present and nomad_encrypt_enable is true, the key will be generated & retrieved from the bootstrapped server.
  • Default value: ""

nomad_raft_protocol

  • Specifies the version of the Raft protocol used by Nomad servers for communication
  • Default value: 2

nomad_authoritative_region

  • Specifies the authoritative region, which provides a single source of truth for global configurations such as ACL Policies and global ACL tokens.
  • Default value: ""

nomad_node_class

  • Nomad node class
  • Default value: ""

nomad_no_host_uuid

  • Force the UUID generated by the client to be randomly generated
  • Default value: no

nomad_max_kill_timeout

  • Max kill timeout
  • Default value: 30s

nomad_network_interface

  • Nomad scheduler will choose from the IPs of this interface for allocating tasks
  • Default value: none

nomad_network_speed

  • Override network link speed (0 = no override)
  • Default value: 0

nomad_cpu_total_compute

  • Override CPU compute (0 = no override)
  • Default value: 0

nomad_gc_interval

  • Client garbage collection interval
  • Default value: 1m

nomad_gc_disk_usage_threshold

  • Disk usage threshold percentage for garbage collection
  • Default value: 80

nomad_gc_inodes_usage_threshold

  • Inode usage threshold percentage for garbage collection
  • Default value: 70

nomad_gc_parallel_destroys

  • Garbage collection max parallel destroys
  • Default value: 2

nomad_reserved

  • Reserved client resources
  • Default value: cpu: {{ nomad_reserved_cpu }}, memory: {{ nomad_reserved_memory }}, disk: {{ nomad_reserved_disk }}, ports: {{ nomad_reserved_ports }}

nomad_reserved_cpu

  • Reserved client CPU
  • Default value: 0

nomad_reserved_memory

  • Reserved client memory
  • Default value: 0

nomad_reserved_disk

  • Reserved client disk
  • Default value: 0

nomad_reserved_ports

  • Reserved client ports
  • Default value: 22
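
Taken together, the reserved-resource variables can be overridden like this (the values are illustrative, not recommendations):

nomad_reserved_cpu: 500
nomad_reserved_memory: 512
nomad_reserved_disk: 1024
nomad_reserved_ports: "22,80"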

nomad_host_volumes

  • List of host_volume entries used to make volumes available to jobs (stateful workloads).
  • Default value: []
  • Example:
nomad_host_volumes:
  - name: data
    path: /var/data
    owner: root
    group: bin
    mode: 0755
    read_only: false
  - name: config
    path: /etc/conf
    owner: root
    group: bin
    mode: 0644
    read_only: false

nomad_host_networks

  • List of host_network entries used to make different networks available to jobs instead of selecting a default interface. Especially useful on hosts with multiple NICs.
  • Default value: []
  • Example:
nomad_host_networks:
  - name: public
    cidr: 100.101.102.103/24
    reserved_ports: 22,80
  - name: private
    interface: eth0
    reserved_ports: 443

nomad_options

  • Driver options
  • Key value dict
  • Default value: {}

nomad_chroot_env

  • chroot environment definition for the Exec and Java drivers
  • Key value dict
  • Default value: false

nomad_meta

  • Meta data
  • Key value dict
  • Default value: {}

nomad_bind_address

  • Bind interface address
  • Default value: {{ hostvars[inventory_hostname]['ansible_'+ nomad_iface ]['ipv4']['address'] }}

nomad_advertise_address

  • Network interface address to advertise to other nodes
  • Default value: {{ hostvars[inventory_hostname]['ansible_'+ nomad_iface ]['ipv4']['address'] }}

nomad_ports

  • Ports used by Nomad
  • Default value: http: {{ nomad_ports_http }}, rpc: {{ nomad_ports_rpc }}, serf: {{ nomad_ports_serf }}

nomad_ports_http

  • Http port
  • Default value: 4646

nomad_ports_rpc

  • RPC port
  • Default value: 4647

nomad_ports_serf

  • Serf port
  • Default value: 4648

nomad_podman_enable

  • Installs the podman plugin
  • Default value: false

nomad_cni_enable

  • Installs the cni plugins
  • Default value: false

nomad_docker_enable

  • Install Docker subsystem on nodes?
  • Default value: false

nomad_plugins

  • Allows you to configure Nomad plugins.
  • Default: {}

Example:

nomad_plugins:
  nomad-driver-podman:
    config:
      volumes:
        enabled: true
        selinuxlabel: z
      recover_stopped: true

nomad_group_name

  • Ansible group that contains all cluster nodes
  • Default value: nomad_instances

nomad_servers

It's typically not necessary to manually alter this list.

  • List of server nodes
  • Default value: List of all nodes in nomad_group_name with nomad_node_role set to server or both
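
A minimal YAML inventory matching the defaults might look like this (hostnames are illustrative); the role derives nomad_servers from the nomad_node_role of each host in nomad_group_name:

all:
  children:
    nomad_instances:
      hosts:
        nomad-server-01: { nomad_node_role: server }
        nomad-server-02: { nomad_node_role: server }
        nomad-server-03: { nomad_node_role: server }
        nomad-client-01: { nomad_node_role: client }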

nomad_gather_server_facts

This feature makes it possible to gather the nomad_bind_address and nomad_advertise_address from servers that are currently not targeted by the playbook.

To make this possible the delegate_facts option is used. This option is broken in many Ansible versions, so this feature might not always work.

  • Gather facts from servers that are not currently targeted
  • Default value: 'no'

nomad_use_consul

  • Bootstrap Nomad via Consul's native zero-configuration support; assumes Consul's default ports, etc.
  • Default value: False

nomad_consul_address

  • The address of your Consul API; use in combination with nomad_use_consul=True. To use HTTPS, set nomad_consul_ssl; do NOT include the https:// scheme in this address.
  • Default value: localhost:8500

nomad_consul_ssl

  • If true then uses https.
  • Default value: false

nomad_consul_ca_file

  • Public certificate of the Consul CA; use in combination with nomad_consul_cert_file and nomad_consul_key_file.
  • Default value: ""

nomad_consul_cert_file

  • The public key which can be used to access consul.
  • Default value: ""

nomad_consul_key_file

  • The private key counterpart of nomad_consul_cert_file.
  • Default value: ""

nomad_consul_servers_service_name

  • The name of the consul service for your nomad servers
  • Default value: nomad-servers

nomad_consul_clients_service_name

  • The name of the consul service for your nomad clients
  • Default value: nomad-clients

nomad_consul_token

  • Token to use for consul interaction
  • Default value: ""
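
A sketch of the Consul integration variables combined, assuming a local Consul agent on its default port (the token lookup is a placeholder you would supply yourself):

nomad_use_consul: true
nomad_consul_address: "localhost:8500"
nomad_consul_ssl: false
nomad_consul_token: "{{ my_consul_token | default('') }}"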

nomad_bootstrap_expect

  • Specifies the number of server nodes to wait for before bootstrapping.
  • Default value: {{ nomad_servers | count or 3 }}

nomad_acl_enabled

  • Enable ACLs
  • Default value: no

nomad_acl_token_ttl

  • TTL for tokens
  • Default value: "30s"

nomad_acl_policy_ttl

  • TTL for policies
  • Default value: "30s"

nomad_acl_replication_token

  • Token to use for ACL replication on non-authoritative servers
  • Default value: ""
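
Enabling ACLs across the cluster typically combines these variables (the replication token variable is a placeholder you would supply yourself):

nomad_acl_enabled: yes
nomad_acl_token_ttl: "30s"
nomad_acl_policy_ttl: "30s"
nomad_acl_replication_token: "{{ my_acl_replication_token }}"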

nomad_vault_enabled

  • Enable vault
  • Default value: no

nomad_vault_address

  • Vault address to use
  • Default value: {{ vault_address | default('0.0.0.0') }}

nomad_vault_allow_unauthenticated

  • Allow users to use vault without providing their own token
  • Default value: yes

nomad_vault_create_from_role

  • Role to create tokens from
  • Default value: ""

nomad_vault_ca_file

  • Path of CA cert to use with vault
  • Default value: ""

nomad_vault_ca_path

  • Path of a folder containing CA cert(s) to use with vault
  • Default value: ""

nomad_vault_cert_file

  • Path to a certificate to use with vault
  • Default value: ""

nomad_vault_key_file

  • Path to a private key file to use with vault
  • Default value: ""

nomad_vault_tls_server_name

  • Optional string used to set SNI host when connecting to vault
  • Default value: ""

nomad_vault_tls_skip_verify

  • Specifies if SSL peer validation should be enforced
  • Default value: no

nomad_vault_token

  • Vault token used by nomad. Will only be installed on servers.
  • Default value: ""

nomad_vault_namespace

  • Vault namespace used by nomad
  • Default value: ""
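
The Vault variables above combine as follows (the address, role name and token variable are illustrative placeholders):

nomad_vault_enabled: yes
nomad_vault_address: "https://vault.service.consul:8200"
nomad_vault_create_from_role: "nomad-cluster"
nomad_vault_token: "{{ my_vault_token }}"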

nomad_docker_dmsetup

  • Run dmsetup on ubuntu (only if docker is enabled)
  • Default value: yes

nomad_tls_enable

  • Enable TLS
  • Default value: false

nomad_tls_copy_keys

  • Whether to copy certs from local machine (controller).
  • Default value: false

nomad_tls_files_remote_src

  • Whether to copy certs from remote machine itself.
  • Default value: false

nomad_tls_dir

  • The remote dir where the certs are stored.
  • Default value: /etc/nomad/ssl

nomad_ca_file

  • CA certificate used for the TLS connection; nomad_cert_file and nomad_key_file are also required
  • Default value: ca.cert

nomad_cert_file

  • Certificate used for the TLS connection; nomad_ca_file and nomad_key_file are also required
  • Default value: server.crt

nomad_key_file

  • Private key used for the TLS connection; nomad_ca_file and nomad_cert_file are also required
  • Default value: server.key

nomad_rpc_upgrade_mode

  • Used only while the cluster is being upgraded to TLS, and removed after the migration is complete; allows the agent to accept both TLS and plaintext traffic. Requires nomad_ca_file, nomad_cert_file and nomad_key_file.
  • Default value: false

nomad_verify_server_hostname

  • Specifies if outgoing TLS connections should verify the server's hostname.
  • Default value: true

nomad_verify_https_client

  • Specifies whether agents should require client certificates for all incoming HTTPS requests. The client certificates must be signed by the same CA as Nomad's.
  • Default value: true
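
A typical TLS setup that copies certificates from the controller might look like this (the certificate file names are illustrative and must exist relative to your playbook):

nomad_tls_enable: true
nomad_tls_copy_keys: true
nomad_tls_dir: /etc/nomad/ssl
nomad_ca_file: ca.cert
nomad_cert_file: server.crt
nomad_key_file: server.key
nomad_verify_server_hostname: true
nomad_verify_https_client: true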

nomad_telemetry

  • Specifies whether to enable Nomad's telemetry configuration.
  • Default value: false

nomad_telemetry_disable_hostname

  • Specifies if gauge values should be prefixed with the local hostname.
  • Default value: "false"

nomad_telemetry_collection_interval

  • Specifies the time interval at which the Nomad agent collects telemetry data.
  • Default value: "1s"

nomad_telemetry_use_node_name

  • Specifies if gauge values should be prefixed with the name of the node, instead of the hostname. If set it will override disable_hostname value.
  • Default value: "false"

nomad_telemetry_publish_allocation_metrics

  • Specifies if Nomad should publish runtime metrics of allocations.
  • Default value: "false"

nomad_telemetry_publish_node_metrics

  • Specifies if Nomad should publish runtime metrics of nodes.
  • Default value: "false"

nomad_telemetry_backwards_compatible_metrics

  • Specifies if Nomad should publish metrics that are backwards compatible with versions below 0.7, as post version 0.7, Nomad emits tagged metrics. All new metrics will only be added to tagged metrics. Note that this option is used to transition monitoring to tagged metrics and will eventually be deprecated.
  • Default value: "false"

nomad_telemetry_disable_tagged_metrics

  • Specifies if Nomad should not emit tagged metrics and only emit metrics compatible with versions below Nomad 0.7. Note that this option is used to transition monitoring to tagged metrics and will eventually be deprecated.
  • Default value: "false"

nomad_telemetry_filter_default

  • This controls whether to allow metrics that have not been specified by the filter. Defaults to true, which will allow all metrics when no filters are provided. When set to false with no filters, no metrics will be sent.
  • Default value: "true"

nomad_telemetry_prefix_filter

  • This is a list of filter rules to apply for allowing/blocking metrics by prefix. A leading "+" will enable any metrics with the given prefix, and a leading "-" will block them. If there is overlap between two rules, the more specific rule will take precedence. Blocking will take priority if the same prefix is listed multiple times.
  • Default value: []

nomad_telemetry_disable_dispatched_job_summary_metrics

  • Specifies if Nomad should ignore jobs dispatched from a parameterized job when publishing job summary statistics. Since each job has a small memory overhead for tracking summary statistics, it is sometimes desired to trade these statistics for more memory when dispatching high volumes of jobs.
  • Default value: "false"

nomad_telemetry_statsite_address

  • Specifies the address of a statsite server to forward metrics data to.
  • Default value: ""

nomad_telemetry_statsd_address

  • Specifies the address of a statsd server to forward metrics to.
  • Default value: ""

nomad_telemetry_datadog_address

  • Specifies the address of a DataDog statsd server to forward metrics to.
  • Default value: ""

nomad_telemetry_datadog_tags

  • Specifies a list of global tags that will be added to all telemetry packets sent to DogStatsD. It is a list of strings, where each string looks like "my_tag_name:my_tag_value".
  • Default value: []

nomad_telemetry_prometheus_metrics

  • Specifies whether the agent should make Prometheus formatted metrics available at /v1/metrics?format=prometheus.
  • Default value: "false"
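
For Prometheus scraping, the telemetry variables combine as follows (note that the role expects most of these values as quoted strings):

nomad_telemetry: true
nomad_telemetry_prometheus_metrics: "true"
nomad_telemetry_publish_allocation_metrics: "true"
nomad_telemetry_publish_node_metrics: "true"
nomad_telemetry_collection_interval: "10s"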

nomad_telemetry_circonus_api_token

  • Specifies a valid Circonus API Token used to create/manage check. If provided, metric management is enabled.
  • Default value: ""

nomad_telemetry_circonus_api_app

  • Specifies a valid app name associated with the API token.
  • Default value: "nomad"

nomad_telemetry_circonus_api_url

  • Specifies the base URL for contacting the Circonus API.
  • Default value: "https://api.circonus.com/v2"

nomad_telemetry_circonus_submission_interval

  • Specifies the interval at which metrics are submitted to Circonus.
  • Default value: "10s"

nomad_telemetry_circonus_submission_url

  • Specifies the check.config.submission_url field, of a Check API object, from a previously created HTTPTRAP check.
  • Default value: ""

nomad_telemetry_circonus_check_id

  • Specifies the Check ID (not check bundle) from a previously created HTTPTRAP check. The numeric portion of the check._cid field in the Check API object.
  • Default value: ""

nomad_telemetry_circonus_check_force_metric_activation

  • Specifies if force activation of metrics which already exist and are not currently active. If check management is enabled, the default behavior is to add new metrics as they are encountered. If the metric already exists in the check, it will not be activated. This setting overrides that behavior.
  • Default value: "false"

nomad_telemetry_circonus_check_instance_id

  • Serves to uniquely identify the metrics coming from this instance. It can be used to maintain metric continuity with transient or ephemeral instances as they move around within an infrastructure. By default, this is set to hostname:application name (e.g. "host123:nomad").
  • Default value: ""

nomad_telemetry_circonus_check_search_tag

  • Specifies a special tag which, when coupled with the instance id, helps to narrow down the search results when neither a Submission URL or Check ID is provided. By default, this is set to service:app (e.g. "service:nomad").
  • Default value: ""

nomad_telemetry_circonus_check_display_name

  • Specifies a name to give a check when it is created. This name is displayed in the Circonus UI Checks list.
  • Default value: ""

nomad_telemetry_circonus_check_tags

  • Comma separated list of additional tags to add to a check when it is created.
  • Default value: ""

nomad_telemetry_circonus_broker_id

  • Specifies the ID of a specific Circonus Broker to use when creating a new check. The numeric portion of the broker._cid field in a Broker API object. If metric management is enabled and neither a Submission URL nor Check ID is provided, an attempt will be made to search for an existing check using Instance ID and Search Tag. If one is not found, a new HTTPTRAP check will be created. By default, a random Enterprise Broker is selected, or the default Circonus Public Broker.
  • Default value: ""

nomad_telemetry_circonus_broker_select_tag

  • Specifies a special tag which will be used to select a Circonus Broker when a Broker ID is not provided. The best use of this is as a hint for which broker should be used based on where this particular instance is running (e.g. a specific geographic location or datacenter, dc:sfo).
  • Default value: ""

nomad_autopilot

  • Enable Nomad Autopilot
  • To enable Autopilot features (with the exception of dead server cleanup), the raft_protocol setting in the server stanza must be set to 3 on all servers, see parameter nomad_raft_protocol
  • Default value: false

nomad_autopilot_cleanup_dead_servers

  • Specifies automatic removal of dead server nodes periodically and whenever a new server is added to the cluster.
  • Default value: true

nomad_autopilot_last_contact_threshold

  • Specifies the maximum amount of time a server can go without contact from the leader before being considered unhealthy.
  • Default value: 200ms

nomad_autopilot_max_trailing_logs

  • Specifies the maximum number of log entries that a server can trail the leader by before being considered unhealthy.
  • Default value: 250

nomad_autopilot_server_stabilization_time

  • Specifies the minimum amount of time a server must be stable in the 'healthy' state before being added to the cluster. Only takes effect if all servers are running Raft protocol version 3 or higher.
  • Default value: 10s
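
Since Autopilot (apart from dead server cleanup) requires Raft protocol version 3, these settings are usually changed together:

nomad_raft_protocol: 3
nomad_autopilot: true
nomad_autopilot_cleanup_dead_servers: true
nomad_autopilot_server_stabilization_time: 10s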

Custom Configuration Section

Nomad loads configuration from files and directories in lexical order, merging each file on top of the previously parsed configuration. You can therefore set custom configuration via nomad_config_custom, which is expanded into a file named custom.json inside your nomad_config_dir and, by default, loaded after all other configuration.

An example usage for enabling vault:

  vars:
    nomad_config_custom:
      vault:
        enabled          : true
        ca_path          : "/etc/certs/ca"
        cert_file        : "/var/certs/vault.crt"
        key_file         : "/var/certs/vault.key"
        address          : "https://vault.service.consul:8200"
        create_from_role : "nomad-cluster"

Dependencies

Ansible requires GNU tar, and this role performs some local use of the unarchive module, so ensure that your system has gtar and unzip installed. The Jinja2 templates use the ipaddr filter, which requires the netaddr Python library.

Example Playbook

A basic Nomad installation is possible using the included site.yml playbook:

ansible-playbook -i <hosts> site.yml

You can also simply pass variables in using the --extra-vars option to the ansible-playbook command:

ansible-playbook -i hosts site.yml --extra-vars "nomad_datacenter=maui"

Vagrant and VirtualBox

See examples/README_VAGRANT.md for details on quick Vagrant deployments under VirtualBox for testing, etc.

Contributors

Special thanks to the folks listed in CONTRIBUTORS.md for their contributions to this project.

Contributions are welcome, provided that you can agree to the terms outlined in CONTRIBUTING.md


Download Details:

Author: ansible-community
Source Code: https://github.com/ansible-community/ansible-nomad

License: BSD-2-Clause license

#ansible 


Awesome Ansible List

A collaborative curated list of awesome Ansible resources, tools, Roles, tutorials and other related stuff.

Ansible is an open source toolkit, written in Python, it is used for configuration management, application deployment, continuous delivery, IT infrastructure automation and automation in general.

Official resources

Official resources by and for Ansible.

Community

Places where to chat with the Ansible community

Tutorials

Tutorials and courses to learn Ansible.

Books

Books about Ansible.

Videos

Video tutorials and Ansible training.

Tools

Tools for and using Ansible.

  • Ansible Tower - Ansible Tower by Red Hat helps you scale IT automation, manage complex deployments and speed productivity. Extend the power of Ansible to your entire team.
  • AWX - AWX provides a web-based user interface, REST API, and task engine built on top of Ansible. It is the upstream project for Tower, a commercial derivative of AWX.
  • Ansible Lint - Checks Playbooks for best practices and behavior that could potentially be improved.
  • Ansible Later - Another best practice scanner. Checks Playbooks and Roles for best practices and behavior that could potentially be improved.
  • Ansible Doctor - Simple annotation like documentation generator for Ansible roles based on Jinja2 templates.
  • Ansible cmdb - Takes the output of Ansible's fact gathering and converts it into a static HTML page.
  • ARA - ARA Records Ansible playbooks and makes them easier to understand and troubleshoot with a reporting API, UI and CLI.
  • Mitogen for Ansible - Speed up Ansible substantially with Mitogen.
  • Molecule - Molecule aids in the development and testing of Ansible roles.
  • Packer Ansible Provisioner - This Provisioner can be used to automate VM Image creation via Packer with Ansible.
  • Excel Ansible Inventory - Turn any Excel Spreadsheet into an Ansible Inventory.
  • terraform.py - Ansible dynamic inventory script for parsing Terraform state files.
  • ansible-navigator - A text-based user interface (TUI) for Ansible.
  • squest - Self-service portal for Ansible Tower job templates.
  • ansible-bender - Tool which bends containers using Ansible playbooks and turns them into container images.
  • ansible-runner - A tool and python library that helps when interfacing with Ansible directly or as part of another system whether that be through a container image interface, as a standalone tool, or as a Python module that can be imported.
  • ansible-builder - Using Ansible content that depends on non-default dependencies can be tricky. Packages must be installed on each node, play nicely with other software installed on the host system, and be kept in sync.
  • kics - SAST tool that scans your Ansible infrastructure-as-code playbooks for security vulnerabilities, compliance issues and misconfigurations.
  • php-ansible Library - OOP-Wrapper for Ansible, making Ansible available in PHP.
  • TD4A - Design aid for building and testing Jinja2 templates; combines data in YAML format with a Jinja2 template and renders the output.
  • Ansible Playbook Grapher - Command line tool to create a graph representing your Ansible playbook plays, tasks and roles.
  • ansible-doc-extractor - A tool that extracts documentation from Ansible modules in the HTML form.
  • Ansible Semaphore - Ansible Semaphore is a modern UI for Ansible.

Blog posts and opinions

Best practices and other opinions on Ansible.

German

Playbooks, Roles and Collections

Awesome production ready Playbooks, Roles and Collections to get you up and running.


Download Details:

Author: ansible-community
Source Code: https://github.com/ansible-community/awesome-ansible

License: CC0-1.0 license

#ansible 

Ansible-role-nginx: Ansible Role - Nginx

Note: Please consider using the official NGINX Ansible role from NGINX, Inc.

Installs Nginx on RedHat/CentOS, Debian/Ubuntu, Archlinux, FreeBSD or OpenBSD servers.

This role installs and configures the latest version of Nginx from the Nginx yum repository (on RedHat-based systems), apt (on Debian-based systems), pacman (Archlinux), pkgng (on FreeBSD systems) or pkg_add (on OpenBSD systems). You will likely need to do extra setup work after this role has installed Nginx, like adding your own [virtualhost].conf file inside /etc/nginx/conf.d/, describing the location and options to use for your particular website.

Requirements

None.

Role Variables

Available variables are listed below, along with default values (see defaults/main.yml):

nginx_listen_ipv6: true

Whether or not to listen on IPv6 (applied to all vhosts managed by this role).

nginx_vhosts: []

A list of vhost definitions (server blocks) for Nginx virtual hosts. Each entry will create a separate config file named by server_name. If left empty, you will need to supply your own virtual host configuration. See the commented example in defaults/main.yml for available server options. If you have a large number of customizations required for your server definition(s), you're likely better off managing the vhost configuration file yourself, leaving this variable set to [].

nginx_vhosts:
  - listen: "443 ssl http2"
    server_name: "example.com"
    server_name_redirect: "www.example.com"
    root: "/var/www/example.com"
    index: "index.php index.html index.htm"
    error_page: ""
    access_log: ""
    error_log: ""
    state: "present"
    template: "{{ nginx_vhost_template }}"
    filename: "example.com.conf"
    extra_parameters: |
      location ~ \.php$ {
          fastcgi_split_path_info ^(.+\.php)(/.+)$;
          fastcgi_pass unix:/var/run/php5-fpm.sock;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include fastcgi_params;
      }
      ssl_certificate     /etc/ssl/certs/ssl-cert-snakeoil.pem;
      ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
      ssl_protocols       TLSv1.1 TLSv1.2;
      ssl_ciphers         HIGH:!aNULL:!MD5;

An example of a fully-populated nginx_vhosts entry, using a | to declare a block of syntax for the extra_parameters.

Please take note of the indentation in the above block. The first line should be a normal 2-space indent. All other lines should be indented normally relative to that line. In the generated file, the entire block will be 4-space indented. This style will ensure the config file is indented correctly.

  - listen: "80"
    server_name: "example.com www.example.com"
    return: "301 https://example.com$request_uri"
    filename: "example.com.80.conf"

An example of a secondary vhost which will redirect to the one shown above.

Note: The filename defaults to the first domain in server_name. If you have two vhosts with the same domain (e.g. a redirect), you need to set filename manually so the second one doesn't override the first.

nginx_remove_default_vhost: false

Whether to remove the 'default' virtualhost configuration supplied by Nginx. Useful if you want the base / URL to be directed at one of your own virtual hosts configured in a separate .conf file.

nginx_upstreams: []

If you are configuring Nginx as a load balancer, you can define one or more upstream sets using this variable. In addition to defining at least one upstream, you would need to configure one of your server blocks to proxy requests through the defined upstream (e.g. proxy_pass http://myapp1;). See the commented example in defaults/main.yml for more information.
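
A sketch of an upstream definition (the name and backend hosts are illustrative; see the commented example in defaults/main.yml for the authoritative option list):

nginx_upstreams:
  - name: myapp1
    strategy: "least_conn"
    keepalive: 16
    servers:
      - "srv1.example.com"
      - "srv2.example.com:8080"

You would then point a vhost at it with proxy_pass http://myapp1; inside its extra_parameters.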

nginx_user: "nginx"

The user under which Nginx will run. Defaults to nginx for RedHat, www-data for Debian and www on FreeBSD and OpenBSD.

nginx_worker_processes: "{{ ansible_processor_vcpus|default(ansible_processor_count) }}"
nginx_worker_connections: "1024"
nginx_multi_accept: "off"

nginx_worker_processes should be set to the number of cores present on your machine (if the default is incorrect, find this number with grep processor /proc/cpuinfo | wc -l). nginx_worker_connections is the number of connections per process. Set this higher to handle more simultaneous connections (and remember that a connection will be used for as long as the keepalive timeout duration for every client!). You can set nginx_multi_accept to on if you want Nginx to accept all connections immediately.

nginx_error_log: "/var/log/nginx/error.log warn"
nginx_access_log: "/var/log/nginx/access.log main buffer=16k flush=2m"

Configuration of the default error and access logs. Set to off to disable a log entirely.

nginx_sendfile: "on"
nginx_tcp_nopush: "on"
nginx_tcp_nodelay: "on"

TCP connection options. See this blog post for more information on these directives.

nginx_keepalive_timeout: "65"
nginx_keepalive_requests: "100"

Nginx keepalive settings. Timeout should be set higher (10s+) if you have more polling-style traffic (AJAX-powered sites especially), or lower (<10s) if you have a site where most users visit a few pages and don't send any further requests.

nginx_server_tokens: "on"

Nginx server_tokens settings. Controls whether Nginx responds with its version in HTTP headers. Set to "off" to disable.

nginx_client_max_body_size: "64m"

This value determines the largest file upload possible, as uploads are passed through Nginx before hitting a backend like php-fpm. If you get an error like client intended to send too large body, it means this value is set too low.

nginx_server_names_hash_bucket_size: "64"

If you have many server names, or have very long server names, you might get an Nginx error on startup requiring this value to be increased.

nginx_proxy_cache_path: ""

Set as the proxy_cache_path directive in the nginx.conf file. By default, this will not be configured (if left as an empty string), but if you wish to use Nginx as a reverse proxy, you can set this to a valid value (e.g. "/var/cache/nginx keys_zone=cache:32m") to use Nginx's cache (further proxy configuration can be done in individual server configurations).

nginx_extra_http_options: ""

Extra lines to be inserted in the top-level http block in nginx.conf. The value should be defined literally (as you would insert it directly in the nginx.conf, adhering to the Nginx configuration syntax - such as ; for line termination, etc.), for example:

nginx_extra_http_options: |
  proxy_buffering    off;
  proxy_set_header   X-Real-IP $remote_addr;
  proxy_set_header   X-Scheme $scheme;
  proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header   Host $http_host;

See the template in templates/nginx.conf.j2 for more details on the placement.

nginx_extra_conf_options: ""

Extra lines to be inserted in the top of nginx.conf. The value should be defined literally (as you would insert it directly in the nginx.conf, adhering to the Nginx configuration syntax - such as ; for line termination, etc.), for example:

nginx_extra_conf_options: |
  worker_rlimit_nofile 8192;

See the template in templates/nginx.conf.j2 for more details on the placement.

nginx_log_format: |-
  '$remote_addr - $remote_user [$time_local] "$request" '
  '$status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"'

Configures Nginx's log_format options.

nginx_default_release: ""

(For Debian/Ubuntu only) Allows you to set a different repository for the installation of Nginx. As an example, if you are running Debian's wheezy release, and want to get a newer version of Nginx, you can install the wheezy-backports repository and set that value here, and Ansible will use that as the -t option while installing Nginx.
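
For example, to install from wheezy-backports (assuming you have already enabled that repository on the host):

```yaml
nginx_default_release: wheezy-backports
```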

nginx_ppa_use: false
nginx_ppa_version: stable

(For Ubuntu only) Allows you to use the official Nginx PPA instead of the system's package. You can set the version to stable or development.

nginx_yum_repo_enabled: true

(For RedHat/CentOS only) Set this to false to disable the installation of the nginx yum repository. This could be necessary if you want the default OS stable packages, or if you use Satellite.

nginx_service_state: started
nginx_service_enabled: yes

By default, this role will ensure Nginx is running and enabled at boot after Nginx is configured. You can use these variables to override this behavior if installing in a container or if further control over the service state is required.

Overriding configuration templates

If you can't customize via variables because an option isn't exposed, you can override the template used to generate the virtualhost configuration files or the nginx.conf file.

nginx_conf_template: "nginx.conf.j2"
nginx_vhost_template: "vhost.j2"

If necessary, you can also set the template on a per-vhost basis.

nginx_vhosts:
  - listen: "80 default_server"
    server_name: "site1.example.com"
    root: "/var/www/site1.example.com"
    index: "index.php index.html index.htm"
    template: "{{ playbook_dir }}/templates/site1.example.com.vhost.j2"
  - server_name: "site2.example.com"
    root: "/var/www/site2.example.com"
    index: "index.php index.html index.htm"
    template: "{{ playbook_dir }}/templates/site2.example.com.vhost.j2"

You can either copy and modify the provided template, or extend it with Jinja2 template inheritance and override the specific template block you need to change.

Example: Configure gzip in nginx configuration

Set the nginx_conf_template to point to a template file in your playbook directory.

nginx_conf_template: "{{ playbook_dir }}/templates/nginx.conf.j2"

Create the child template in the path you configured above and extend the geerlingguy.nginx template file, referenced relative to your playbook.yml.

{% extends 'roles/geerlingguy.nginx/templates/nginx.conf.j2' %}

{% block http_gzip %}
    gzip on;
    gzip_proxied any;
    gzip_static on;
    gzip_http_version 1.0;
    gzip_disable "MSIE [1-6]\.";
    gzip_vary on;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/javascript
        application/x-javascript
        application/json
        application/xml
        application/xml+rss
        application/xhtml+xml
        application/x-font-ttf
        application/x-font-opentype
        image/svg+xml
        image/x-icon;
    gzip_buffers 16 8k;
    gzip_min_length 512;
{% endblock %}

Dependencies

None.

Example Playbook

- hosts: server
  roles:
    - { role: geerlingguy.nginx }

Download Details:

Author: Geerlingguy
Source Code: https://github.com/geerlingguy/ansible-role-nginx 
License: MIT license

#ansible #role #nginx 

Nigel Uys

1673444520

Ansible-role-jenkins: Ansible Role - Jenkins CI

Ansible Role: Jenkins CI

Installs Jenkins CI on RHEL/CentOS and Debian/Ubuntu servers.

Requirements

Requires curl to be installed on the server. Also, newer versions of Jenkins require Java 8+ (see the test playbooks inside the molecule/default directory for an example of how to use newer versions of Java for your OS).

Role Variables

Available variables are listed below, along with default values (see defaults/main.yml):

jenkins_package_state: present

The state of the jenkins package install. By default this role installs Jenkins but will not upgrade Jenkins (when using package-based installs). If you want to always update to the latest version, change this to latest.

jenkins_hostname: localhost

The system hostname; usually localhost works fine. This will be used during setup to communicate with the running Jenkins instance via HTTP requests.

jenkins_home: /var/lib/jenkins

The Jenkins home directory which, amongst others, is being used for storing artifacts, workspaces and plugins. This variable allows you to override the default /var/lib/jenkins location.

jenkins_http_port: 8080

The HTTP port for Jenkins' web interface.

jenkins_admin_username: admin
jenkins_admin_password: admin

Default admin account credentials which will be created the first time Jenkins is installed.

jenkins_admin_password_file: ""

Default admin password file, which will be created the first time Jenkins is installed as /var/lib/jenkins/secrets/initialAdminPassword.

jenkins_jar_location: /opt/jenkins-cli.jar

The location at which the jenkins-cli.jar jarfile will be kept. This is used for communicating with Jenkins via the CLI.

jenkins_plugins:
  - blueocean
  - name: influxdb
    version: "1.12.1"

Jenkins plugins to be installed automatically during provisioning. Defaults to an empty list ([]). Items can be plain names, or dictionaries with name and version keys to pin a specific version of a plugin.

jenkins_plugins_install_dependencies: true

Whether plugins installed by this role should also install their plugin dependencies.

jenkins_plugins_state: present

Use latest to ensure all plugins are running the most up-to-date version. For any plugin that has a specific version set in the jenkins_plugins list, state present will be used instead of the jenkins_plugins_state value.

jenkins_plugin_updates_expiration: 86400

Number of seconds after which a new copy of the update-center.json file is downloaded. Set it to 0 if no cache file should be used.

jenkins_updates_url: "https://updates.jenkins.io"

The URL to use for Jenkins plugin updates and update-center information.

jenkins_plugin_timeout: 30

The server connection timeout, in seconds, when installing Jenkins plugins.

jenkins_version: "2.346"
jenkins_pkg_url: "http://www.example.com"

(Optional) The Jenkins version can be pinned to any version available on http://pkg.jenkins-ci.org/debian/ (Debian/Ubuntu) or http://pkg.jenkins-ci.org/redhat/ (RHEL/CentOS). If the Jenkins version you need is not available in the default package URLs, you can override the URL with your own by setting jenkins_pkg_url (note: the role depends on the same naming convention that http://pkg.jenkins-ci.org/ uses).

jenkins_url_prefix: ""

Used for setting a URL prefix for your Jenkins installation. The option is added as --prefix={{ jenkins_url_prefix }} to the Jenkins initialization java invocation, so you can access the installation at a path like http://www.example.com{{ jenkins_url_prefix }}. Make sure you start the prefix with a / (e.g. /jenkins).
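
For example, to serve Jenkins under a /jenkins path:

```yaml
jenkins_url_prefix: /jenkins
# Jenkins then answers at http://<jenkins_hostname>:<jenkins_http_port>/jenkins
```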

jenkins_connection_delay: 5
jenkins_connection_retries: 60

Amount of time and number of times to wait when connecting to Jenkins after initial startup, to verify that Jenkins is running. Total time to wait = delay * retries, so by default this role will wait up to 300 seconds before timing out.

jenkins_prefer_lts: false

By default, this role will install the latest version of Jenkins using the official repositories according to the platform. You can install the current LTS version instead by setting this to true.

The default repositories (listed below) can be overridden as well.

# For RedHat/CentOS:
jenkins_repo_url: https://pkg.jenkins.io/redhat{{ '-stable' if (jenkins_prefer_lts | bool) else '' }}/jenkins.repo
jenkins_repo_key_url: https://pkg.jenkins.io/redhat{{ '-stable' if (jenkins_prefer_lts | bool) else '' }}/jenkins.io.key

# For Debian/Ubuntu:
jenkins_repo_url: deb https://pkg.jenkins.io/debian{{ '-stable' if (jenkins_prefer_lts | bool) else '' }} binary/
jenkins_repo_key_url: https://pkg.jenkins.io/debian{{ '-stable' if (jenkins_prefer_lts | bool) else '' }}/jenkins.io.key

It is also possible to prevent the repo file from being added by setting jenkins_repo_url: ''. This is useful if, for example, you sign your own packages or run internal package management (e.g. Spacewalk).

jenkins_options: ""

Extra options to pass to Jenkins on startup (e.g. setting the HTTP keep-alive timeout), set via JENKINS_OPTS in the systemd override.conf file. By default, no options are specified.
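
As a sketch, the keep-alive timeout mentioned above could be set like this (the --httpKeepAliveTimeout flag is a Winstone/Jenkins launcher option; verify it against your Jenkins version):

```yaml
jenkins_options: "--httpKeepAliveTimeout=30000"
```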

jenkins_java_options: "-Djenkins.install.runSetupWizard=false"

Extra Java options for the Jenkins launch command, set via JAVA_OPTS in the systemd override.conf file. For example, to configure the timezone Jenkins uses, add -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/New_York. By default, the option to disable the Jenkins 2.0 setup wizard is added.

jenkins_init_changes:
  - option: "JENKINS_OPTS"
    value: "{{ jenkins_options }}"
  - option: "JAVA_OPTS"
    value: "{{ jenkins_java_options }}"
  - option: "JENKINS_HOME"
    value: "{{ jenkins_home }}"
  - option: "JENKINS_PREFIX"
    value: "{{ jenkins_url_prefix }}"
  - option: "JENKINS_PORT"
    value: "{{ jenkins_http_port }}"

Changes made to the Jenkins systemd override.conf file; the default set of changes set the configured URL prefix, Jenkins home directory, Jenkins port and adds the configured Jenkins and Java options for Jenkins' startup. You can add other option/value pairs if you need to set other options for the Jenkins systemd override.conf file.

jenkins_proxy_host: ""
jenkins_proxy_port: ""
jenkins_proxy_noproxy:
  - "127.0.0.1"
  - "localhost"

If you are running Jenkins behind a proxy server, configure these options appropriately. Otherwise Jenkins will be configured with a direct Internet connection.
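
A minimal sketch for a proxied environment (the proxy host and port are placeholders):

```yaml
jenkins_proxy_host: "proxy.example.com"
jenkins_proxy_port: "3128"
jenkins_proxy_noproxy:
  - "127.0.0.1"
  - "localhost"
```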

Dependencies

None.

Example Playbook

- hosts: jenkins
  become: true
  
  vars:
    jenkins_hostname: jenkins.example.com
    java_packages:
      - openjdk-8-jdk

  roles:
    - role: geerlingguy.java
    - role: geerlingguy.jenkins

Note that java_packages may need different versions depending on your distro (e.g. openjdk-11-jdk for Debian 10, or java-1.8.0-openjdk for RHEL 7 or 8).

Download Details:

Author: Geerlingguy
Source Code: https://github.com/geerlingguy/ansible-role-jenkins 
License: MIT license

#ansible #jenkins #ci #role 

Fabiola Auma

1667906100

Ansible Nomad: Ansible Role for Nomad

Ansible-Nomad


This role was previously maintained by Brian Shumate and is now curated by @ansible-community/hashicorp-tools.


This Ansible role performs a basic Nomad installation, including filesystem structure and example configuration.

It will also bootstrap a minimal cluster of 3 server nodes, and can do this in a development environment based on Vagrant and VirtualBox. See README_VAGRANT.md for more details about the Vagrant setup.

Requirements

This role requires an Arch Linux, Debian, RHEL, or Ubuntu distribution; the role is tested with the following specific software versions:

  • Ansible: 2.7.10
  • nomad: 0.12.1
  • Arch Linux
  • CentOS: 7
  • Debian: 8
  • RHEL: 7
  • Ubuntu: 16.04
  • unzip for unarchive module

Role Variables

The role defines most of its variables in defaults/main.yml:

nomad_debug

  • Nomad debug mode
  • Default value: no

nomad_skip_ensure_all_hosts

  • Allow running the role even if not all instances are connected
  • Default value: no

nomad_allow_purge_config

  • Allow purging obsolete configuration files. For example, remove server configuration if instance is no longer a server
  • Default value: no

nomad_version

  • Nomad version to install
  • Default value: 1.1.1

nomad_architecture_map

  • This variable does not need to be changed in most cases
  • Default value: Dictionary translating ansible_architecture to HashiCorp architecture naming convention

nomad_architecture

  • Host architecture
  • Default value: determined by {{ nomad_architecture_map[ansible_architecture] }}

nomad_pkg

  • Nomad package filename
  • Default value: nomad_{{ nomad_version }}_linux_{{ nomad_architecture }}.zip

nomad_zip_url

  • Nomad download URL
  • Default value: https://releases.hashicorp.com/nomad/{{ nomad_version }}/nomad_{{ nomad_version }}_linux_{{ nomad_architecture }}.zip

nomad_checksum_file_url

  • Nomad checksum file URL
  • Default value: https://releases.hashicorp.com/nomad/{{ nomad_version }}/nomad_{{ nomad_version }}_SHA256SUMS

nomad_bin_dir

  • Nomad binary installation path
  • Default value: /usr/local/bin

nomad_config_dir

  • Nomad configuration file path
  • Default value: /etc/nomad.d

nomad_data_dir

  • Nomad data path
  • Default value: /var/nomad

nomad_lockfile

  • Nomad lockfile path
  • Default value: /var/lock/subsys/nomad

nomad_run_dir

  • Nomad run path
  • Default value: /var/run/nomad

nomad_manage_user

  • Manage Nomad user?
  • Default value: yes

nomad_user

  • Nomad OS username
  • Default value: root

nomad_manage_group

  • Manage Nomad group?
  • Default value: no

nomad_group

  • Nomad OS group
  • Default value: bin

nomad_region

  • Default region
  • Default value: global

nomad_datacenter

  • Nomad datacenter label
  • Default value: dc1

nomad_log_level

  • Logging level
  • Default value: INFO

nomad_syslog_enable

  • Log to syslog
  • Default value: true

nomad_iface

  • Nomad network interface
  • Default value: {{ ansible_default_ipv4.interface }}

nomad_node_name

  • Nomad node name
  • Default value: {{ inventory_hostname_short }}

nomad_node_role

  • Nomad node role
  • options: client, server, both
  • Default value: client

nomad_leave_on_terminate

  • Send leave on termination
  • Default value: yes

nomad_leave_on_interrupt

  • Send leave on interrupt
  • Default value: no

nomad_disable_update_check

  • Disable update check
  • Default value: no

nomad_retry_max

  • Max retry join attempts
  • Default value: 0

nomad_retry_join

  • Enable retry join?
  • Default value: no

nomad_retry_interval

  • Retry join interval
  • Default value: 30s

nomad_rejoin_after_leave

  • Rejoin after leave?
  • Default value: no

nomad_enabled_schedulers

  • List of enabled schedulers
  • Default value: service, batch, system

nomad_num_schedulers

  • Number of schedulers
  • Default value: {{ ansible_processor_vcpus }}

nomad_node_gc_threshold

  • Node garbage collection threshold
  • Default value: 24h

nomad_job_gc_threshold

  • Job garbage collection threshold
  • Default value: 4h

nomad_eval_gc_threshold

  • Eval garbage collection threshold
  • Default value: 1h

nomad_deployment_gc_threshold

  • Deployment garbage collection threshold
  • Default value: 1h

nomad_encrypt_enable

  • Enable Gossip Encryption even if nomad_encrypt is not set
  • Default value: false

nomad_encrypt

  • Set the encryption key; it should be the same across a cluster. If not present and nomad_encrypt_enable is true, the key will be generated and retrieved from the bootstrapped server.
  • Default value: ""
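
A sketch of enabling gossip encryption with a pre-generated key (the key below is a placeholder; generate a real one, e.g. with nomad operator keygen):

```yaml
nomad_encrypt_enable: true
# Placeholder key - replace with the output of `nomad operator keygen`
nomad_encrypt: "cg8StVXbQJ0gPvMd9o7yrg=="
```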

nomad_raft_protocol

  • Specifies the version of the Raft protocol used by Nomad servers for communication
  • Default value: 2

nomad_authoritative_region

  • Specifies the authoritative region, which provides a single source of truth for global configurations such as ACL Policies and global ACL tokens.
  • Default value: ""

nomad_node_class

  • Nomad node class
  • Default value: ""

nomad_no_host_uuid

  • Force the UUID generated by the client to be randomly generated
  • Default value: no

nomad_max_kill_timeout

  • Max kill timeout
  • Default value: 30s

nomad_network_interface

  • The Nomad scheduler will choose from the IPs of this interface when allocating tasks
  • Default value: none

nomad_network_speed

  • Override network link speed (0 = no override)
  • Default value: 0

nomad_cpu_total_compute

  • Override CPU compute (0 = no override)
  • Default value: 0

nomad_gc_interval

  • Client garbage collection interval
  • Default value: 1m

nomad_gc_disk_usage_threshold

  • Disk usage threshold percentage for garbage collection
  • Default value: 80

nomad_gc_inodes_usage_threshold

  • Inode usage threshold percentage for garbage collection
  • Default value: 70

nomad_gc_parallel_destroys

  • Garbage collection max parallel destroys
  • Default value: 2

nomad_reserved

  • Reserved client resources
  • Default value: cpu: {{ nomad_reserved_cpu }}, memory: {{ nomad_reserved_memory }}, disk: {{ nomad_reserved_disk }}, ports: {{ nomad_reserved_ports }}

nomad_reserved_cpu

  • Reserved client CPU
  • Default value: 0

nomad_reserved_memory

  • Reserved client memory
  • Default value: 0

nomad_reserved_disk

  • Reserved client disk
  • Default value: 0

nomad_reserved_ports

  • Reserved client ports
  • Default value: 22

nomad_host_volumes

  • List of host_volume definitions, used to make volumes available to jobs (stateful workloads).
  • Default value: []
  • Example:
nomad_host_volumes:
  - name: data
    path: /var/data
    owner: root
    group: bin
    mode: 0755
    read_only: false
  - name: config
    path: /etc/conf
    owner: root
    group: bin
    mode: 0644
    read_only: false

nomad_host_networks

  • List of host_network definitions, used to make different networks available to jobs instead of selecting a default interface. This is especially useful with multiple NICs.
  • Default value: []
  • Example:
nomad_host_networks:
  - name: public
    cidr: 100.101.102.103/24
    reserved_ports: 22,80
  - name: private
    interface: eth0
    reserved_ports: 443

nomad_options

  • Driver options
  • Key value dict
  • Default value: {}

nomad_chroot_env

  • chroot environment definition for the Exec and Java drivers
  • Key value dict
  • Default value: false

nomad_meta

  • Meta data
  • Key value dict
  • Default value: {}

nomad_bind_address

  • Bind interface address
  • Default value: {{ hostvars[inventory_hostname]['ansible_'+ nomad_iface ]['ipv4']['address'] }}

nomad_advertise_address

  • Network interface address to advertise to other nodes
  • Default value: {{ hostvars[inventory_hostname]['ansible_'+ nomad_iface ]['ipv4']['address'] }}

nomad_ports

  • Ports used by Nomad
  • Default value: http: {{ nomad_ports_http }}, rpc: {{ nomad_ports_rpc }}, serf: {{ nomad_ports_serf }}

nomad_ports_http

  • HTTP port
  • Default value: 4646

nomad_ports_rpc

  • RPC port
  • Default value: 4647

nomad_ports_serf

  • Serf port
  • Default value: 4648

nomad_podman_enable

  • Installs the podman plugin
  • Default value: false

nomad_cni_enable

  • Installs the cni plugins
  • Default value: false

nomad_docker_enable

  • Install Docker subsystem on nodes?
  • Default value: false

nomad_plugins

  • Allows you to configure Nomad plugins.
  • Default: {}

Example:

nomad_plugins:
  nomad-driver-podman:
    config:
      volumes:
        enabled: true
        selinuxlabel: z
      recover_stopped: true

nomad_group_name

  • Ansible group that contains all cluster nodes
  • Default value: nomad_instances

nomad_servers

It's typically not necessary to manually alter this list.

  • List of server nodes
  • Default value: List of all nodes in nomad_group_name with nomad_node_role set to server or both

nomad_gather_server_facts

This feature makes it possible to gather the nomad_bind_address and nomad_advertise_address from servers that are currently not targeted by the playbook.

To make this possible the delegate_facts option is used. This option is broken in many Ansible versions, so this feature might not always work.

  • Gather facts from servers that are not currently targeted
  • Default value: 'no'

nomad_use_consul

  • Bootstrap Nomad via Consul's native zero-configuration support; assumes Consul default ports, etc.
  • Default value: False

nomad_consul_address

  • The address of your Consul API; use in combination with nomad_use_consul=True. If you want to use HTTPS, set nomad_consul_ssl. Do NOT include the https:// scheme.
  • Default value: localhost:8500

nomad_consul_ssl

  • If true, uses HTTPS.
  • Default value: false

nomad_consul_ca_file

  • Public key of consul CA, use in combination with nomad_consul_cert_file and nomad_consul_key_file.
  • Default value: ""

nomad_consul_cert_file

  • The public key which can be used to access consul.
  • Default value: ""

nomad_consul_key_file

  • The private key counterpart of nomad_consul_cert_file.
  • Default value: ""

nomad_consul_servers_service_name

  • The name of the consul service for your nomad servers
  • Default value: nomad-servers

nomad_consul_clients_service_name

  • The name of the consul service for your nomad clients
  • Default value: nomad-clients

nomad_consul_token

  • Token to use for consul interaction
  • Default value: ""
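
Putting the Consul options together, a minimal sketch for bootstrapping via a local Consul agent (the values shown are the defaults, with the integration switched on):

```yaml
nomad_use_consul: true
nomad_consul_address: "localhost:8500"  # no scheme prefix
nomad_consul_ssl: false
```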

nomad_bootstrap_expect

  • Specifies the number of server nodes to wait for before bootstrapping.
  • Default value: {{ nomad_servers | count or 3 }}

nomad_acl_enabled

  • Enable ACLs
  • Default value: no

nomad_acl_token_ttl

  • TTL for tokens
  • Default value: "30s"

nomad_acl_policy_ttl

  • TTL for policies
  • Default value: "30s"

nomad_acl_replication_token

  • Token to use for ACL replication on non-authoritative servers
  • Default value: ""

nomad_vault_enabled

  • Enable vault
  • Default value: no

nomad_vault_address

  • Vault address to use
  • Default value: {{ vault_address | default('0.0.0.0') }}

nomad_vault_allow_unauthenticated

  • Allow users to use vault without providing their own token
  • Default value: yes

nomad_vault_create_from_role

  • Role to create tokens from
  • Default value: ""

nomad_vault_ca_file

  • Path of CA cert to use with vault
  • Default value: ""

nomad_vault_ca_path

  • Path of a folder containing CA cert(s) to use with vault
  • Default value: ""

nomad_vault_cert_file

  • Path to a certificate to use with vault
  • Default value: ""

nomad_vault_key_file

  • Path to a private key file to use with vault
  • Default value: ""

nomad_vault_tls_server_name

  • Optional string used to set SNI host when connecting to vault
  • Default value: ""

nomad_vault_tls_skip_verify

  • Specifies if SSL peer validation should be enforced
  • Default value: no

nomad_vault_token

  • Vault token used by nomad. Will only be installed on servers.
  • Default value: ""

nomad_vault_namespace

  • Vault namespace used by nomad
  • Default value: ""

nomad_docker_enable

  • Enable docker
  • Default value: no

nomad_docker_dmsetup

  • Run dmsetup on ubuntu (only if docker is enabled)
  • Default value: yes

nomad_tls_enable

  • Enable TLS
  • Default value: false

nomad_tls_copy_keys

  • Whether to copy certs from local machine (controller).
  • Default value: false

nomad_tls_files_remote_src

  • Whether to copy certs from remote machine itself.
  • Default value: false

nomad_tls_dir

  • The remote dir where the certs are stored.
  • Default value: /etc/nomad/ssl

nomad_ca_file

  • Use a ca for tls connection, nomad_cert_file and nomad_key_file are needed
  • Default value: ca.cert

nomad_cert_file

  • Use a certificate for tls connection, nomad_ca_file and nomad_key_file are needed
  • Default value: server.crt

nomad_key_file

  • Use a key for tls connection, nomad_cert_file and nomad_key_file are needed
  • Default value: server.key

nomad_rpc_upgrade_mode

  • Requires nomad_ca_file, nomad_cert_file, and nomad_key_file. Used only while the cluster is being upgraded to TLS and removed after the migration is complete; it allows the agent to accept both TLS and plaintext traffic.
  • Default value: false

nomad_verify_server_hostname

  • Requires nomad_cert_file and nomad_key_file. Specifies whether outgoing TLS connections should verify the server's hostname.
  • Default value: true

nomad_verify_https_client

  • Requires nomad_cert_file and nomad_key_file. Specifies whether agents should require client certificates for all incoming HTTPS requests. The client certificates must be signed by the same CA as Nomad's.
  • Default value: true

nomad_telemetry

  • Specifies whether to enable Nomad's telemetry configuration.
  • Default value: false

nomad_telemetry_disable_hostname

  • Specifies if gauge values should be prefixed with the local hostname.
  • Default value: "false"

nomad_telemetry_collection_interval

  • Specifies the time interval at which the Nomad agent collects telemetry data.
  • Default value: "1s"

nomad_telemetry_use_node_name

  • Specifies if gauge values should be prefixed with the name of the node instead of the hostname. If set, it overrides the disable_hostname value.
  • Default value: "false"

nomad_telemetry_publish_allocation_metrics

  • Specifies if Nomad should publish runtime metrics of allocations.
  • Default value: "false"

nomad_telemetry_publish_node_metrics

  • Specifies if Nomad should publish runtime metrics of nodes.
  • Default value: "false"

nomad_telemetry_backwards_compatible_metrics

  • Specifies if Nomad should publish metrics that are backwards compatible with versions below 0.7, as post version 0.7, Nomad emits tagged metrics. All new metrics will only be added to tagged metrics. Note that this option is used to transition monitoring to tagged metrics and will eventually be deprecated.
  • Default value: "false"

nomad_telemetry_disable_tagged_metrics

  • Specifies if Nomad should not emit tagged metrics and only emit metrics compatible with versions below Nomad 0.7. Note that this option is used to transition monitoring to tagged metrics and will eventually be deprecated.
  • Default value: "false"

nomad_telemetry_filter_default

  • This controls whether to allow metrics that have not been specified by the filter. Defaults to true, which will allow all metrics when no filters are provided. When set to false with no filters, no metrics will be sent.
  • Default value: "true"

nomad_telemetry_prefix_filter

  • This is a list of filter rules to apply for allowing/blocking metrics by prefix. A leading "+" will enable any metrics with the given prefix, and a leading "-" will block them. If there is overlap between two rules, the more specific rule will take precedence. Blocking will take priority if the same prefix is listed multiple times.
  • Default value: []
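
A sketch combining the filter options (the metric prefixes are illustrative; check the Nomad telemetry documentation for the ones you need):

```yaml
nomad_telemetry: true
nomad_telemetry_filter_default: "false"
nomad_telemetry_prefix_filter:
  - "+nomad.raft"              # allow Raft metrics
  - "-nomad.raft.replication"  # ...but block the more specific replication metrics
```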

nomad_telemetry_disable_dispatched_job_summary_metrics

  • Specifies if Nomad should ignore jobs dispatched from a parameterized job when publishing job summary statistics. Since each job has a small memory overhead for tracking summary statistics, it is sometimes desired to trade these statistics for more memory when dispatching high volumes of jobs.
  • Default value: "false"

nomad_telemetry_statsite_address

  • Specifies the address of a statsite server to forward metrics data to.
  • Default value: ""

nomad_telemetry_statsd_address

  • Specifies the address of a statsd server to forward metrics to.
  • Default value: ""

nomad_telemetry_datadog_address

  • Specifies the address of a DataDog statsd server to forward metrics to.
  • Default value: ""

nomad_telemetry_datadog_tags

  • Specifies a list of global tags that will be added to all telemetry packets sent to DogStatsD. It is a list of strings, where each string looks like "my_tag_name:my_tag_value".
  • Default value: []

nomad_telemetry_prometheus_metrics

  • Specifies whether the agent should make Prometheus formatted metrics available at /v1/metrics?format=prometheus.
  • Default value: "false"

nomad_telemetry_circonus_api_token

  • Specifies a valid Circonus API Token used to create/manage check. If provided, metric management is enabled.
  • Default value: ""

nomad_telemetry_circonus_api_app

  • Specifies a valid app name associated with the API token.
  • Default value: "nomad"

nomad_telemetry_circonus_api_url

  • Specifies the base URL to use for contacting the Circonus API.
  • Default value: "https://api.circonus.com/v2"

nomad_telemetry_circonus_submission_interval

  • Specifies the interval at which metrics are submitted to Circonus.
  • Default value: "10s"

nomad_telemetry_circonus_submission_url

  • Specifies the check.config.submission_url field, of a Check API object, from a previously created HTTPTRAP check.
  • Default value: ""

nomad_telemetry_circonus_check_id

  • Specifies the Check ID (not check bundle) from a previously created HTTPTRAP check. The numeric portion of the check._cid field in the Check API object.
  • Default value: ""

nomad_telemetry_circonus_check_force_metric_activation

  • Specifies whether to force activation of metrics which already exist and are not currently active. If check management is enabled, the default behavior is to add new metrics as they are encountered. If the metric already exists in the check, it will not be activated. This setting overrides that behavior.
  • Default value: "false"

nomad_telemetry_circonus_check_instance_id

  • Serves to uniquely identify the metrics coming from this instance. It can be used to maintain metric continuity with transient or ephemeral instances as they move around within an infrastructure. By default, this is set to hostname:application name (e.g. "host123:nomad").
  • Default value: ""

nomad_telemetry_circonus_check_search_tag

  • Specifies a special tag which, when coupled with the instance id, helps to narrow down the search results when neither a Submission URL or Check ID is provided. By default, this is set to service:app (e.g. "service:nomad").
  • Default value: ""

nomad_telemetry_circonus_check_display_name

  • Specifies a name to give a check when it is created. This name is displayed in the Circonus UI Checks list.
  • Default value: ""

nomad_telemetry_circonus_check_tags

  • Comma separated list of additional tags to add to a check when it is created.
  • Default value: ""

nomad_telemetry_circonus_broker_id

  • Specifies the ID of a specific Circonus Broker to use when creating a new check. The numeric portion of the broker._cid field in a Broker API object. If metric management is enabled and neither a Submission URL nor Check ID is provided, an attempt will be made to search for an existing check using Instance ID and Search Tag. If one is not found, a new HTTPTRAP check will be created. By default, a random Enterprise Broker is selected, or, failing that, the default Circonus Public Broker.
  • Default value: ""

nomad_telemetry_circonus_broker_select_tag

  • Specifies a special tag which will be used to select a Circonus Broker when a Broker ID is not provided. This is best used as a hint for which broker should be chosen based on where this particular instance is running (e.g. a specific geographic location or datacenter, such as dc:sfo).
  • Default value: ""
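
The Circonus variables above all default to empty strings and only need to be set when that telemetry sink is used. A minimal sketch of wiring a few of them together (the nomad_telemetry master switch and all values shown are illustrative assumptions, not role defaults):

```yaml
vars:
  nomad_telemetry: true  # assumed master switch for the telemetry stanza
  # Uniquely identify this instance's metrics across ephemeral hosts
  nomad_telemetry_circonus_check_instance_id: "host123:nomad"
  # Narrow the check search when no Submission URL or Check ID is given
  nomad_telemetry_circonus_check_search_tag: "service:nomad"
  # Extra tags attached to a newly created check
  nomad_telemetry_circonus_check_tags: "datacenter:dc1,env:prod"
  # Hint for broker selection when no Broker ID is provided
  nomad_telemetry_circonus_broker_select_tag: "dc:sfo"
```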

nomad_autopilot

  • Enable Nomad Autopilot
  • To enable Autopilot features (with the exception of dead server cleanup), the raft_protocol setting in the server stanza must be set to 3 on all servers, see parameter nomad_raft_protocol
  • Default value: false

nomad_autopilot_cleanup_dead_servers

  • Specifies whether dead server nodes are removed automatically, both periodically and whenever a new server is added to the cluster.
  • Default value: true

nomad_autopilot_last_contact_threshold

  • Specifies the maximum amount of time a server can go without contact from the leader before being considered unhealthy.
  • Default value: 200ms

nomad_autopilot_max_trailing_logs

  • Specifies the maximum number of log entries that a server can trail the leader by before being considered unhealthy.
  • Default value: 250

nomad_autopilot_server_stabilization_time

  • Specifies the minimum amount of time a server must be stable in the 'healthy' state before being added to the cluster. Only takes effect if all servers are running Raft protocol version 3 or higher.
  • Default value: 10s
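
Putting the Autopilot variables together, a hedged sketch of enabling Autopilot with tuned thresholds (the values shown are illustrative, not recommendations):

```yaml
vars:
  nomad_autopilot: true
  # Autopilot features other than dead server cleanup require Raft protocol 3
  nomad_raft_protocol: 3
  nomad_autopilot_cleanup_dead_servers: true
  # Tolerate slower leader contact before marking a server unhealthy
  nomad_autopilot_last_contact_threshold: "500ms"
  nomad_autopilot_max_trailing_logs: 500
  # Require longer stability before admitting a new server
  nomad_autopilot_server_stabilization_time: "30s"
```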

Custom Configuration Section

Because Nomad loads configuration from files and directories in lexical order, merging each file on top of previously parsed configuration, you may set custom configuration via nomad_config_custom. Its contents are expanded into a file named custom.json within your nomad_config_dir, which by default is loaded after all other configuration.

An example usage for enabling vault:

  vars:
    nomad_config_custom:
      vault:
        enabled          : true
        ca_path          : "/etc/certs/ca"
        cert_file        : "/var/certs/vault.crt"
        key_file         : "/var/certs/vault.key"
        address          : "https://vault.service.consul:8200"
        create_from_role : "nomad-cluster"

Dependencies

Ansible requires GNU tar, and this role performs some local use of the unarchive module, so ensure that your system has gtar and unzip installed. The Jinja2 templates use the ipaddr filter, which requires the netaddr Python library.

Example Playbook

Basic nomad installation is possible using the included site.yml playbook:

ansible-playbook -i <hosts> site.yml

You can also simply pass variables in using the --extra-vars option to the ansible-playbook command:

ansible-playbook -i hosts site.yml --extra-vars "nomad_datacenter=maui"
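
The role can also be applied from your own playbook instead of the bundled site.yml. A minimal sketch, assuming the role is installed under the name ansible-nomad and your inventory has a nomad_instances group (both names are assumptions):

```yaml
---
# deploy-nomad.yml -- minimal sketch using this role
- hosts: nomad_instances
  become: true
  roles:
    - role: ansible-nomad
  vars:
    nomad_datacenter: maui
    nomad_version: "1.1.1"
```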

Vagrant and VirtualBox

See examples/README_VAGRANT.md for details on quick Vagrant deployments under VirtualBox for testing, etc.

Contributors

Special thanks to the folks listed in CONTRIBUTORS.md for their contributions to this project.

Contributions are welcome, provided that you can agree to the terms outlined in CONTRIBUTING.md


Download Details:

Author: ansible-community
Source Code: https://github.com/ansible-community/ansible-nomad

License: BSD-2-Clause license

#ansible 

Testing Ansible Roles for Multiple Hosts or Clusters with Molecule

In the era of Big Data, many popular tools are built to scale out by splitting the workload over multiple hosts. Tools like Hadoop, Spark, Zookeeper, Solr, and MongoDB can be deployed in standalone and clustered modes. Writing and testing Ansible roles for these tools is difficult, especially if you are writing and testing the Ansible roles on your personal desktop or laptop with limited resources.

In order to write and test Ansible roles for multiple hosts/clusters, it is important to use tools that are efficient and automated; otherwise you will spend your time manually creating and destroying your development and test environments. Luckily, the Molecule project for Ansible is the perfect tool for this use case, because it automates every part of the development and testing lifecycle and uses lightweight Docker containers to quickly create and destroy the test environment. Unfortunately, the Molecule documentation isn’t clear about how to set up multiple development and test scenarios (standalone & cluster), or how to set up multiple hosts and configure the host groups and host variables.

In this example, I will show how I used Molecule for creating and testing an Ansible Role for Apache Zookeeper that will deploy Zookeeper in either standalone or clustered mode. The full Ansible role is at https://github.com/kevincoakley/ansible-role-zookeeper .


First, I will show how to create one scenario for standalone mode and one scenario for clustered mode. We will keep the default scenario for standalone mode and create a second scenario for clustered mode. Simply copy the molecule/default directory and all of its files to molecule/cluster.

.
└── molecule
    ├── cluster
    │   ├── converge.yml
    │   ├── molecule.yml
    │   └── verify.yml
    ├── default
    │   ├── converge.yml
    │   ├── molecule.yml
    │   └── verify.yml
    └── yaml-lint.yml

Once that is done, edit molecule/cluster/molecule.yml and update the name variable under the scenario YAML node to cluster, like so:

scenario:
  name: cluster

That is it! If you want to test the default scenario you can run molecule test -s default, run the cluster scenario with molecule test -s cluster, or run both with molecule test --all.
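
For the cluster scenario to exercise more than one host, the platforms list in molecule/cluster/molecule.yml can define several containers and assign them to an inventory group. A sketch, assuming a Docker driver with a CentOS 7 image (the container names and the zookeeper group name are illustrative):

```yaml
platforms:
  # Three containers placed in the same inventory group so the role
  # can template a clustered configuration across all of them
  - name: zookeeper-1
    image: centos:7
    groups:
      - zookeeper
  - name: zookeeper-2
    image: centos:7
    groups:
      - zookeeper
  - name: zookeeper-3
    image: centos:7
    groups:
      - zookeeper
```

Group variables for these hosts can then be set under provisioner.inventory in the same molecule.yml.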

#cluster #ansible #ansible-roles #multi-host #molecule #testing