This role was previously maintained by Brian Shumate and is now curated by @ansible-community/hashicorp-tools.
This Ansible role performs a basic Nomad installation, including filesystem structure and example configuration.
It will also bootstrap a minimal cluster of 3 server nodes, and can do this in a development environment based on Vagrant and VirtualBox. See README_VAGRANT.md for more details about the Vagrant setup.
This role requires an Arch Linux, Debian, RHEL, or Ubuntu distribution; the role is tested with the following specific software versions:
The role defines most of its variables in defaults/main.yml:
nomad_debug
nomad_skip_ensure_all_hosts
nomad_allow_purge_config
nomad_version
nomad_architecture_map
nomad_architecture: {{ nomad_architecture_map[ansible_architecture] }}
nomad_pkg: nomad_{{ nomad_version }}_linux_{{ nomad_architecture }}.zip
nomad_zip_url: https://releases.hashicorp.com/nomad/{{ nomad_version }}/nomad_{{ nomad_version }}_linux_{{ nomad_architecture }}.zip
nomad_checksum_file_url: https://releases.hashicorp.com/nomad/{{ nomad_version }}/nomad_{{ nomad_version }}_SHA256SUMS
nomad_bin_dir: /usr/local/bin
nomad_config_dir: /etc/nomad.d
nomad_data_dir: /var/nomad
nomad_lockfile: /var/lock/subsys/nomad
nomad_run_dir: /var/run/nomad
nomad_manage_user
nomad_user
nomad_manage_group
nomad_group
nomad_region
nomad_datacenter
nomad_log_level
nomad_syslog_enable
nomad_iface: {{ ansible_default_ipv4.interface }}
nomad_node_name: {{ inventory_hostname_short }}
nomad_node_role
nomad_leave_on_terminate
nomad_leave_on_interrupt
nomad_disable_update_check
nomad_retry_max
nomad_retry_join
nomad_retry_interval
nomad_rejoin_after_leave
nomad_enabled_schedulers
nomad_num_schedulers: {{ ansible_processor_vcpus }}
nomad_node_gc_threshold
nomad_job_gc_threshold
nomad_eval_gc_threshold
nomad_deployment_gc_threshold
nomad_encrypt_enable: Enable gossip encryption even if nomad_encrypt is not set
nomad_encrypt: Gossip encryption key; if empty while nomad_encrypt_enable is true, the key will be generated & retrieved from the bootstrapped server.
nomad_raft_protocol
nomad_authoritative_region
nomad_node_class
nomad_no_host_uuid
nomad_max_kill_timeout
nomad_network_interface
nomad_network_speed
nomad_cpu_total_compute
nomad_gc_interval
nomad_gc_disk_usage_threshold
nomad_gc_inodes_usage_threshold
nomad_gc_parallel_destroys
nomad_reserved: { cpu: {{ nomad_reserved_cpu }}, memory: {{ nomad_reserved_memory }}, disk: {{ nomad_reserved_disk }}, ports: {{ nomad_reserved_ports }} }
nomad_reserved_cpu
nomad_reserved_memory
nomad_reserved_disk
nomad_reserved_ports
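A minimal sketch of overriding the reserved settings above; the values are illustrative assumptions, not this role's defaults:
nomad_reserved_cpu: "500"      # illustrative value
nomad_reserved_memory: "1024"  # illustrative value
nomad_reserved_disk: "1024"    # illustrative value
nomad_reserved_ports: "22"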
nomad_host_volumes
nomad_host_volumes:
- name: data
path: /var/data
owner: root
group: bin
mode: 0755
read_only: false
- name: config
path: /etc/conf
owner: root
group: bin
mode: 0644
read_only: false
nomad_host_networks
nomad_host_networks:
- name: public
cidr: 100.101.102.103/24
reserved_ports: 22,80
- name: private
interface: eth0
reserved_ports: 443
nomad_options
nomad_chroot_env
nomad_meta
nomad_bind_address: {{ hostvars[inventory_hostname]['ansible_'+ nomad_iface ]['ipv4']['address'] }}
nomad_advertise_address: {{ hostvars[inventory_hostname]['ansible_'+ nomad_iface ]['ipv4']['address'] }}
nomad_ports: { http: {{ nomad_ports_http }}, rpc: {{ nomad_ports_rpc }}, serf: {{ nomad_ports_serf }} }
nomad_ports_http
nomad_ports_rpc
nomad_ports_serf
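A sketch of setting the port variables explicitly; the values shown are Nomad's usual defaults and are assumed here rather than taken from this role:
nomad_ports_http: 4646  # Nomad's standard HTTP port (assumed)
nomad_ports_rpc: 4647   # Nomad's standard RPC port (assumed)
nomad_ports_serf: 4648  # Nomad's standard Serf port (assumed)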
nomad_podman_enable
nomad_cni_enable
nomad_docker_enable
nomad_plugins
Example:
nomad_plugins:
nomad-driver-podman:
config:
volumes:
enabled: true
selinuxlabel: z
recover_stopped: true
nomad_group_name
nomad_servers: It's typically not necessary to manually alter this list; it defaults to the hosts in nomad_group_name with nomad_node_role set to server or both.
nomad_gather_server_facts: This feature makes it possible to gather the nomad_bind_address and nomad_advertise_address from servers that are currently not targeted by the playbook. To make this possible the delegate_facts option is used. This option is broken in many Ansible versions, so this feature might not always work.
nomad_use_consul
nomad_consul_address: The address of the Consul API. Do NOT append https.
nomad_consul_ssl: If true, then https is used.
nomad_consul_ca_file: Public key of the Consul CA; use together with nomad_consul_cert_file and nomad_consul_key_file.
nomad_consul_cert_file
nomad_consul_key_file: The private key matching nomad_consul_cert_file.
nomad_consul_servers_service_name
nomad_consul_clients_service_name
nomad_consul_token
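A minimal sketch of enabling the Consul integration, assuming a local Consul agent on its default port (address and values are illustrative):
nomad_use_consul: true
nomad_consul_address: "localhost:8500"  # no scheme here; do NOT append https
nomad_consul_ssl: false                 # set to true to use https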
nomad_bootstrap_expect
nomad_acl_enabled
nomad_acl_token_ttl
nomad_acl_policy_ttl
nomad_acl_replication_token
nomad_vault_enabled
nomad_vault_address: {{ vault_address | default('0.0.0.0') }}
nomad_vault_allow_unauthenticated
nomad_vault_create_from_role
nomad_vault_ca_file
nomad_vault_ca_path
nomad_vault_cert_file
nomad_vault_key_file
nomad_vault_tls_server_name
nomad_vault_tls_skip_verify
nomad_vault_token
nomad_vault_namespace
nomad_docker_enable
nomad_docker_dmsetup
nomad_tls_enable
nomad_tls_copy_keys: false
nomad_tls_files_remote_src
nomad_tls_dir: /etc/nomad/ssl
nomad_ca_file
nomad_cert_file
nomad_key_file
nomad_rpc_upgrade_mode
nomad_verify_server_hostname
nomad_verify_https_client
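A minimal TLS sketch; the certificate paths are hypothetical and assume the files already exist under nomad_tls_dir:
nomad_tls_enable: true
nomad_ca_file: /etc/nomad/ssl/ca.pem          # hypothetical path
nomad_cert_file: /etc/nomad/ssl/nomad.pem     # hypothetical path
nomad_key_file: /etc/nomad/ssl/nomad-key.pem  # hypothetical path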
nomad_telemetry
nomad_telemetry_disable_hostname
nomad_telemetry_collection_interval
nomad_telemetry_use_node_name
nomad_telemetry_publish_allocation_metrics
nomad_telemetry_publish_node_metrics
nomad_telemetry_backwards_compatible_metrics
nomad_telemetry_disable_tagged_metrics
nomad_telemetry_filter_default
nomad_telemetry_prefix_filter
nomad_telemetry_disable_dispatched_job_summary_metrics
nomad_telemetry_statsite_address
nomad_telemetry_statsd_address
nomad_telemetry_datadog_address
nomad_telemetry_datadog_tags
nomad_telemetry_prometheus_metrics
nomad_telemetry_circonus_api_token
nomad_telemetry_circonus_api_app
nomad_telemetry_circonus_api_url
nomad_telemetry_circonus_submission_interval
nomad_telemetry_circonus_submission_url
nomad_telemetry_circonus_check_id
nomad_telemetry_circonus_check_force_metric_activation
nomad_telemetry_circonus_check_instance_id
nomad_telemetry_circonus_check_search_tag
nomad_telemetry_circonus_check_display_name
nomad_telemetry_circonus_check_tags
nomad_telemetry_circonus_broker_id
nomad_telemetry_circonus_broker_select_tag
nomad_autopilot
nomad_autopilot_cleanup_dead_servers
nomad_autopilot_last_contact_threshold
nomad_autopilot_max_trailing_logs
nomad_autopilot_server_stabilization_time
As Nomad loads the configuration from files and directories in lexical order, typically merging on top of previously parsed configuration files, you may set custom configurations via nomad_config_custom, which will be expanded into a file named custom.json within your nomad_config_dir and, by default, loaded after all other configuration.
An example usage for enabling vault
:
vars:
nomad_config_custom:
vault:
enabled : true
ca_path : "/etc/certs/ca"
cert_file : "/var/certs/vault.crt"
key_file : "/var/certs/vault.key"
address : "https://vault.service.consul:8200"
create_from_role : "nomad-cluster"
Ansible requires GNU tar, and this role performs some local use of the unarchive module, so ensure that your system has gtar/unzip installed. The Jinja2 templates use the ipaddr filter, which requires the netaddr Python library.
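On the control node this requirement can typically be satisfied with pip (assuming pip targets the Python interpreter Ansible uses):
pip install netaddr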
Basic nomad installation is possible using the included site.yml
playbook:
ansible-playbook -i <hosts> site.yml
You can also simply pass variables in using the --extra-vars
option to the ansible-playbook
command:
ansible-playbook -i hosts site.yml --extra-vars "nomad_datacenter=maui"
See examples/README_VAGRANT.md
for details on quick Vagrant deployments under VirtualBox for testing, etc.
Special thanks to the folks listed in CONTRIBUTORS.md for their contributions to this project.
Contributions are welcome, provided that you can agree to the terms outlined in CONTRIBUTING.md
Author: ansible-community
Source Code: https://github.com/ansible-community/ansible-nomad
License: BSD-2-Clause license
A collaborative curated list of awesome Ansible resources, tools, Roles, tutorials and other related stuff.
Ansible is an open source toolkit written in Python. It is used for configuration management, application deployment, continuous delivery, IT infrastructure automation and automation in general.
Official resources by and for Ansible.
Places to chat with the Ansible community.
Tutorials and courses to learn Ansible.
Books about Ansible.
Video tutorials and Ansible training.
Tools for and using Ansible.
Best practices and other opinions on Ansible.
Awesome production ready Playbooks, Roles and Collections to get you up and running.
Author: ansible-community
Source Code: https://github.com/ansible-community/awesome-ansible
License: CC0-1.0 license
Note: Please consider using the official NGINX Ansible role from NGINX, Inc.
Installs Nginx on RedHat/CentOS, Debian/Ubuntu, Archlinux, FreeBSD or OpenBSD servers.
This role installs and configures the latest version of Nginx from the Nginx yum repository (on RedHat-based systems), apt (on Debian-based systems), pacman (Archlinux), pkgng (on FreeBSD systems) or pkg_add (on OpenBSD systems). You will likely need to do extra setup work after this role has installed Nginx, like adding your own [virtualhost].conf file inside /etc/nginx/conf.d/
, describing the location and options to use for your particular website.
None.
Available variables are listed below, along with default values (see defaults/main.yml
):
nginx_listen_ipv6: true
Whether or not to listen on IPv6 (applied to all vhosts managed by this role).
nginx_vhosts: []
A list of vhost definitions (server blocks) for Nginx virtual hosts. Each entry will create a separate config file named by server_name
. If left empty, you will need to supply your own virtual host configuration. See the commented example in defaults/main.yml
for available server options. If you have a large number of customizations required for your server definition(s), you're likely better off managing the vhost configuration file yourself, leaving this variable set to []
.
nginx_vhosts:
- listen: "443 ssl http2"
server_name: "example.com"
server_name_redirect: "www.example.com"
root: "/var/www/example.com"
index: "index.php index.html index.htm"
error_page: ""
access_log: ""
error_log: ""
state: "present"
template: "{{ nginx_vhost_template }}"
filename: "example.com.conf"
extra_parameters: |
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
An example of a fully-populated nginx_vhosts entry, using a |
to declare a block of syntax for the extra_parameters
.
Please take note of the indentation in the above block. The first line should be a normal 2-space indent. All other lines should be indented normally relative to that line. In the generated file, the entire block will be 4-space indented. This style will ensure the config file is indented correctly.
- listen: "80"
server_name: "example.com www.example.com"
return: "301 https://example.com$request_uri"
filename: "example.com.80.conf"
An example of a secondary vhost which will redirect to the one shown above.
Note: The filename defaults to the first domain in server_name. If you have two vhosts with the same domain (e.g. a redirect), you need to manually set the filename so the second one doesn't override the first.
nginx_remove_default_vhost: false
Whether to remove the 'default' virtualhost configuration supplied by Nginx. Useful if you want the base /
URL to be directed at one of your own virtual hosts configured in a separate .conf file.
nginx_upstreams: []
If you are configuring Nginx as a load balancer, you can define one or more upstream sets using this variable. In addition to defining at least one upstream, you would need to configure one of your server blocks to proxy requests through the defined upstream (e.g. proxy_pass http://myapp1;
). See the commented example in defaults/main.yml
for more information.
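As a rough sketch (server names are hypothetical; check the commented example in defaults/main.yml for the authoritative keys), an upstream definition might look like:
nginx_upstreams:
  - name: myapp1
    strategy: "ip_hash"   # optional load-balancing strategy
    keepalive: 16         # optional
    servers:
      - "srv1.example.com"
      - "srv2.example.com:8080"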
nginx_user: "nginx"
The user under which Nginx will run. Defaults to nginx
for RedHat, www-data
for Debian and www
on FreeBSD and OpenBSD.
nginx_worker_processes: "{{ ansible_processor_vcpus|default(ansible_processor_count) }}"
nginx_worker_connections: "1024"
nginx_multi_accept: "off"
nginx_worker_processes
should be set to the number of cores present on your machine (if the default is incorrect, find this number with grep processor /proc/cpuinfo | wc -l
). nginx_worker_connections
is the number of connections per process. Set this higher to handle more simultaneous connections (and remember that a connection will be used for as long as the keepalive timeout duration for every client!). You can set nginx_multi_accept
to on
if you want Nginx to accept all connections immediately.
nginx_error_log: "/var/log/nginx/error.log warn"
nginx_access_log: "/var/log/nginx/access.log main buffer=16k flush=2m"
Configuration of the default error and access logs. Set to off
to disable a log entirely.
nginx_sendfile: "on"
nginx_tcp_nopush: "on"
nginx_tcp_nodelay: "on"
TCP connection options. See this blog post for more information on these directives.
nginx_keepalive_timeout: "65"
nginx_keepalive_requests: "100"
Nginx keepalive settings. Timeout should be set higher (10s+) if you have more polling-style traffic (AJAX-powered sites especially), or lower (<10s) if you have a site where most users visit a few pages and don't send any further requests.
nginx_server_tokens: "on"
Nginx server_tokens settings. Controls whether nginx responds with its version in HTTP headers. Set to "off"
to disable.
nginx_client_max_body_size: "64m"
This value determines the largest file upload possible, as uploads are passed through Nginx before hitting a backend like php-fpm
. If you get an error like client intended to send too large body
, it means this value is set too low.
nginx_server_names_hash_bucket_size: "64"
If you have many server names, or have very long server names, you might get an Nginx error on startup requiring this value to be increased.
nginx_proxy_cache_path: ""
Set as the proxy_cache_path
directive in the nginx.conf
file. By default, this will not be configured (if left as an empty string), but if you wish to use Nginx as a reverse proxy, you can set this to a valid value (e.g. "/var/cache/nginx keys_zone=cache:32m"
) to use Nginx's cache (further proxy configuration can be done in individual server configurations).
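As a sketch (hostnames and the cache zone name are illustrative), the cache path can be combined with a vhost that proxies and caches responses:
nginx_proxy_cache_path: "/var/cache/nginx keys_zone=cache:32m"
nginx_vhosts:
  - listen: "80"
    server_name: "proxy.example.com"
    filename: "proxy.example.com.conf"
    extra_parameters: |
      location / {
        proxy_pass http://backend.internal:8080;
        proxy_cache cache;
        proxy_cache_valid 200 10m;
      }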
nginx_extra_http_options: ""
Extra lines to be inserted in the top-level http
block in nginx.conf
. The value should be defined literally (as you would insert it directly in the nginx.conf
, adhering to the Nginx configuration syntax - such as ;
for line termination, etc.), for example:
nginx_extra_http_options: |
proxy_buffering off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
See the template in templates/nginx.conf.j2
for more details on the placement.
nginx_extra_conf_options: ""
Extra lines to be inserted in the top of nginx.conf
. The value should be defined literally (as you would insert it directly in the nginx.conf
, adhering to the Nginx configuration syntax - such as ;
for line termination, etc.), for example:
nginx_extra_conf_options: |
worker_rlimit_nofile 8192;
See the template in templates/nginx.conf.j2
for more details on the placement.
nginx_log_format: |-
'$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"'
Configures Nginx's log_format options.
nginx_default_release: ""
(For Debian/Ubuntu only) Allows you to set a different repository for the installation of Nginx. As an example, if you are running Debian's wheezy release, and want to get a newer version of Nginx, you can install the wheezy-backports
repository and set that value here, and Ansible will use that as the -t
option while installing Nginx.
nginx_ppa_use: false
nginx_ppa_version: stable
(For Ubuntu only) Allows you to use the official Nginx PPA instead of the system's package. You can set the version to stable
or development
.
nginx_yum_repo_enabled: true
(For RedHat/CentOS only) Set this to false
to disable the installation of the nginx
yum repository. This could be necessary if you want the default OS stable packages, or if you use Satellite.
nginx_service_state: started
nginx_service_enabled: yes
By default, this role will ensure Nginx is running and enabled at boot after Nginx is configured. You can use these variables to override this behavior if installing in a container or further control over the service state is required.
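For example, when building a container image where no init system is running (illustrative values):
nginx_service_state: stopped
nginx_service_enabled: false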
If you can't customize via variables because an option isn't exposed, you can override the template used to generate the virtualhost configuration files or the nginx.conf
file.
nginx_conf_template: "nginx.conf.j2"
nginx_vhost_template: "vhost.j2"
If necessary you can also set the template on a per vhost basis.
nginx_vhosts:
- listen: "80 default_server"
server_name: "site1.example.com"
root: "/var/www/site1.example.com"
index: "index.php index.html index.htm"
template: "{{ playbook_dir }}/templates/site1.example.com.vhost.j2"
- server_name: "site2.example.com"
root: "/var/www/site2.example.com"
index: "index.php index.html index.htm"
template: "{{ playbook_dir }}/templates/site2.example.com.vhost.j2"
You can either copy and modify the provided template, or extend it with Jinja2 template inheritance and override the specific template block you need to change.
Set the nginx_conf_template
to point to a template file in your playbook directory.
nginx_conf_template: "{{ playbook_dir }}/templates/nginx.conf.j2"
Create the child template in the path you configured above and extend geerlingguy.nginx
template file relative to your playbook.yml
.
{% extends 'roles/geerlingguy.nginx/templates/nginx.conf.j2' %}
{% block http_gzip %}
gzip on;
gzip_proxied any;
gzip_static on;
gzip_http_version 1.0;
gzip_disable "MSIE [1-6]\.";
gzip_vary on;
gzip_comp_level 6;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/javascript
application/x-javascript
application/json
application/xml
application/xml+rss
application/xhtml+xml
application/x-font-ttf
application/x-font-opentype
image/svg+xml
image/x-icon;
gzip_buffers 16 8k;
gzip_min_length 512;
{% endblock %}
None.
- hosts: server
roles:
- { role: geerlingguy.nginx }
Author: Geerlingguy
Source Code: https://github.com/geerlingguy/ansible-role-nginx
License: MIT license
Installs Jenkins CI on RHEL/CentOS and Debian/Ubuntu servers.
Requires curl
to be installed on the server. Also, newer versions of Jenkins require Java 8+ (see the test playbooks inside the molecule/default
directory for an example of how to use newer versions of Java for your OS).
Available variables are listed below, along with default values (see defaults/main.yml
):
jenkins_package_state: present
The state of the jenkins
package install. By default this role installs Jenkins but will not upgrade Jenkins (when using package-based installs). If you want to always update to the latest version, change this to latest
.
jenkins_hostname: localhost
The system hostname; usually localhost
works fine. This will be used during setup to communicate with the running Jenkins instance via HTTP requests.
jenkins_home: /var/lib/jenkins
The Jenkins home directory which, among other things, is used for storing artifacts, workspaces and plugins. This variable allows you to override the default /var/lib/jenkins location.
jenkins_http_port: 8080
The HTTP port for Jenkins' web interface.
jenkins_admin_username: admin
jenkins_admin_password: admin
Default admin account credentials which will be created the first time Jenkins is installed.
jenkins_admin_password_file: ""
Default admin password file which will be created the first time Jenkins is installed as /var/lib/jenkins/secrets/initialAdminPassword
jenkins_jar_location: /opt/jenkins-cli.jar
The location at which the jenkins-cli.jar
jarfile will be kept. This is used for communicating with Jenkins via the CLI.
jenkins_plugins:
- blueocean
- name: influxdb
version: "1.12.1"
Jenkins plugins to be installed automatically during provisioning. Defaults to an empty list ([]). Items can be plain names, or dictionaries with name and version keys to pin a specific version of a plugin.
jenkins_plugins_install_dependencies: true
Whether Jenkins plugins to be installed should also install any plugin dependencies.
jenkins_plugins_state: present
Use latest
to ensure all plugins are running the most up-to-date version. For any plugin that has a specific version set in jenkins_plugins
list, state present
will be used instead of jenkins_plugins_state
value.
jenkins_plugin_updates_expiration: 86400
Number of seconds after which a new copy of the update-center.json file is downloaded. Set it to 0 if no cache file should be used.
jenkins_updates_url: "https://updates.jenkins.io"
The URL to use for Jenkins plugin updates and update-center information.
jenkins_plugin_timeout: 30
The server connection timeout, in seconds, when installing Jenkins plugins.
jenkins_version: "2.346"
jenkins_pkg_url: "http://www.example.com"
(Optional) The Jenkins version can be pinned to any version available on http://pkg.jenkins-ci.org/debian/
(Debian/Ubuntu) or http://pkg.jenkins-ci.org/redhat/
(RHEL/CentOS). If the Jenkins version you need is not available in the default package URLs, you can override the URL with your own; set jenkins_pkg_url
(Note: the role depends on the same naming convention that http://pkg.jenkins-ci.org/
uses).
jenkins_url_prefix: ""
Used for setting a URL prefix for your Jenkins installation. The option is added as --prefix={{ jenkins_url_prefix }}
to the Jenkins initialization java
invocation, so you can access the installation at a path like http://www.example.com{{ jenkins_url_prefix }}
. Make sure you start the prefix with a /
(e.g. /jenkins
).
jenkins_connection_delay: 5
jenkins_connection_retries: 60
Amount of time and number of times to wait when connecting to Jenkins after initial startup, to verify that Jenkins is running. Total time to wait = delay
* retries
, so by default this role will wait up to 300 seconds before timing out.
jenkins_prefer_lts: false
By default, this role will install the latest version of Jenkins using the official repositories according to the platform. You can install the current LTS version instead by setting this to true.
The default repositories (listed below) can be overridden as well.
# For RedHat/CentOS:
jenkins_repo_url: https://pkg.jenkins.io/redhat{{ '-stable' if (jenkins_prefer_lts | bool) else '' }}/jenkins.repo
jenkins_repo_key_url: https://pkg.jenkins.io/redhat{{ '-stable' if (jenkins_prefer_lts | bool) else '' }}/jenkins.io.key
# For Debian/Ubuntu:
jenkins_repo_url: deb https://pkg.jenkins.io/debian{{ '-stable' if (jenkins_prefer_lts | bool) else '' }} binary/
jenkins_repo_key_url: https://pkg.jenkins.io/debian{{ '-stable' if (jenkins_prefer_lts | bool) else '' }}/jenkins.io.key
It is also possible to prevent the repo file from being added by setting jenkins_repo_url: ''
. This is useful if, for example, you sign your own packages or run internal package management (e.g. Spacewalk).
jenkins_options: ""
Extra options (e.g. setting the HTTP keep alive timeout) to pass to Jenkins on startup via JENKINS_OPTS
in the systemd override.conf file can be configured using the var jenkins_options
. By default, no options are specified.
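For example (the keep-alive flag shown is an illustration of a Jenkins/Winstone startup option, not a value this role ships with):
jenkins_options: "--httpKeepAliveTimeout=30000"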
jenkins_java_options: "-Djenkins.install.runSetupWizard=false"
Extra Java options for the Jenkins launch command configured via JENKINS_JAVA_OPTS
in the systemd override.conf file can be set with the var jenkins_java_options
. For example, if you want to configure the timezone Jenkins uses, add -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/New_York
. By default, the option to disable the Jenkins 2.0 setup wizard is added.
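For example, to keep the setup wizard disabled and also set the timezone mentioned above:
jenkins_java_options: "-Djenkins.install.runSetupWizard=false -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/New_York"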
jenkins_init_changes:
- option: "JENKINS_OPTS"
value: "{{ jenkins_options }}"
- option: "JAVA_OPTS"
value: "{{ jenkins_java_options }}"
- option: "JENKINS_HOME"
value: "{{ jenkins_home }}"
- option: "JENKINS_PREFIX"
value: "{{ jenkins_url_prefix }}"
- option: "JENKINS_PORT"
value: "{{ jenkins_http_port }}"
Changes made to the Jenkins systemd override.conf file; the default set of changes set the configured URL prefix, Jenkins home directory, Jenkins port and adds the configured Jenkins and Java options for Jenkins' startup. You can add other option/value pairs if you need to set other options for the Jenkins systemd override.conf file.
jenkins_proxy_host: ""
jenkins_proxy_port: ""
jenkins_proxy_noproxy:
- "127.0.0.1"
- "localhost"
If you are running Jenkins behind a proxy server, configure these options appropriately. Otherwise Jenkins will be configured with a direct Internet connection.
None.
- hosts: jenkins
become: true
vars:
jenkins_hostname: jenkins.example.com
java_packages:
- openjdk-8-jdk
roles:
- role: geerlingguy.java
- role: geerlingguy.jenkins
Note that java_packages
may need different versions depending on your distro (e.g. openjdk-11-jdk
for Debian 10, or java-1.8.0-openjdk
for RHEL 7 or 8).
Author: Geerlingguy
Source Code: https://github.com/geerlingguy/ansible-role-jenkins
License: MIT license
In the era of Big Data, many popular tools are built to scale out by splitting the workload over multiple hosts. Tools like Hadoop, Spark, Zookeeper, Solr, and MongoDB can be deployed in standalone and clustered modes. Writing and testing Ansible roles for these tools is difficult, especially if you are writing and testing the Ansible roles on your personal desktop or laptop with limited resources.
In order to write and test Ansible roles for multiple hosts/clusters, it is important to use tools that are efficient and automated; otherwise you will spend your time manually creating and destroying your development and test environments. Luckily, the Molecule project for Ansible is the perfect tool for this use case because it automates every part of the development and testing lifecycle and uses lightweight Docker containers to quickly create and destroy the test environment. Unfortunately, the documentation for Molecule isn't clear about how to set up multiple development and test scenarios (standalone & cluster) or how to set up multiple hosts and configure the host groups and host variables.
In this example, I will show how I used Molecule for creating and testing an Ansible role for Apache Zookeeper that will deploy Zookeeper in either standalone or clustered mode. The full Ansible role is at https://github.com/kevincoakley/ansible-role-zookeeper.
First, I will show how to create one scenario for standalone mode and one scenario for clustered mode. We will keep the default scenario for standalone mode and create a second scenario for clustered mode. Simply copy the molecule/default directory and all of its files to molecule/cluster.
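For example, from the root of the role:
cp -r molecule/default molecule/cluster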
.
└── molecule
├── cluster
│ ├── converge.yml
│ ├── molecule.yml
│ └── verify.yml
├── default
│ ├── converge.yml
│ ├── molecule.yml
│ └── verify.yml
└── yaml-lint.yml
Once that is done, edit molecule/cluster/molecule.yml and update the name variable under the scenario YAML node to cluster, like so:
scenario:
name: cluster
That is it! If you want to test the default scenario you can run molecule test -s default
, run the cluster scenario with molecule test -s cluster
or run both with molecule test --all
.
#cluster #ansible #ansible-roles #multi-host #molecule #testing