Felix Kling


1669435423

How to Use Ansible with AWS (Amazon Web Services)

Learn how to use Ansible with AWS (Amazon Web Services), using the power of Ansible to create infrastructure in AWS.

In this course I have used AWS CloudShell to demonstrate all my examples.

To get the best out of this course you need to have

   1. A basic introduction to AWS.

   2. An IAM user with access key and secret access key credentials.

   3. The AWS Console along with CloudShell (if you do not wish to use CloudShell, you can use any Linux distro with the AWS CLI installed).

   4. A basic understanding of Ansible.

What is Ansible?

  •    Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.

In this course we will be using the power of Ansible to create infrastructure in AWS.

I will be covering the following topics in this course

   1. A brief introduction to Ansible

   2. Creating an instance using an AWS module

   3. Creating a simple architecture using various aspects of Ansible

   4. The inventory system in AWS: static inventory files as well as dynamic inventory files

   5. Conditionals: the when statement

   6. gather_facts and the inventory file
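As a taste of what creating an instance with an AWS module looks like (topic 2), here is a minimal sketch. The AMI ID, key pair name, and region are illustrative placeholders, and the amazon.aws collection plus boto3 credentials must already be set up:

```yaml
# Hypothetical playbook sketch: launch a single EC2 instance with Ansible.
---
- name: Create an EC2 instance
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch a t2.micro instance
      amazon.aws.ec2_instance:
        name: demo-instance
        region: us-east-1                  # placeholder region
        image_id: ami-0123456789abcdef0    # placeholder AMI
        instance_type: t2.micro
        key_name: my-keypair               # placeholder key pair
        state: running
```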

What you’ll learn

  •        A brief introduction to Ansible
  •        AWS + Ansible
  •        Inventory files in Ansible
  •        Roles in Ansible

Are there any course requirements or prerequisites?

  •        Basic knowledge of AWS

Who this course is for:

  •        DevOps engineers

#ansible #aws #devops #amazonwebservices

Daisy Rees

1668584957

Home Lab with Oracle Cloud and Ansible

Learn how to provision a home lab with Oracle Cloud and Ansible. You will see how to do a full infrastructure provisioning of a pair of web servers on a cloud provider, with SSL certificates and monitoring metrics with Prometheus.

Imagine for a moment that you have been working hard to set up a website, protected with SSL, and then your hardware fails. Unless you have a perfect backup of your machine, you will need to reinstall all the software and configuration files by hand.

What if it's not just one server but many? The time you will need to fix all of them grows quickly, and because it is a manual process it will be more error-prone.

And then the nightmare scenario: You don't have an up-to-date backup, or you have incomplete backups. Or the worst – there are no backups at all. This last case is more common than you think, especially in home labs where you are tinkering and playing around with stuff by yourself.

In this tutorial, I'll show you how you can do a full infrastructure provisioning of a pair of web servers on a Cloud provider, with SSL certificates and monitoring metrics with Prometheus.

What You Need for This Setup

The first thing you need is a cloud provider. Oracle Cloud offers a Free Tier version of their cloud services, which allows you to setup virtual machines for free. This is great for a home lab with lots of rich features that you can use to try new tools and techniques.

You'll also need an automation tool. I used Ansible because it doesn't have many requirements (you only need an SSH daemon and public key authentication to get things going). I also like it because it works equally well regardless of the cloud environment you are trying to provision.

In this tutorial we will use the Open Source version of this tool, as it is more than sufficient for our purposes.

What's included in the Ansible playbook

An Ansible playbook is nothing more than a set of instructions you define to execute tasks that will change the status of a host. These actions are carried out on an inventory of hosts you define.
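For readers who have not written a playbook before, here is a minimal sketch of the idea. The hostnames and the task are illustrative assumptions, not part of the repository:

```yaml
# inventory/cloud.yaml might list hosts like (hypothetical names):
#   all:
#     hosts:
#       server1.example.com:
#       server2.example.com:

# A minimal playbook: one play, one task, run against that inventory.
---
- name: Ping every host in the inventory
  hosts: all
  tasks:
    - name: Verify connectivity
      ansible.builtin.ping:
```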

Here, you are going to learn about the following:

  • How to clean inventory sources by using the proper layout in your playbooks.
  • How to provision two NGINX instances, requesting their free SSL certificates with Certbot.
  • How to set up the local Linux firewalls and add a Prometheus node_exporter agent and one scraper to collect that data.
  • Concepts like variables, roles (with task inclusion), and conditional execution.
  • Important techniques like task tagging, debug messages, and static validation with ansible-lint.

All the code can be found in this GitHub repository.

What You Should Know Before Trying This

Because we will cover several tasks here, you will probably need to be familiar with several related tools; I'll provide links as we go along.

What is not included here

Oracle Cloud Infrastructure (OCI) has a complete REST API to manage many aspects of the cloud environment, and their setup page (specifically the SDK) is very detailed, but I will not be using it here.

You'll Probably Do Things Differently in Production

Installing the OCI-Metrics-datasource instead of Prometheus agents on a virtual machine

You can go to this page to install it on your Grafana instance (bare metal or Cloud). You also need to set up your credentials and permissions as explained here.

This is probably the most efficient way to monitor your resources, as you do not need to run agents on your virtual machines. But I will instead install a Prometheus node_exporter agent and a scraper that will be visible from a Grafana Cloud instance.

An exposed Prometheus endpoint on the Internet is not a good idea

To be clear: I'm exposing my Prometheus scraper to the Internet so Grafana Cloud can reach it. On an intranet with a private cloud and a local Grafana, this is not an issue, but here a Prometheus agent pushing data to Grafana would be a better option.

Still, Grafana provides a list of public IP addresses that you can use to setup your allow list.

So the following will work:

[Screenshot: Oracle Cloud ingress rules]

But it is not the best. Instead, you want to restrict which specific IP addresses can pull data from your exposed services. The node_exporter on port 9100 can be completely hidden from Grafana; we only need to expose the Prometheus scraper, which listens on port 9090.

For this home lab, it is not a big deal having such services fully exposed. But if you have a server with sensitive data, you must restrict who can reach the service!

An alternative to the Prometheus endpoint is to push the data to Grafana by using a Grafana agent, but I will not cover that option here.

Playbook Analysis

Ansible lets you have a single file with the playbook instructions, but eventually you will find that such a structure is difficult to maintain.

For my playbook I decided to keep the suggested structure:

tree -A 
.
├── inventory
│   └── cloud.yaml
├── oracle.yaml
├── roles
│   └── oracle
│       ├── files
│       │   ├── logrotate_prometheus-node-exporter
│       │   ├── prometheus-node-exporter
│       │   └── requirements_certboot.txt
│       ├── handlers
│       │   └── main.yaml
│       ├── meta
│       ├── tasks
│       │   ├── controller.yaml
│       │   ├── main.yaml
│       │   ├── metrics.yaml
│       │   └── nginx.yaml
│       ├── templates
│       │   ├── prometheus-node-exporter.service
│       │   ├── prometheus.service
│       │   └── prometheus.yaml
│       └── vars
│           └── main.yaml
└── site.yaml

Below is a brief description of how the content is organized:

  1. You can have more than one site. You control that inside the site.yaml file.
  2. The host list is inside the inventory directory. You can have more than one inventory file or scripts to generate the hostlist, or a combination of both.
  3. The roles/oracle directory groups the tasks. We only have one role, called 'oracle', because that's the cloud provider I'm focusing on here.
  4. Our playbook uses metadata in the form of variables, each one defined in the 'vars' directory. That way we can customize the behaviour of the playbook in multiple places:
---
# Common variables for my Oracle Cloud environments
controller_host: XXXX.com
ssl_maintainer_email: YYYYYY@ZZZZ.com
architecture: arm64
prometheus_version: 2.38.0
prometheus_port: 9090
prometheus_node_exporter_nodes: "['X-server1:{{ node_exporter_port }}', 'Y-server2:{{ node_exporter_port }}' ]"
node_exporter_version: 1.4.0
node_exporter_port: 9100
internal_network: QQ.0.0.0/24

The roles/oracle files directory contains files that are copied as-is to the remote host. The templates directory is similar, but the files in there can be customized for each host using the Jinja templating language.

# A template for the prometheus scraper configuration file
---
global:
    scrape_interval: 30s
    evaluation_interval: 30s
    scrape_timeout: 10s
    external_labels:
        monitor: 'oracle-cloud-metrics'

scrape_configs:
  - job_name: 'node-exporter'
    static_configs:
      - targets: {{ prometheus_node_exporter_nodes }}
    tls_config:
      insecure_skip_verify: true

The 'tasks' directory is where we store our tasks, that is, the actions that will modify the server state. Note that Ansible will not execute a task if it's not necessary: the idea is that you can re-run a playbook as many times as needed and the final state will remain the same.

# Fragment of the nginx tasks file. See how we notify a handler to restart nginx after the SSL certificate is renewed.
---
- name: Copy requirements file
  ansible.builtin.copy:
    src: requirements_certboot.txt
    dest: /opt/requirements_certboot.txt
  tags: certbot_requirements

- name: Setup Certbot
  pip:
    requirements: /opt/requirements_certboot.txt
    virtualenv: /opt/certbot/
    virtualenv_site_packages: true
    virtualenv_command: /usr/bin/python3 -m venv
  tags: certbot_env

- name: Get SSL certificate
  command:
    argv:
      - /opt/certbot/bin/certbot
      - --nginx
      - --agree-tos
      - -m
      - "{{ ssl_maintainer_email }}"
      - -d
      - "{{ inventory_hostname }}"
      - --non-interactive
  notify:
    - Restart Nginx
  tags: certbot_install

There is one special directory called 'handlers'. There we define actions that must happen when a task changes the state of the host.

We now have a picture of how all the pieces work together, so let's talk about some specific details.

Firewall provisioning

With Ansible, you can replace a sequence of commands like this:

sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload

With a firewalld module:

---
- name: Enable HTTP at the Linux firewall
  firewalld:
    zone: public
    service: http
    permanent: true
    state: enabled
    immediate: true
  notify:
    - Reload firewall
  tags: firewalld_http

- name: Enable HTTPS at the Linux firewall
  firewalld:
    zone: public
    service: https
    permanent: true
    state: enabled
    immediate: true
  notify:
    - Reload firewall
  tags: firewalld_https

Common tasks have nice replacements

So instead of running sudo with a privileged command:

sudo dnf install -y nginx
sudo systemctl enable nginx.service --now

You can have something like this:

# oracle.yaml file, which tells which roles to call, included from site.yaml
---
- hosts: oracle
  serial: 2
  remote_user: opc
  become: true
  become_user: root
  roles:
  - oracle
# NGINX task (roles/oracle/tasks/nginx.yaml)
- name: Ensure nginx is at the latest version
  dnf:
    name: nginx >= 1.14.1
    state: present
    update_cache: true
  tags: install_nginx
# And a handler that will restart NGINX after it gets modified (handlers/main.yaml)
---
- name: Restart Nginx
  service:
    name: nginx
    state: restarted
- name: Reload firewall
  command: firewall-cmd --reload

How to Run the Playbooks

Normally you don't wait until the whole playbook is written; you run the pieces you need in the proper order. At some point you will have the whole playbook finished and ready to go.

Make sure the playbook behaves properly with --check before making any changes

The very first step is to check your playbook file for errors. For that you can use yamllint:

yamllint roles/oracle/tasks/main.yaml

But doing this for every YAML file in your playbook can be tedious and error-prone. As an alternative, you can run the playbook in 'dry-run' mode to see what would happen without actually making any changes:
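With the layout above, a dry run looks like this (a sketch; --check reports what would change without changing anything):

```
ansible-playbook --inventory inventory --check site.yaml
```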

[asciicast demo]

Another way to gradually test a complex playbook is to execute specific tasks using a tag or a group of tags. That way you can do a controlled execution of your playbook.

Keep in mind that this will not execute any dependencies that you may have defined in your playbook, though:

[asciicast demo]
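For example, to run only the NGINX installation task by its tag (install_nginx, one of the tags defined in the tasks above):

```
ansible-playbook --inventory inventory --tags install_nginx site.yaml
```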

Constrain where the playbook runs with --limit and --tags

Say that you are only interested in running your playbook on a certain host. You can do that by using the --limit flag:

ansible-playbook --inventory inventory --limit fido.yourcompany.com --tags certbot_renew site.yaml

[asciicast demo]

Here we ran only the tasks tagged certbot_renew on the host fido.yourcompany.com.

How to deal with a real issue

Let's make this interesting: say that I was eager to update one of my requirements for certboot, and I changed the pinned version of pip to '22.3.1':

pip==22.3.1
wheel==0.38.4
certbot==1.32.0
certbot-nginx==1.32.0

When I run the playbook, we get a failure:

[asciicast demo]

This is an issue with the versions specified in the requirements_certboot.txt file. When you install a Python library into a virtual environment, you can pin versions like this:

pip==22.3.1
wheel==0.38.1
certbot==1.23.0
certbot-nginx==1.23.0

To fix the issue, we will revert the versions used in the file and then re-run the requirements file and the Certbot installation task:

- name: Setup Certbot
  pip:
    requirements: /opt/requirements_certboot.txt
    virtualenv: /opt/certbot/
    virtualenv_site_packages: true
    virtualenv_command: /usr/bin/python3 -m venv
    state: forcereinstall
  tags: certbot_env

ansible-playbook --inventory inventory --tags certbot_env site.yaml

See it in action:

[asciicast demo]

How to run the whole playbook

It is time to run the whole playbook:

ansible-playbook --inventory inventory site.yaml

[asciicast demo]

Wrapping up

This tutorial only scratches the surface of what you can do with Ansible, so explore the resources linked along the way to learn more.

Original article source at https://www.freecodecamp.org

#cloud #oracle #ansible #prometheus

Fabiola Auma

1668014640

Ansible Role Homebrew: Installs Homebrew on MacOS

Ansible Role: Homebrew (MOVED)

MOVED: This role has been moved into the geerlingguy.mac collection. Please see this issue for a migration guide and more information.

Installs Homebrew on MacOS, and configures packages, taps, and cask apps according to supplied variables.

Requirements

None.

Role Variables

Available variables are listed below, along with default values (see defaults/main.yml):

homebrew_repo: https://github.com/Homebrew/brew

The GitHub repository for Homebrew core.

homebrew_prefix: "{{ (ansible_machine == 'arm64') | ternary('/opt/homebrew', '/usr/local') }}"
homebrew_install_path: "{{ homebrew_prefix }}/Homebrew"

The path where Homebrew will be installed (homebrew_prefix is the parent directory). It is recommended you stick to the default, otherwise Homebrew might have some weird issues. If you change this variable, you should also manually create a symlink back to /usr/local so things work as Homebrew expects.

homebrew_brew_bin_path: /usr/local/bin

The path where brew will be installed.

homebrew_installed_packages:
  - ssh-copy-id
  - pv
  - { name: vim, install_options: "with-luajit,override-system-vi" }

Packages you would like to make sure are installed via brew install. You can optionally add flags to the install by setting an install_options property, and if used, you need to explicitly set the name for the package as well. By default, no packages are installed (homebrew_installed_packages: []).

homebrew_uninstalled_packages: []

Packages you would like to make sure are uninstalled.

homebrew_upgrade_all_packages: false

Whether to upgrade homebrew and all packages installed by homebrew. If you prefer to manually update packages via brew commands, leave this set to false.

homebrew_taps:
  - homebrew/core
  - { name: my_company/internal_tap, url: 'https://example.com/path/to/tap.git' }

Taps you would like to make sure Homebrew has tapped.

homebrew_cask_apps:
  - firefox
  - { name: virtualbox, install_options: "debug,appdir=/Applications" }

Apps you would like to have installed via cask. Search for popular apps to see if they're available for install via Cask. Cask will not be used if it is not included in the list of taps in the homebrew_taps variable. You can optionally add flags to the install by setting an install_options property, and if used, you need to explicitly set the name for the package as well. By default, no Cask apps will be installed (homebrew_cask_apps: []).

homebrew_cask_accept_external_apps: true

The default value is false, which interrupts further processing of the whole role (and the Ansible play) if any app listed in homebrew_cask_apps is already installed without cask. This is good for a tightly managed system.

Specify true instead if you prefer to silently continue when an app is already installed without cask. This is generally good for a system that is managed with cask/Ansible as well as other install methods (like manual installs) at the same time.

homebrew_cask_uninstalled_apps:
  - google-chrome

Apps you would like to make sure are uninstalled.

homebrew_cask_appdir: /Applications

Directory where applications installed via cask should be installed.

homebrew_use_brewfile: true

Whether to install via a Brewfile. If so, you will need to install the homebrew/bundle tap, which could be done within homebrew_taps.

homebrew_brewfile_dir: '~'

The directory where your Brewfile is located.

homebrew_clear_cache: false

Set to true to remove the Homebrew cache after any new software is installed.

homebrew_user: "{{ ansible_user_id }}"

The user that you would like to install Homebrew as.

homebrew_group: "{{ ansible_user_gid }}"

The group that you would like to use while installing Homebrew.

homebrew_folders_additional: []

Any additional folders inside homebrew_prefix for which to ensure homebrew user/group ownership.
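Pulling several of the variables above together, a playbook using this role might look like the following sketch (the package, tap, and cask names are illustrative, not requirements):

```yaml
# Hypothetical playbook combining the role variables described above.
---
- hosts: localhost
  vars:
    homebrew_installed_packages:
      - git
      - wget
    homebrew_taps:
      - homebrew/core
      - homebrew/bundle
    homebrew_cask_apps:
      - firefox
    homebrew_upgrade_all_packages: false
  roles:
    - geerlingguy.homebrew
```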

Dependencies

None.

Example Playbook

- hosts: localhost
  vars:
    homebrew_installed_packages:
      - mysql
  roles:
    - geerlingguy.homebrew

See the tests/local-testing directory for an example of running this role over Ansible's local connection. See also: Mac Development Ansible Playbook.


Download Details:

Author: geerlingguy
Source Code: https://github.com/geerlingguy/ansible-role-homebrew

License: MIT license

#ansible 

Fabiola Auma

1668006960

Jupyterhub Deploy Teaching: Deploy JupyterHub for Teaching

Deploy JupyterHub for teaching

The goal of this repository is to produce a reference deployment of JupyterHub for teaching with nbgrader.

The repository started from this deployment of JupyterHub for "Introduction to Data Science" at Cal Poly. It is designed to be a simple and reusable JupyterHub deployment, while following best practices.

The main use case targeted is small to medium groups of trusted users working on a single server.

Design goal of this reference deployment

Create a JupyterHub teaching reference deployment that is simple yet functional:

  • Use a single server.
  • Use Nginx as a frontend proxy, serving static assets, and a termination point for SSL/TLS.
  • Configure using Ansible scripts.
  • Use (optionally) https://letsencrypt.org/ for generating SSL certificates.
  • Does not use Docker or containers

Prerequisites

To deploy this JupyterHub reference deployment, you should have:

  • An empty Ubuntu server running the latest stable release
  • Local drives to be mounted
  • A formatted and mounted directory to store user home directories
  • A valid DNS name
  • An SSL certificate
  • Ansible 2.1+ installed for JupyterHub configuration (pip install "ansible>=2.1")
    • Verified Ansible 2.2.1.0 works with Ubuntu 16.04 and Python3

For administration of the server, you should also:

  • Specify the admin users of JupyterHub.
  • Allow SSH key based access to server and add the public SSH keys of GitHub users who need to be able to SSH to the server as root for administration.

For managing users and services on the server, you will:

  • Create "Trusted" users on the system, meaning that you would give them a user-level shell account on the server
  • Authenticate and manage users with either:
    • Regular Unix users and PAM.
    • GitHub OAuth
  • Manage the running of jupyterhub and nbgrader using supervisor.
  • Monitor the state of the server (optional feature) using NewRelic or your cloud provider.

Installation

Follow the detailed instructions in the Installation Guide.

The basic steps are:

  • Create the hosts group with Fully Qualified Domain Names (FQDNs) of the hosts
  • Secure your deployment with SSL
  • Deploy with Ansible: ansible-playbook -i hosts deploy.yml
  • Verify your deployment and reboot the Hub: supervisorctl reload
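For the first step, a minimal hosts inventory could look like this sketch (the group name and FQDN below are illustrative placeholders, not the repository's exact names):

```ini
; hosts — Ansible INI inventory (hypothetical names)
[jupyterhub_hosts]
hub.example.com
```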

Configuring nbgrader

The nbgrader package is installed when JupyterHub is installed using the steps in the Installation Guide.

View the documentation for detailed configuration steps. The basic steps to configure formgrade or nbgrader's notebook extensions are:

  • activate the extension: nbgrader extension activate
  • log into JupyterHub
  • run the Ansible script: ansible-playbook -i hosts deploy_formgrade.yml
  • SSH into JupyterHub
  • reboot the Hub and nbgrader: supervisorctl reload

Using nbgrader

With this reference deployment, instructors can start to use nbgrader. The Using nbgrader section of the reference deployment documentation gives brief instructions about creating course assignments, releasing them to students, and grading student submissions.

For full details about nbgrader and its features, see the nbgrader documentation.

Notes

Ansible configuration and deployment

Change the Ansible configuration by editing ./ansible.cfg.

To limit the deployment to certain hosts, add -l hostname to the Ansible deploy commands:

ansible-playbook -i hosts -l hostname deploy.yml

Authentication

If you are not using GitHub OAuth, you will need to manually create users using adduser: adduser --gecos "" username.

Logs

The logs for jupyterhub are in /var/log/jupyterhub.

The logs for nbgrader are in /var/log/nbgrader.

Starting, stopping, and restarting the Hub

To manage the jupyterhub and nbgrader services, SSH to the server and run: supervisorctl [start|stop|restart] jupyterhub


Download Details:

Author: jupyterhub
Source Code: https://github.com/jupyterhub/jupyterhub-deploy-teaching

License: BSD-3-Clause license

#JupyterHub #ansible 

Fabiola Auma

1667999460

Ansible Icinga2: Ansible Role for Icinga 2

Icinga 2 Role for Ansible

Ansible role to install and configure Icinga 2.

Attention: This role is under heavy development.

The scope of this role is to handle the installation and configuration of Icinga 2. In the future it will be possible to configure Icinga as master, satellite or agent. This role handles only Icinga 2 and not any third-party software (like databases, nrpe, UI etc.). The installation and configuration of Icinga Web 2 is currently not part of this role.

The role is supported on the following platforms:

  • Ansible >= v2.8
  • Icinga 2 >= v2.8
  • Ubuntu: 18.04, 20.04
  • Debian: 9, 10
  • CentOS/RHEL: 7, 8

Other operating systems or versions may work but have not been tested.
Platform support may be extended after a v1.0 release.

Installation

Using ansible-galaxy:

ansible-galaxy install icinga.icinga2

Using requirements.yml:

---

- src: icinga.icinga2

Requirements

Prerequisites that you may need, but are not covered by this role:

  • Database (MySQL/MariaDB/Postgres)
  • Web UI (icingaweb2)
  • NRPE

Role Configuration

By default this role adds the official Icinga Repository to the system and installs the icinga2 package.

- name: Default Example
  hosts: localhost
  roles:
    - icinga2

Disable repository management

You may choose to use your own or the systems default repositories. Repository management can be disabled:

- name: Example without repository
  hosts: all
  roles:
    - icinga2
  vars:
    - i2_manage_repository: false

Variables

Variable: i2_manage_repository

Whether to add the official Icinga Repository to the system or not. Defaults to true.

Variable: i2_manage_package

Whether to install packages or not. Defaults to true.

Variable: i2_manage_epel

Whether to install the EPEL release package. Defaults to true.

Variable: i2_manage_service

Whether to start, restart, and reload Icinga 2 on changes or not. Defaults to true.

Variable: i2_apt_key

GPG key used to verify packages on APT based system. The key will be imported. Defaults to https://packages.icinga.com/icinga.key.

Variable: i2_apt_url

Repository URL for APT based systems. Defaults to deb http://packages.icinga.com/{{ ansible_distribution|lower }} icinga-{{ ansible_distribution_release }} main. This may be customized if you have a local mirror.

Variable: i2_yum_key

GPG key used to verify packages on YUM based systems. The key URL will be added to the repository file. Defaults to https://packages.icinga.com/icinga.key.

Variable: i2_yum_url

Repository URL for YUM based systems. Defaults to http://packages.icinga.com/epel/$releasever/release/. This may be customized if you have a local mirror.

Variable: i2_confd

By default, configuration located in /etc/icinga2/conf.d is included. This list may be modified to include additional directories or set to [] to not include conf.d at all (e.g. on distributed installations). Defaults to [ "conf.d" ].

Variable: i2_include_plugins

The ITL comes with a set of pre-configured check commands. This variable defines what to include. Defaults to ["itl", "plugins", "plugins-contrib", "manubulon", "windows-plugins", "nscp"]
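For example, to trim the default list down to only the ITL and the standard plugin check commands, you could set (a sketch):

```yaml
  vars:
    i2_include_plugins: ["itl", "plugins"]
```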

Variable: i2_const_plugindir

Set PluginDir constant. Defaults to {{ i2_lib_dir }}/nagios/plugins.

Variable: i2_const_manubulonplugindir

Set ManubulonPluginDir constant. Defaults to {{ i2_lib_dir }}/nagios/plugins.

Variable: i2_const_plugincontribdir

Set PluginContribDir constant. Defaults to {{ i2_lib_dir }}/nagios/plugins.

Variable: i2_const_nodename

Set NodeName constant. Defaults to {{ ansible_fqdn }}.

Variable: i2_const_zonename

Set ZoneName constant. Defaults to {{ ansible_fqdn }}.

Variable: i2_const_ticketsalt

Set TicketSalt constant. Empty by default.

Variable: i2_custom_constants

Add custom constants to constants.conf. Must be a dictionary. Defaults to: {}

Some default required values are specified in i2_default_constants and merged with this variable. Use this variable to override these default values, or add your own constants.

Default values of i2_default_constants:

  PluginDir: "{{ i2_lib_dir }}/nagios/plugins"
  ManubulonPluginDir: "{{ i2_lib_dir }}/nagios/plugins"
  PluginContribDir: "{{ i2_lib_dir }}/nagios/plugins"
  NodeName: "{{ ansible_fqdn }}"
  ZoneName: "{{ ansible_fqdn }}"
  TicketSalt: ""

Example usage:

  vars:
    - i2_custom_constants:
        TicketSalt: "My ticket salt"
        Foo: "bar"

Variable: i2_zones

Replaces zones.conf with configured zones.

Example:

  vars:
    i2_zones:
      - name: master
        is_parent: true
        endpoints:
          - name: master1.example.tom
            host: master1.example.tom
            port: 15667
          - name: master2.example.tom
            host: 128.0.0.1
          - name: global-templates
            is_global: true
          - name: director-global
            is_global: true

  • is_parent: sets the parent zone of this zone (optional).
  • is_global: sets the zone to a global zone (optional).
  • host: sets the host IP/FQDN used to connect to this endpoint (optional).
  • port: sets the port (optional). Defaults to 5665. Requires host to be set.

System specific variables

The following variables are system specific and don't need to be overwritten in most cases. Be careful when making changes to any of these variables.

Variable: i2_conf_dir

Base Icinga 2 configuration directory. Defaults to /etc/icinga2.

Variable: i2_user

Icinga 2 running as user. Default depends on OS.

Variable: i2_group

Icinga 2 running as group. Default depends on OS.

Variable: i2_lib_dir

Lib dir. Default depends on OS.

Feature Usage

Variable: i2_custom_features

Features are managed through the dictionary i2_custom_features. By default, features won't be managed until i2_custom_features has values.

Example usage:

vars:
  i2_custom_features:
    ApiListener:                #ObjectType
      api:                      #ObjectName
        accept_command: true    #ObjectAttribute
        accept_config: true     #ObjectAttribute
    GraphiteWriter:
      graphite:
        host: "127.0.0.1"
        port: "2004"

Variable: i2_remove_unmanaged_features

The variable i2_remove_unmanaged_features changes the behaviour of the feature handling. It will remove all unmanaged .conf files from the directory /etc/icinga2/features-enabled and let you manage only your defined features.

Handlers

Handler: start icinga2

This handler starts Icinga 2. It is only used to make sure Icinga 2 is running. You can prevent this handler from being triggered by setting i2_manage_service to false.

Handler: reload icinga2

This handler reloads Icinga 2 when configuration changes. You can prevent this handler from being triggered by setting i2_manage_service to false.

Examples

Example Agent Config:

Example usage (the api feature will NOT be enabled in this example):

- name: icinga Package
  hosts: icingaagents
  roles:
    - icinga2
  vars:
    i2_confd: [] #don't include conf.d
    i2_zones:
      - name: master
        is_parent: true
        endpoints:
          - name: master1.example.tom
            host: master1.example.tom
            port: 15667
          - name: master2.example.tom
            host: 128.0.0.1

Dependencies

None

Example Playbook

---

- name: Playbook
  hosts: all

  roles:
    - icinga.icinga2

Contributing

Contributing involves several steps, such as opening pull requests and implementing proper tests. Find a detailed step-by-step guide in CONTRIBUTING.md.

Testing

Testing is essential in our workflow to ensure good quality. We use Molecule to test all components of this role. For a detailed description see TESTING.md.

Release Notes

When releasing new versions we refer to SemVer 1.0.0 for version numbers. All steps required when creating a new release are described in RELEASE.md

See also CHANGELOG.md

Authors

AUTHORS is generated on each release.

License

This project is under the Apache License. See the LICENSE file for the full license text.


Download Details:

Author:  Icinga
Source Code: https://github.com/Icinga/ansible-icinga2

License: Apache-2.0 license

#ansible 

Fabiola Auma

1667991960

Ansible Playbooks: About Icinga2 Ansible Roles

About Icinga2 Ansible Roles

What is an Ansible role?

Ansible roles are pre-packaged units of automation. Once downloaded, roles can be dropped into Ansible playbooks and immediately applied to servers. For details, check doc/about.md.

Documentation

The documentation is located in the doc/ directory. This documentation, like Icinga2-ansible itself, is currently under development, and the information provided may change in the near future.

Support

Check the project website at https://www.icinga.com/ for status updates and https://icinga.com/support/ if you want to contact us.

Important

This project is no longer fully maintained.

We are focusing on the new "official" Ansible role at https://github.com/Icinga/ansible-icinga2. There is plenty of work ahead, and we encourage contributions.


Download Details:

Author: Icinga
Source Code: https://github.com/Icinga/ansible-playbooks

License: GPL-2.0 license

#ansible 

Fabiola Auma

1667980740

Docker Ansible: Ansible inside Docker Containers

Ansible

Ansible inside Docker, for consistent Ansible runs on your local machine or in a CI/CD system. You can view the CHANGELOG to see what has changed recently.

Current Ansible Versions

These are the latest Ansible versions running within the containers:

  • Ansible 2.9: 2.9.27
  • Ansible 2.10: 2.10.17
  • Ansible 2.11: 2.11.12
  • Ansible 2.12: 2.12.7
  • Ansible 2.13: 2.13.2

Supported tags and respective Dockerfile links

All installs include Mitogen mainly due to the performance improvements that Mitogen awards you. You can read more about it inside the Mitogen for Ansible documentation.

Immutable Images

There are a number of immutable images that are also being collected. To find a specific version of Ansible, look within the Docker Hub Tags. Each of the containers follows a similar naming pattern: Ansible version-base OS version.

Ansible 2.13

This includes ansible-core + ansible.

Ansible 2.12

This includes ansible-core + ansible.

Ansible 2.11

This includes ansible-core + ansible.

Ansible 2.10

This includes ansible-base.

Ansible 2.9

This runs the ansible package.

Using Mitogen

To leverage Mitogen to accelerate your playbook runs, add it to your ansible.cfg:

First, investigate the location of ansible_mitogen inside your container (it is different per container). You can do this via:

your_container="ansible:latest"
docker run --rm -it "willhallonline/${your_container}" /bin/sh -c 'find / -type d | grep "ansible_mitogen/plugins" | sort | head -n 1'

and then configure your own ansible.cfg like:

[defaults]
strategy_plugins = /usr/local/lib/python3.7/site-packages/ansible_mitogen/plugins/
strategy = mitogen_linear

Running

You will likely need to mount the required directories into your container to make it run (or build on top of what is here).

Simple

$~   docker run --rm -it willhallonline/ansible:latest /bin/sh

Mount local directory and ssh key

$~  docker run --rm -it -v $(pwd):/ansible -v ~/.ssh/id_rsa:/root/.ssh/id_rsa willhallonline/ansible:latest /bin/sh

Injecting commands

$~  docker run --rm -it -v $(pwd):/ansible -v ~/.ssh/id_rsa:/root/.ssh/id_rsa willhallonline/ansible:latest ansible-playbook playbook.yml

Bash Alias

You can put these inside your dotfiles (~/.bashrc or ~/.zshrc) to make handy aliases.

alias docker-ansible-cli='docker run --rm -it -v $(pwd):/ansible -v ~/.ssh/id_rsa:/root/.ssh/id_rsa --workdir=/ansible willhallonline/ansible:latest /bin/sh'
alias docker-ansible-cmd='docker run --rm -it -v $(pwd):/ansible -v ~/.ssh/id_rsa:/root/.ssh/id_rsa --workdir=/ansible willhallonline/ansible:latest '

use with:

$~  docker-ansible-cmd ansible-playbook playbook.yml

Maintainer


Download Details:

Author: willhallonline
Source Code: https://github.com/willhallonline/docker-ansible

License: MIT license

#ansible 

Fabiola Auma

1667973300

Community.vmware: Ansible Collection for VMware

Ansible Collection: community.vmware

This repo hosts the community.vmware Ansible Collection.

The collection includes the VMware modules and plugins supported by Ansible VMware community to help the management of VMware infrastructure.

Releases and maintenance

Release | Status | Expected end of life
3 | Maintained | Nov 2024
2 | Maintained (bug fixes only) | Nov 2023
1 | Unmaintained | Nov 2022

Ansible version compatibility

This collection has been tested against the following Ansible versions: >=2.13.0.

For collections that support Ansible 2.9, please ensure you update your network_os to use the fully qualified collection name (for example, cisco.ios.ios). Plugins and modules within a collection may be tested with only specific Ansible versions. A collection may contain metadata that identifies these versions. PEP440 is the schema used to describe the versions of Ansible.

Installation and Usage

Installing the Collection from Ansible Galaxy

Before using the VMware community collection, you need to install the collection with the ansible-galaxy CLI:

ansible-galaxy collection install community.vmware

You can also include it in a requirements.yml file and install it via ansible-galaxy collection install -r requirements.yml using the format:

collections:
- name: community.vmware
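If you need reproducible installs, requirements.yml also accepts a version constraint; a sketch (the version range shown is only an example, pick one that matches your needs):

```yaml
collections:
  - name: community.vmware
    version: ">=3.0.0,<4.0.0"
```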

Required Python libraries

The VMware community collection depends on Python 3.8+ and on the following third-party libraries:

Installing required libraries and SDK

Installing the collection does not install any of the required third-party Python libraries or SDKs. You need to install them using the following command:

pip install -r ~/.ansible/collections/ansible_collections/community/vmware/requirements.txt

If you are working on developing and/or testing the VMware community collection, you may want to install additional requirements using the following command:

pip install -r ~/.ansible/collections/ansible_collections/community/vmware/test-requirements.txt

Included content

Connection plugins

Name | Description
community.vmware.vmware_tools | Execute tasks inside a VM via VMware Tools

Httpapi plugins

Name | Description
community.vmware.vmware | HttpApi Plugin for VMware REST API

Inventory plugins

Name | Description
community.vmware.vmware_host_inventory | VMware ESXi hostsystem inventory source
community.vmware.vmware_vm_inventory | VMware Guest inventory source
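As a sketch of how one of these inventory plugins is wired up, a minimal vmware_vm_inventory source file might look like the following (hostname and credential values are placeholders):

```yaml
# vmware.yml - the inventory file name must end in vmware.yml or vmware.yaml
plugin: community.vmware.vmware_vm_inventory
hostname: vcenter.example.com
username: administrator@vsphere.local
password: "{{ vault_vcenter_password }}"
validate_certs: false
with_tags: true
```

You would then point Ansible at it with ansible-inventory -i vmware.yml --list.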

Modules

Name | Description
community.vmware.vcenter_domain_user_group_info | Gather user or group information of a domain
community.vmware.vcenter_extension | Register/deregister vCenter Extensions
community.vmware.vcenter_extension_info | Gather info vCenter extensions
community.vmware.vcenter_folder | Manage folders on given datacenter
community.vmware.vcenter_license | Manage VMware vCenter license keys
community.vmware.vcenter_standard_key_provider | Add, reconfigure or remove Standard Key Provider on vCenter server
community.vmware.vmware_about_info | Provides information about the VMware server to which the user is connecting
community.vmware.vmware_category | Manage VMware categories
community.vmware.vmware_category_info | Gather info about VMware tag categories
community.vmware.vmware_cfg_backup | Backup / Restore / Reset ESXi host configuration
community.vmware.vmware_cluster | Manage VMware vSphere clusters
community.vmware.vmware_cluster_dpm | Manage Distributed Power Management (DPM) on VMware vSphere clusters
community.vmware.vmware_cluster_drs | Manage Distributed Resource Scheduler (DRS) on VMware vSphere clusters
community.vmware.vmware_cluster_ha | Manage High Availability (HA) on VMware vSphere clusters
community.vmware.vmware_cluster_info | Gather info about clusters available in given vCenter
community.vmware.vmware_cluster_vcls | Override the default vCLS (vSphere Cluster Services) VM disk placement for this cluster
community.vmware.vmware_cluster_vsan | Manages virtual storage area network (vSAN) configuration on VMware vSphere clusters
community.vmware.vmware_content_deploy_ovf_template | Deploy Virtual Machine from OVF template stored in content library
community.vmware.vmware_content_deploy_template | Deploy Virtual Machine from template stored in content library
community.vmware.vmware_content_library_info | Gather information about VMware Content Library
community.vmware.vmware_content_library_manager | Create, update and delete VMware content library
community.vmware.vmware_datacenter | Manage VMware vSphere Datacenters
community.vmware.vmware_datacenter_info | Gather information about VMware vSphere Datacenters
community.vmware.vmware_datastore | Configure Datastores
community.vmware.vmware_datastore_cluster | Manage VMware vSphere datastore clusters
community.vmware.vmware_datastore_cluster_manager | Manage VMware vSphere datastore cluster's members
community.vmware.vmware_datastore_info | Gather info about datastores available in given vCenter
community.vmware.vmware_datastore_maintenancemode | Place a datastore into maintenance mode
community.vmware.vmware_deploy_ovf | Deploys a VMware virtual machine from an OVF or OVA file
community.vmware.vmware_drs_group | Creates vm/host group in a given cluster
community.vmware.vmware_drs_group_info | Gathers info about DRS VM/Host groups on the given cluster
community.vmware.vmware_drs_group_manager | Manage VMs and Hosts in DRS group
community.vmware.vmware_drs_rule_info | Gathers info about DRS rule on the given cluster
community.vmware.vmware_dvs_host | Add or remove a host from distributed virtual switch
community.vmware.vmware_dvs_portgroup | Create or remove a Distributed vSwitch portgroup
community.vmware.vmware_dvs_portgroup_find | Find portgroup(s) in a VMware environment
community.vmware.vmware_dvs_portgroup_info | Gathers info DVS portgroup configurations
community.vmware.vmware_dvswitch | Create or remove a Distributed Switch
community.vmware.vmware_dvswitch_info | Gathers info dvswitch configurations
community.vmware.vmware_dvswitch_lacp | Manage LACP configuration on a Distributed Switch
community.vmware.vmware_dvswitch_nioc | Manage distributed switch Network IO Control
community.vmware.vmware_dvswitch_pvlans | Manage Private VLAN configuration of a Distributed Switch
community.vmware.vmware_dvswitch_uplink_pg | Manage uplink portgroup configuration of a Distributed Switch
community.vmware.vmware_evc_mode | Enable/Disable EVC mode on vCenter
community.vmware.vmware_export_ovf | Exports a VMware virtual machine to an OVF file, device files and a manifest file
community.vmware.vmware_first_class_disk | Manage VMware vSphere First Class Disks
community.vmware.vmware_folder_info | Provides information about folders in a datacenter
community.vmware.vmware_guest | Manages virtual machines in vCenter
community.vmware.vmware_guest_boot_info | Gather info about boot options for the given virtual machine
community.vmware.vmware_guest_boot_manager | Manage boot options for the given virtual machine
community.vmware.vmware_guest_controller | Manage disk or USB controllers related to virtual machine in given vCenter infrastructure
community.vmware.vmware_guest_cross_vc_clone | Cross-vCenter VM/template clone
community.vmware.vmware_guest_custom_attribute_defs | Manage custom attributes definitions for virtual machine from VMware
community.vmware.vmware_guest_custom_attributes | Manage custom attributes from VMware for the given virtual machine
community.vmware.vmware_guest_customization_info | Gather info about VM customization specifications
community.vmware.vmware_guest_disk | Manage disks related to virtual machine in given vCenter infrastructure
community.vmware.vmware_guest_disk_info | Gather info about disks of given virtual machine
community.vmware.vmware_guest_file_operation | Files operation in a VMware guest operating system without network
community.vmware.vmware_guest_find | Find the folder path(s) for a virtual machine by name or UUID
community.vmware.vmware_guest_info | Gather info about a single VM
community.vmware.vmware_guest_instant_clone | Instant Clone VM
community.vmware.vmware_guest_move | Moves virtual machines in vCenter
community.vmware.vmware_guest_network | Manage network adapters of specified virtual machine in given vCenter infrastructure
community.vmware.vmware_guest_powerstate | Manages power states of virtual machines in vCenter
community.vmware.vmware_guest_register_operation | VM inventory registration operation
community.vmware.vmware_guest_screenshot | Create a screenshot of the Virtual Machine console
community.vmware.vmware_guest_sendkey | Send USB HID codes to the Virtual Machine's keyboard
community.vmware.vmware_guest_serial_port | Manage serial ports on an existing VM
community.vmware.vmware_guest_snapshot | Manages virtual machines snapshots in vCenter
community.vmware.vmware_guest_snapshot_info | Gather info about virtual machine's snapshots in vCenter
community.vmware.vmware_guest_storage_policy | Set VM Home and disk(s) storage policy profiles
community.vmware.vmware_guest_tools_info | Gather info about VMware tools installed in VM
community.vmware.vmware_guest_tools_upgrade | Module to upgrade VMTools
community.vmware.vmware_guest_tools_wait | Wait for VMware tools to become available
community.vmware.vmware_guest_tpm | Add or remove vTPM device for specified VM
community.vmware.vmware_guest_vgpu | Modify vGPU video card profile of the specified virtual machine in the given vCenter infrastructure
community.vmware.vmware_guest_video | Modify video card configurations of specified virtual machine in given vCenter infrastructure
community.vmware.vmware_host | Add, remove, or move an ESXi host to, from, or within vCenter
community.vmware.vmware_host_acceptance | Manage the host acceptance level of an ESXi host
community.vmware.vmware_host_active_directory | Joins an ESXi host system to an Active Directory domain or leaves it
community.vmware.vmware_host_auto_start | Manage the auto power ON or OFF for vm on ESXi host
community.vmware.vmware_host_capability_info | Gathers info about an ESXi host's capability information
community.vmware.vmware_host_config_info | Gathers info about an ESXi host's advanced configuration information
community.vmware.vmware_host_config_manager | Manage advanced system settings of an ESXi host
community.vmware.vmware_host_custom_attributes | Manage custom attributes from VMware for the given ESXi host
community.vmware.vmware_host_datastore | Manage a datastore on ESXi host
community.vmware.vmware_host_disk_info | Gathers information about disks attached to given ESXi host(s)
community.vmware.vmware_host_dns | Manage DNS configuration of an ESXi host system
community.vmware.vmware_host_dns_info | Gathers info about an ESXi host's DNS configuration information
community.vmware.vmware_host_facts | Gathers facts about remote ESXi hostsystem
community.vmware.vmware_host_feature_info | Gathers info about an ESXi host's feature capability information
community.vmware.vmware_host_firewall_info | Gathers info about an ESXi host's firewall configuration information
community.vmware.vmware_host_firewall_manager | Manage firewall configurations about an ESXi host
community.vmware.vmware_host_hyperthreading | Enables/Disables Hyperthreading optimization for an ESXi host system
community.vmware.vmware_host_ipv6 | Enables/Disables IPv6 support for an ESXi host system
community.vmware.vmware_host_iscsi | Manage the iSCSI configuration of ESXi host
community.vmware.vmware_host_iscsi_info | Gather iSCSI configuration information of ESXi host
community.vmware.vmware_host_kernel_manager | Manage kernel module options on ESXi hosts
community.vmware.vmware_host_lockdown | Manage administrator permission for the local administrative account for the ESXi host
community.vmware.vmware_host_lockdown_exceptions | Manage Lockdown Mode Exception Users
community.vmware.vmware_host_logbundle | Fetch logbundle file from ESXi
community.vmware.vmware_host_logbundle_info | Gathers manifest info for logbundle
community.vmware.vmware_host_ntp | Manage NTP server configuration of an ESXi host
community.vmware.vmware_host_ntp_info | Gathers info about NTP configuration on an ESXi host
community.vmware.vmware_host_package_info | Gathers info about available packages on an ESXi host
community.vmware.vmware_host_passthrough | Manage PCI device passthrough settings on host
community.vmware.vmware_host_powermgmt_policy | Manages the Power Management Policy of an ESXi host system
community.vmware.vmware_host_powerstate | Manages power states of host systems in vCenter
community.vmware.vmware_host_scanhba | Rescan host HBAs and optionally refresh the storage system
community.vmware.vmware_host_scsidisk_info | Gather information about SCSI disk attached to the given ESXi
community.vmware.vmware_host_service_info | Gathers info about an ESXi host's services
community.vmware.vmware_host_service_manager | Manage services on a given ESXi host
community.vmware.vmware_host_snmp | Configures SNMP on an ESXi host system
community.vmware.vmware_host_sriov | Manage SR-IOV settings on host
community.vmware.vmware_host_ssl_info | Gather info of ESXi host system about SSL
community.vmware.vmware_host_tcpip_stacks | Manage the TCP/IP Stacks configuration of ESXi host
community.vmware.vmware_host_user_manager | Manage users of ESXi
community.vmware.vmware_host_vmhba_info | Gathers info about vmhbas available on the given ESXi host
community.vmware.vmware_host_vmnic_info | Gathers info about vmnics available on the given ESXi host
community.vmware.vmware_local_role_info | Gather info about local roles on an ESXi host
community.vmware.vmware_local_role_manager | Manage local roles on an ESXi host
community.vmware.vmware_local_user_info | Gather info about users on the given ESXi host
community.vmware.vmware_local_user_manager | Manage local users on an ESXi host
community.vmware.vmware_maintenancemode | Place a host into maintenance mode
community.vmware.vmware_migrate_vmk | Migrate a VMK interface from VSS to VDS
community.vmware.vmware_object_custom_attributes_info | Gather custom attributes of an object
community.vmware.vmware_object_rename | Renames VMware objects
community.vmware.vmware_object_role_permission | Manage local roles on an ESXi host
community.vmware.vmware_object_role_permission_info | Gather information about object's permissions
community.vmware.vmware_portgroup | Create a VMware portgroup
community.vmware.vmware_portgroup_info | Gathers info about an ESXi host's Port Group configuration
community.vmware.vmware_recommended_datastore | Returns the recommended datastore from a SDRS-enabled datastore cluster
community.vmware.vmware_resource_pool | Add/remove resource pools to/from vCenter
community.vmware.vmware_resource_pool_info | Gathers info about resource pool information
community.vmware.vmware_tag | Manage VMware tags
community.vmware.vmware_tag_info | Manage VMware tag info
community.vmware.vmware_tag_manager | Manage association of VMware tags with VMware objects
community.vmware.vmware_target_canonical_info | Return canonical (NAA) from an ESXi host system
community.vmware.vmware_vc_infraprofile_info | List and Export VMware vCenter infra profile configs
community.vmware.vmware_vcenter_settings | Configures general settings on a vCenter server
community.vmware.vmware_vcenter_settings_info | Gather info vCenter settings
community.vmware.vmware_vcenter_statistics | Configures statistics on a vCenter server
community.vmware.vmware_vm_config_option | Return supported guest ID list and VM recommended config option for specific guest OS
community.vmware.vmware_vm_host_drs_rule | Creates vm/host group in a given cluster
community.vmware.vmware_vm_info | Return basic info pertaining to a VMware machine guest
community.vmware.vmware_vm_shell | Run commands in a VMware guest operating system
community.vmware.vmware_vm_storage_policy | Create vSphere storage policies
community.vmware.vmware_vm_storage_policy_info | Gather information about vSphere storage profile defined storage policy information
community.vmware.vmware_vm_vm_drs_rule | Configure VMware DRS Affinity rule for virtual machines in the given cluster
community.vmware.vmware_vm_vss_dvs_migrate | Migrates a virtual machine from a standard vswitch to distributed
community.vmware.vmware_vmkernel | Manages a VMware VMkernel Adapter of an ESXi host
community.vmware.vmware_vmkernel_info | Gathers VMKernel info about an ESXi host
community.vmware.vmware_vmotion | Move a virtual machine using vMotion, and/or its vmdks using storage vMotion
community.vmware.vmware_vsan_cluster | Configure VSAN clustering on an ESXi host
community.vmware.vmware_vsan_health_info | Gather information about a VMware vSAN cluster's health
community.vmware.vmware_vspan_session | Create or remove a Port Mirroring session
community.vmware.vmware_vswitch | Manage a VMware Standard Switch on an ESXi host
community.vmware.vmware_vswitch_info | Gathers info about an ESXi host's vswitch configurations
community.vmware.vsphere_copy | Copy a file to a VMware datastore
community.vmware.vsphere_file | Manage files on a vCenter datastore
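To illustrate how these modules are typically used, here is a hedged sketch of a play that deploys a VM from a template with community.vmware.vmware_guest (the hostname, credential variables, and object names are placeholders, and the parameters shown are a minimal subset):

```yaml
- name: Deploy a VM from a template
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Clone template into a new virtual machine
      community.vmware.vmware_guest:
        hostname: vcenter.example.com
        username: administrator@vsphere.local
        password: "{{ vault_vcenter_password }}"
        validate_certs: false
        datacenter: DC1
        folder: /DC1/vm
        name: web01
        template: ubuntu-20.04-template
        state: poweredon
      delegate_to: localhost
```

The modules talk to the vCenter API rather than to the managed host over SSH, which is why the task is delegated to localhost.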

Testing and Development

If you want to develop new content for this collection or improve what is already here, the easiest way to work on the collection is to clone it into one of the configured COLLECTIONS_PATHS, and work on it there.

Testing with ansible-test

Refer to the testing documentation for more information.

Updating documentation

ansible-playbook tools/update_documentation.yml

Publishing New Version

Assuming your (local) repository has set origin to your GitHub fork and this repository is added as upstream:

Prepare the release:

  • Make sure your fork is up to date: git checkout main && git pull && git fetch upstream && git merge upstream/main.
  • Run ansible-playbook tools/prepare_release.yml. The playbook tries to generate the next minor release automatically, but you can also set the version explicitly with --extra-vars "version=$VERSION". You will have to set the version explicitly when publishing a new major release.
  • Push the created release branch to your GitHub repo (git push --set-upstream origin prepare_$VERSION_release) and open a PR for review.

Push the release:

  • After the PR has been merged, make sure your fork is up to date: git checkout main && git pull && git fetch upstream && git merge upstream/main.
  • Tag the release: git tag -s $VERSION
  • Push the tag: git push upstream $VERSION

Revert the version in galaxy.yml back to null:

  • Make sure your fork is up to date: git checkout main && git pull && git fetch upstream && git merge upstream/main.
  • Run ansible-playbook tools/unset_version.yml.
  • Push the created branch to your GitHub repo (git push --set-upstream origin unset_version_$VERSION) and open a PR for review.

Communication

We have a dedicated Working Group for VMware. You can find other people interested in this in the #ansible-vmware channel on libera.chat IRC. For more information about communities, meetings and agendas see https://github.com/ansible/community/wiki/VMware.


Download Details:

Author: ansible-collections
Source Code: https://github.com/ansible-collections/community.vmware

License: GPL-3.0 license

#ansible 

Fabiola Auma

1667965920

Ansible Service Broker: Implementation Of The Open Service Broker API

Ansible Service Broker

Ansible Service Broker is an implementation of the Open Service Broker API that manages applications defined in Ansible Playbook Bundles. Ansible Playbook Bundles (APBs) are a method of defining applications as a collection of Ansible playbooks built into a container with an Ansible runtime, where each playbook corresponds to a type of request specified in the Open Service Broker API Specification.
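For flavour, an APB carries its metadata in an apb.yml file; the heavily simplified sketch below uses field names from the APB spec with hypothetical values:

```yaml
version: 1.0
name: hello-world-apb
description: A sample Ansible Playbook Bundle
bindable: false
async: optional
plans:
  - name: default
    description: Default plan
    free: true
    parameters: []
```

The broker reads this metadata to advertise the bundle's plans through the service catalog.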

Check out the Keynote Demo from Red Hat Summit 2017

Features

Learn More:

Important Links

Getting Started on Kubernetes

Minikube makes it easy to get started with Kubernetes. Run the commands below individually or as a script to start a minikube VM that includes the service catalog and the broker. If you already have a Kubernetes cluster, skip the minikube command and proceed with the remaining ones as applicable.

Prerequisites:

Install

Run the following from the root of the cloned git repository.

#!/bin/env bash

# Adjust the version to your liking. Follow installation docs
# at https://github.com/kubernetes/minikube.
minikube start --bootstrapper kubeadm --kubernetes-version v1.9.4

# Install helm and tiller. See documentation for obtaining the helm
# binary. https://docs.helm.sh/using_helm/#install-helm
helm init

# Wait until tiller is ready before moving on
until kubectl get pods -n kube-system -l name=tiller | grep 1/1; do sleep 1; done

kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default

# Adds the chart repository for the service catalog
helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com

# Installs the service catalog
helm install svc-cat/catalog --name catalog --namespace catalog

# Wait until the catalog is ready before moving on
until kubectl get pods -n catalog -l app=catalog-catalog-apiserver | grep 2/2; do sleep 1; done
until kubectl get pods -n catalog -l app=catalog-catalog-controller-manager | grep 1/1; do sleep 1; done

./scripts/run_latest_k8s_build.sh

Use

Once everything is installed, you can interact with the service catalog using the svcat command. Learn how to install and use it here.

Getting Started on OpenShift

There are a few different ways to quickly get up and running with a cluster + ansible-service-broker:

Let's walk through an oc cluster up based setup.

Prerequisites

You will need a system set up for local OpenShift Origin cluster management.

  • Your OpenShift Client binary (oc) must be >= v3.7.0-rc.0

If you are using minishift you should look at the minishift documentation to get the ansible service broker deployed and running.

Deploy a v3.10+ Openshift Origin Cluster with the Ansible Service Broker

Watch the full asciicast

  • Starting with Origin v3.10 it's as simple as running oc cluster up --enable=service-catalog,automation-service-broker.
    • Running oc cluster up --enable will give you a full list of features. You may find it helpful to also add persistent-volumes, registry, rhel-imagestreams, router, etc.
    • You might also want to add a public-hostname and routing-suffix to make it easier to access your provisioned applications as well.
    • Complete example would look like oc cluster up --routing-suffix=172.17.0.1.nip.io --public-hostname=172.17.0.1.nip.io --enable=service-catalog,router,registry,web-console,persistent-volumes,rhel-imagestreams,automation-service-broker
    • An in depth demo is available at https://youtu.be/IY1RINVsO40

Deploy a Pre v3.10 OpenShift Origin Cluster with the Ansible Service Broker

Watch the full asciicast

Download and execute our run_latest_build.sh script

Origin Version 3.7:

wget https://raw.githubusercontent.com/openshift/ansible-service-broker/master/scripts/run_latest_build.sh
chmod +x run_latest_build.sh
./run_latest_build.sh

At this point you should have a running cluster with the service-catalog and the Ansible Service Broker running.

Provision an instance of MediaWiki and PostgreSQL

  1. Log into OpenShift Web Console
  2. Create a new project 'apb-demo'
  3. Provision MediaWiki APB
    • Select the 'apb-demo' project
    • Enter a 'MediaWiki Admin User Password': 's3curepw'
    • Click 'Create'
  4. Provision PostgreSQL APB
    • Select the 'apb-demo' project
    • Leave 'PostgreSQL Password' blank, a random password will be generated
    • Choose a 'PostgreSQL Version'; either version will work.
    • Click 'Next'
    • Select 'Do not bind at this time' and then 'Create'
  5. Wait until both APBs have finished deploying, and you see pods running for MediaWiki and PostgreSQL

Bind MediaWiki to PostgreSQL

  1. Bind MediaWiki to PostgreSQL
    • Click on kebab menu for PostgreSQL
    • Select 'Create Binding' and then 'Bind'
    • Click on the link to the created secret
    • Click 'Add to Application'
    • Select 'mediawiki123' and 'Environment variables'
    • Click 'Save'
  2. View the route for MediaWiki and verify the wiki is up and running.
    • Observe that mediawiki123 is on deployment '#2', having been automatically redeployed

Versioning

Our release versions align with openshift/origin. For more detailed information see our version document.

Release Dates

Kubernetes | OpenShift | Ansible Service Broker | Feature Freeze | Release Date
1.7 | 3.7 | release-1.0 | 2017/9/4 | 2017/11/16
1.9 | 3.9 | release-1.1 | 2018/1/4 | 2018/3/28
1.10 | 3.10 | release-1.2 | 2018/4/4 | 2018/7/4*
1.11 | 3.11 | release-1.3 | 2018/7/4* | 2018/10/4*
1.12 | 4.0 | release-1.4 | 2018/10/4* | 2019/1/4*

Compatibility

APB Compatibility Matrix

ansible-service-broker | APB runtime 1 | APB runtime 2
ansible-service-broker release-1.0, v3.7 | ✓ | X
ansible-service-broker release-1.1, v3.9 | ✓ | ✓
ansible-service-broker HEAD | ✓ | ✓

Key:

  • ✓ Supported.
  • X Will not work. Not supported.

Ansible Playbook Bundle images are built on the apb-base image. Starting with apb-base 1.1, a new APB runtime was introduced and captured in the label com.redhat.apb.runtime. Currently, there are two APB runtime versions:

  • APB runtime 1 - all APBs tagged release-1.0 as well as APBs with no "com.redhat.apb.runtime" label.
  • APB runtime 2 - all APBs tagged release-1.1 as well as APBs with label "com.redhat.apb.runtime"="2".

You can examine the runtime of a particular APB with docker inspect $APB --format "{{ index .Config.Labels \"com.redhat.apb.runtime\" }}". An APB without a "com.redhat.apb.runtime" label is APB runtime 1. For example:

$ docker inspect docker.io/ansibleplaybookbundle/mediawiki-apb:latest --format "{{ index .Config.Labels \"com.redhat.apb.runtime\" }}"
2

# No label on release-1.0
$ docker inspect docker.io/ansibleplaybookbundle/mediawiki-apb:release-1.0 --format "{{ index .Config.Labels \"com.redhat.apb.runtime\" }}"

Contributing

First, start with the Contributing Guide.

Contributions are welcome. Open issues for any bugs or problems you may run into, ask us questions on IRC (Freenode): #asbroker, or see what we are working on at our Trello Board.

If you want to run the test suite, when you are ready to submit a PR for example, make sure you have your development environment setup, and from the root of the project run:

# Check your go source files (gofmt, go vet, golint), build the broker, and run unit tests
make check

# Get helpful information about our make targets
make help

Download Details:

Author: openshift
Source Code: https://github.com/openshift/ansible-service-broker

License: Apache-2.0 license

#ansible 

Fabiola Auma

1667958240

Shiva: An Ansible Playbook to Provision A Host

What is Shiva?

Shiva is an Ansible playbook to provision a host to be used for playing CTF games, such as HackTheBox.

Quick start

  1. Create an Ubuntu 18.04 server host and ensure you have root access via SSH
  2. Install Ansible on your local machine
  3. Clone the repository to your local machine: git clone git@github.com:rastating/shiva.git
  4. Replace 127.0.0.1 with the IP address of the host to provision in the ubuntu_bionic section of inventory.ini
  5. Run the playbook: ansible-playbook -i inventory.ini -u root -l ubuntu_bionic playbook.yml
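Following step 4, the relevant part of inventory.ini would then look something like this (the IP address is a placeholder for your own host):

```ini
[ubuntu_bionic]
203.0.113.10
```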

Why Shiva and not another Hindu deity?

When playing CTFs, I prefer to use cloud providers (such as Digital Ocean) rather than a local virtual machine running Kali. Although Kali is a great system, I find myself using only a small subset of the available tools and frequently find myself spinning up a cloud instance for persistence purposes anyway.

For this reason, I put together Shiva to automate building hosts in the cloud for pentesting / CTF purposes with my preferred environment. It's not a replacement for distros such as Kali and Parrot, but a way to build a more concise environment for similar purposes.

What operating systems can Shiva be used with?

Currently, Shiva has only been tested against Ubuntu 18.04.

What tools / packages are included?

Name | Category | Home Page
binwalk | Binary Analysis | https://github.com/ReFirmLabs/binwalk
GDB | Binary Analysis | https://www.gnu.org/software/gdb/
nasm | Binary Analysis | https://www.nasm.us/
PEDA | Binary Analysis | https://github.com/longld/peda
pwntools | Binary Analysis | https://github.com/Gallopsled/pwntools
Radare2 | Binary Analysis | https://rada.re/r/
Ropper | Binary Analysis | https://scoding.de/ropper/
FCrackZip | Cracking | http://oldhome.schmorp.de/marc/fcrackzip.html
hashcat | Cracking | https://hashcat.net/hashcat/
John The Ripper | Cracking | https://www.openwall.com/john/
Hash Identifier | Crypto | https://code.google.com/archive/p/hash-identifier/
xortool | Crypto | https://github.com/hellman/xortool
Go | Environment | https://golang.org/
Node.js | Environment | https://nodejs.org/en/
Oh My ZSH | Environment | https://github.com/robbyrussell/oh-my-zsh
Ruby | Environment | https://www.ruby-lang.org
Empire | Exploitation | http://www.powershellempire.com/
Metasploit | Exploitation | https://www.metasploit.com/
PowerSploit | Exploitation | https://github.com/PowerShellMafia/PowerSploit
SearchSploit | Exploitation | https://www.exploit-db.com/
SuperTTY | Exploitation | https://github.com/bad-hombres/supertty
Hydra | Password Attacks | https://github.com/vanhauser-thc/thc-hydra
Medusa | Password Attacks | https://github.com/jmk-foofus/medusa
Ncrack | Password Attacks | https://nmap.org/ncrack/
SecLists | Password Attacks | https://github.com/danielmiessler/SecLists
CrackMapExec | Recon | https://github.com/byt3bl33d3r/CrackMapExec
dnmasscan | Recon | https://github.com/rastating/dnmasscan
DNSRecon | Recon | https://github.com/darkoperator/dnsrecon
HostileSubBruteforcer | Recon | https://github.com/nahamsec/HostileSubBruteforcer
LinEnum | Recon | https://github.com/rebootuser/LinEnum
Masscan | Recon | https://github.com/robertdavidgraham/masscan
Nmap | Recon | https://nmap.org/
pspy | Recon | https://github.com/DominicBreuker/pspy
Recon-ng | Recon | https://bitbucket.org/LaNMaSteR53/recon-ng/src
Responder | Recon | https://github.com/SpiderLabs/Responder
Sherlock | Recon | https://github.com/sherlock-project/sherlock
Snmpcheck | Recon | http://www.nothink.org/codes/snmpcheck
sslscan | Recon | https://github.com/rbsec/sslscan
S3Scanner | Recon | https://github.com/sa7mon/S3Scanner
theHarvester | Recon | https://github.com/laramies/theHarvester
tshark | Recon | https://www.wireshark.org/
Apache | Services | https://httpd.apache.org/
PostgreSQL | Services | https://www.postgresql.org/
vsftpd | Services | https://security.appspot.com/vsftpd.html
MS-SQL CLI | Tools | https://docs.microsoft.com/en-us/sql/tools/mssql-cli
OpenVPN | Tools | https://openvpn.net/
smbclient | Tools |
Socat | Tools |
Cookie Monster | Web | https://github.com/DigitalInterruption/cookie-monster
Dirb | Web | http://dirb.sourceforge.net/
EyeWitness | Web | https://github.com/FortyNorthSecurity/EyeWitness
Gobuster | Web | https://github.com/OJ/gobuster
Magescan | Web | https://github.com/steverobbins/magescan
Nikto | Web | https://cirt.net/Nikto2
Shocker | Web | https://github.com/nccgroup/shocker
sqlmap | Web | http://sqlmap.org/
wafw00f | Web | https://github.com/EnableSecurity/wafw00f
WhatWeb | Web | https://github.com/urbanadventurer/WhatWeb
wfuzz | Web | https://github.com/xmendez/wfuzz
WPScan | Web | https://wpscan.org/
WPXF | Web | https://github.com/rastating/wordpress-exploit-framework

Several directories can also be found which include pre-compiled binaries and files to aid with exploitation and post-exploitation:

Path | Description
/usr/share/linux-binaries | Pre-compiled Linux binaries for post-exploitation (such as pspy)
/usr/share/webshells | Web shells written in several languages
/usr/share/windows-binaries | Pre-compiled Windows binaries for post-exploitation (such as Mimikatz)
/usr/share/wordlists | Wordlists to be used with password attacks / enumeration

What services does Shiva expose out of the box?

None, other than SSH. Apache, PostgreSQL and vsftpd are all installed, but their ports are not open to the public by default.

If you want to lock down where SSH is available out of the box, you can run the playbook with the --extra-vars switch to specify the trusted_ssh_ip variable.

For example, running the playbook with ansible-playbook -i inventory.ini -u root -l ubuntu_bionic --extra-vars "trusted_ssh_ip=10.8.0.1" playbook.yml would add a firewall rule that would only allow 10.8.0.1 to connect to port 22 and drop traffic from any other IP address.

Be cautious when doing this: a typo could lock you out!

Does Shiva create any user accounts?

Yes - an account named ftp is created without a default password. This is for use with vsftpd (see next section on connecting to vsftpd) but cannot be used to access the server via SSH.

How do I connect to vsftpd?

As the firewall does not expose vsftpd out of the box, you will need to open the following ports:

  • 21
  • 40000-50000

Only one user is authorised to access the FTP server out of the box (aptly named ftp). Before this user can authenticate, a password must be created for the account by running passwd ftp as root.

Note: the ftp user account is explicitly prohibited from logging into the server via SSH

If you want to allow other local user accounts to authenticate, you must:

  • Create a directory owned by root at: /srv/ftp/users/$USER
  • Create a directory owned by the user at /srv/ftp/users/$USER/files
  • Add the user's name to /etc/vsftpd.userlist
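As a sketch, those three steps might look like the following (the account name alice is hypothetical, and a scratch directory stands in for /srv/ftp and /etc/vsftpd.userlist so the commands can run without root; on a real host you would also chown the directories as described above):

```shell
# Scratch stand-ins for /srv/ftp and /etc/vsftpd.userlist (real paths need root).
FTP_ROOT=$(mktemp -d)
USERLIST="$FTP_ROOT/vsftpd.userlist"
FTP_USER=alice  # hypothetical local account

# 1. Directory for the user (owned by root on a real host: chown root:root).
mkdir -p "$FTP_ROOT/users/$FTP_USER"

# 2. Writable files directory (owned by the user on a real host: chown $FTP_USER).
mkdir -p "$FTP_ROOT/users/$FTP_USER/files"

# 3. Authorise the account with vsftpd.
echo "$FTP_USER" >> "$USERLIST"
```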

What aliases are available?

serve-this

An alias that serves the current working directory using the Python SimpleHTTPServer module.

Example:

# Serve /tmp/shiva on port 9090
cd /tmp/shiva
serve-this 9090

msfconsole

An alias that starts the postgresql service before launching the standard msfconsole binary, allowing Metasploit to access the database.

Note: the postgresql service is not automatically stopped after msfconsole is stopped

masscan_port_list

Parse the output of masscan into a CSV of unique port numbers.
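For illustration, a pipeline of this shape (not necessarily the exact alias definition, which lives in the Shiva repository) turns masscan's "Discovered open port" lines into a CSV of unique ports:

```shell
# Sample masscan output (hypothetical hosts) to demonstrate the idea.
cat <<'EOF' > /tmp/masscan_sample.txt
Discovered open port 22/tcp on 10.0.0.5
Discovered open port 80/tcp on 10.0.0.6
Discovered open port 22/tcp on 10.0.0.7
EOF

# Extract the port numbers, de-duplicate, and join them with commas.
grep -oE 'port [0-9]+' /tmp/masscan_sample.txt |
  awk '{print $2}' | sort -un | paste -sd, -
# → 22,80
```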

masscan_ip_list

Parse the output of masscan to generate a list of unique IP addresses.

extract_unique_domains_from_dnsrecon_json

Parse a JSON file created by dnsrecon to extract the unique domain names found.

Example to extract all subdomains found that belong to google.com: extract_unique_domains_from_dnsrecon_json /path/to/dnsrecon.json google.com
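As a rough sketch of the idea (the real alias parses dnsrecon's JSON output; here a minimal hypothetical sample and a grep/cut pipeline stand in):

```shell
# Minimal hypothetical dnsrecon-style JSON sample.
cat <<'EOF' > /tmp/dnsrecon_sample.json
[{"name": "mail.example.com", "type": "A"},
 {"name": "www.example.com", "type": "A"},
 {"name": "mail.example.com", "type": "AAAA"}]
EOF

# Pull out every "name" value ending in the target domain, de-duplicated.
grep -oE '"name": "[^"]*example\.com"' /tmp/dnsrecon_sample.json |
  cut -d'"' -f4 | sort -u
```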

Roadmap

There are three things I'd like to push with this going forward:

  • Increase the tool set (with useful tools, not just pushing up the count with useless stuff)
  • Setup Travis to add testing against the GitHub repository
  • Test against systems other than Ubuntu 18.04 and make adjustments to allow for a more robust list of base systems

If you can help with any of these and [more importantly] would like to - please feel free to submit pull requests or open issues with information!


Download Details:

Author: rastating
Source Code: https://github.com/rastating/shiva

License: GPL-3.0 license

#ansible 

Shiva: An Ansible Playbook to Provision A Host
Fabiola  Auma

Fabiola Auma

1667951040

Ansible Aur: Ansible Module to Manage Packages From The AUR

Ansible Collection - kewlfft.aur

Description

This collection includes an Ansible module to manage packages from the AUR.

Installation

Install the kewlfft.aur collection from Ansible Galaxy

To install this collection from Ansible Galaxy, run the following command:

ansible-galaxy collection install kewlfft.aur

Alternatively, you can include the collection in a requirements.yml file and then run ansible-galaxy collection install -r requirements.yml. Here is an example of requirements.yml file:

collections:
  - name: kewlfft.aur

Install the kewlfft.aur collection from the AUR

The kewlfft.aur collection is also available in the AUR as the ansible-collection-kewlfft-aur package.

Install the kewlfft.aur collection locally for development

If you want to test changes to the source code, run the following commands from the root of this git repository to locally build and install the collection:

ansible-galaxy collection build --force
ansible-galaxy collection install --force "./kewlfft-aur-$(cat galaxy.yml | grep version: | awk '{print $2}').tar.gz"

Install the aur module as a local custom module

Alternatively, you may manually install the aur module itself as a local custom module instead of installing it through the kewlfft.aur Ansible collection. However, it is recommended to use the kewlfft.aur collection unless you have a good reason not to. Here are the commands to install the aur module as a local custom module:

# Create the user custom module directory
mkdir ~/.ansible/plugins/modules

# Install the aur module into the user custom module directory
curl -o ~/.ansible/plugins/modules/aur.py https://raw.githubusercontent.com/kewlfft/ansible-aur/master/plugins/modules/aur.py

kewlfft.aur.aur Module

Ansible module to use some Arch User Repository (AUR) helpers as well as makepkg. The following helpers are supported and automatically selected, if present, in the order listed below:

  • yay
  • paru
  • pacaur
  • trizen
  • pikaur
  • aurman

makepkg will be used if no helper was found or if it is explicitly specified.

Options

Parameter | Choices/Default | Comments
name | | Name or list of names of the package(s) to install or upgrade.
state | present, latest | Desired state of the package; 'present' skips operations if the package is already installed.
upgrade | yes, no | Whether or not to upgrade the whole system.
update_cache | yes, no | Whether or not to refresh the package cache.
use | auto, yay, paru, pacaur, trizen, pikaur, aurman, makepkg | The tool to use; 'auto' uses the first known helper found and makepkg as a fallback.
extra_args | null | A list of additional arguments to pass directly to the tool. Cannot be used in 'auto' mode.
aur_only | yes, no | Limit helper operation to the AUR.
local_pkgbuild | Local directory with PKGBUILD, null | Only valid with makepkg or pikaur. Don't download the package from the AUR; build it using a local PKGBUILD and the other build files.
skip_pgp_check | yes, no | Only valid with makepkg. Skip PGP signature verification of the source file; useful when installing packages without GnuPG properly configured.
ignore_arch | yes, no | Only valid with makepkg. Ignore a missing or incomplete arch field; useful when the PKGBUILD does not have the arch=('yourarch') field.

Note

  • Either name or upgrade is required, both cannot be used together.
  • In the use=auto mode, makepkg is used as a fallback if no known helper is found.

Usage

Notes

  • The scope of this module is installation and update from the AUR; for package removal or for updates from the repositories, it is recommended to use the official pacman module.
  • The --needed parameter of the helper is always used, meaning that if a package is up to date it is not built and reinstalled.

Create the "aur_builder" user

While Ansible expects to be able to SSH as root, makepkg and the AUR helpers do not allow operations to be executed as root; they fail with "you cannot perform this operation as root". It is therefore recommended to create a non-root user that can run pacman via sudo without a password; let's call it aur_builder.

This user can be created in an Ansible task with the following actions:

- name: Create the `aur_builder` user
  become: yes
  ansible.builtin.user:
    name: aur_builder
    create_home: yes
    group: wheel

- name: Allow the `aur_builder` user to run `sudo pacman` without a password
  become: yes
  ansible.builtin.lineinfile:
    path: /etc/sudoers.d/11-install-aur_builder
    line: 'aur_builder ALL=(ALL) NOPASSWD: /usr/bin/pacman'
    create: yes
    validate: 'visudo -cf %s'

Fully Qualified Collection Names (FQCNs)

In order to use an Ansible module that is distributed in a collection, you must use its FQCN. This corresponds to "the full definition of a module, plugin, or role hosted within a collection, in the form namespace.collection.content_name" (Source). In this case, the aur module resides in the aur collection which is under the kewlfft namespace, so its FQCN is kewlfft.aur.aur.

Please note that this does not apply if you installed the aur module as a local custom module. Due to the nature of local custom modules, you can simply use the module's short name: aur.

Examples

Use it in a task, as in the following examples:

# This task uses the module's short name instead of its FQCN (Fully Qualified Collection Name).
# Use the short name if the module was installed as a local custom module.
# Otherwise, if it was installed through the `kewlfft.aur` collection, this task will fail.
- name: Install trizen using makepkg if it isn't installed already
  aur:
    name: trizen
    use: makepkg
    state: present
  become: yes
  become_user: aur_builder

# This task uses the `aur` module's FQCN.
- name: Install trizen using makepkg if it isn't installed already
  kewlfft.aur.aur:
    name: trizen
    use: makepkg
    state: present
  become: yes
  become_user: aur_builder

- name: Install package_name_1 and package_name_2 using yay
  kewlfft.aur.aur:
    use: yay
    name:
      - package_name_1
      - package_name_2

# Note: Dependency resolution will still include repository packages.
- name: Upgrade the system using yay, only act on AUR packages.
  kewlfft.aur.aur:
    upgrade: yes
    use: yay
    aur_only: yes

# Skip if it is already installed
- name: Install gnome-shell-extension-caffeine-git using pikaur and a local PKGBUILD.
  kewlfft.aur.aur:
    name: gnome-shell-extension-caffeine-git
    use: pikaur
    local_pkgbuild: "{{ role_path }}/files/gnome-shell-extension-caffeine-git"
    state: present
  become: yes
  become_user: aur_builder

Download Details:

Author: kewlfft
Source Code: https://github.com/kewlfft/ansible-aur

License: GPL-3.0 license

#ansible 

Ansible Aur: Ansible Module to Manage Packages From The AUR
Fabiola  Auma

Fabiola Auma

1667943420

Ansible: How to Use Ansible To Manage Windows Desktops

Using Ansible to manage Windows desktops

As part of this project the following modules have been implemented:

  • wakeonlan
  • wait_for_connection
  • win_defrag
  • win_product_facts
  • win_shortcut
  • win_wakeonlan

Configuring the system for Powershell Remoting

The following actions have to be taken to enable WinRM Powershell remoting.

Enable WinRM

Start Powershell (Run as Administrator) and run the following command:

WinRM qc

Answer yes on each question asked.

Allow Powershell script execution

Start Powershell (Run as Administrator) and run the following command:

Set-ExecutionPolicy

Enter the policy to be used: Bypass

Answer yes when asked to change the policy.

(Or use proper client certificates, which we plan to do)

Allow Powershell remoting for Ansible

Start Powershell (Run as Administrator) and run the following command:

ConfigureRemotingForAnsible.ps1 -CertValidityDays 3650 -EnableCredSSP

Enable Wake-on-LAN (WoL)

In order to automatically turn on systems when doing maintenance, we configured them to support Wake-on-LAN. Most systems are configured this way automatically; however, in some cases they need specific changes to work as we like.

BIOS settings

Boot the system with the F1 key held down to enter the BIOS.

Inside the (Lenovo) BIOS go to Startup > Automatic Boot Sequence and move the Network entries down using the minus key (-). Ensure that the first entry is the local boot disk.

Save the configuration using the F10 key and select Yes.

Windows settings

No specific configuration is needed to make Wake-on-LAN work on the Lenovo systems in Windows 10.

Using Ansible

More information is available from: http://docs.ansible.com/ansible/intro_windows.html

Capabilities

The following things we can manage using Ansible today:

Turn on systems (using Wake-On-Lan)

Collect information from the system (e.g. name, MAC address, IP address, hardware) into a CSV

Manage energy settings

Apply system updates

Installing and removing software (incl. everything from Ninite)

Enable/disable system services

Apply/merge registry settings

Change mouse pointer to Extra Large

Modifying the start menu

Setting up International(ization) and Keyboard Layout

Customize desktop icons

Defragment filesystem(s)

Install printer drivers and configure printers

Still needs to be implemented:

Missing automation

Customize start menu (disable tiles, change start menu settings)

Customize task bar (pinning apps in task bar in specific order)

Customize system tray

Customize desktop icons (position desktop icons)

Missing facts

Disk information (size, free-space) using win_disk_facts

Security

Limit headphone loudness for children

Run applications as another user with unknown password (should be possible now)

Instructions

Existing Ansible playbooks are available from: https://github.com/crombeen/ansible

Turning on desktops using WoL

$ ansible-playbook -k wakeonlan.yml

Collect information (creates inventory in CSV format)

$ ansible-playbook -k collect.yml

Manage software

$ ansible-playbook -k provision.yml
$ ansible-playbook -k software.yml
$ ansible-playbook -k cleanup.yml

Manage system configuration

$ ansible-playbook -k config.yml
$ ansible-playbook -k desktop.yml

Manage local users

$ ansible-playbook -k users.yml

Manage RDP and OneDrive

$ ansible-playbook -k rdesktop.yml
$ ansible-playbook -k onedrive.yml

Run everything

$ ansible-playbook -k site.yml

Problems

Here is a list of problems today:

Command-line desktop management was often an afterthought in Windows; the system was not designed with it in mind.

A lot of (desktop) manipulations require registry edits because out-of-the-box cmdlets do not exist.

It is hard to predict whether registry modifications will survive Windows 10 updates.

Powershell is a big improvement over cmd.exe; however, it feels more like Perl 4 (1993) than anything modern (we encountered various inconsistencies and design issues).

Since we have Windows 10 Home OEM licenses, Microsoft's solution (Active Directory and Group Policies) is not an option; besides, we prefer open tooling and manageable actions.

Microsoft disables WinRM on every Windows 10 upgrade (every 6 months)

Resources

More resources related to Powershell and Ansible-integration below:

Ansible

Ansible Windows support

Ansible Windows modules

Powershell DSC modules - DSC community auto-generated modules

Powershell

Powershell 101 from a Linux guy


Download Details:

Author: crombeen
Source Code: https://github.com/crombeen/ansible

#ansible 

Ansible: How to Use Ansible To Manage Windows Desktops
Fabiola  Auma

Fabiola Auma

1667935860

Community.zabbix: Zabbix Collection for ansible

Zabbix collection for Ansible

Introduction

This repo hosts the community.zabbix Ansible Collection.

The collection includes a variety of Ansible content to help automate the management of resources in Zabbix.

Included content

Click on the name of a plugin or module to view that content's documentation:

Installation

Requirements

Each component in this collection requires additional dependencies. Review components you are interested in by visiting links present in the Included content section.

This is especially important for some of the Zabbix roles that require you to install additional standalone roles from Ansible Galaxy.

For the majority of modules, however, you can get away with just:

pip install zabbix-api

Ansible 2.10 and higher

With the release of Ansible 2.10, modules have been moved into collections. With the exception of ansible.builtin modules, this means additional collections must be installed in order to use modules such as seboolean (now ansible.posix.seboolean). The following collections are now frequently required: ansible.posix and community.general. Installing the collections:

ansible-galaxy collection install ansible.posix
ansible-galaxy collection install community.general
ansible-galaxy collection install ansible.netcommon

Installing the Collection from Ansible Galaxy

Before using the Zabbix collection, you need to install it with the Ansible Galaxy CLI:

ansible-galaxy collection install community.zabbix

You can also include it in a requirements.yml file along with other required collections and install them via ansible-galaxy collection install -r requirements.yml, using the format:

---
collections:
  - name: community.zabbix
    version: 1.8.0
  - name: ansible.posix
    version: 1.3.0
  - name: community.general
    version: 3.7.0

Upgrading collection

Make sure to read UPGRADE document before installing newer version of this collection.

Usage

Please note that these are not working examples. For documentation on how to use content included in this collection, refer to the links in the Included content section.

To use a module or role from this collection, reference them with their Fully Qualified Collection Namespace (FQCN) like so:

---
- name: Using Zabbix collection to install Zabbix Agent
  hosts: localhost
  roles:
    - role: community.zabbix.zabbix_agent
      zabbix_agent_server: zabbix.example.com
      ...

- name: If Zabbix WebUI runs on non-default (zabbix) path, e.g. http://<FQDN>/zabbixeu
  set_fact:
    ansible_zabbix_url_path: 'zabbixeu'

- name: Using Zabbix collection to manage Zabbix Server's elements with username/password
  hosts: zabbix.example.com
  vars:
    ansible_network_os: community.zabbix.zabbix
    ansible_connection: httpapi
    ansible_httpapi_port: 80
    ansible_httpapi_use_ssl: false
    ansible_httpapi_validate_certs: false
    ansible_user: Admin
    ansible_httpapi_pass: zabbix
  tasks:
    - name: Ensure host is monitored by Zabbix
      community.zabbix.zabbix_host:
        ...

- name: Using Zabbix collection to manage Zabbix Server's elements with authentication key
  hosts: zabbix.example.net
  vars:
    ansible_network_os: community.zabbix.zabbix
    ansible_connection: httpapi
    ansible_httpapi_port: 80
    ansible_httpapi_use_ssl: false
    ansible_httpapi_validate_certs: false
    ansible_zabbix_auth_key: 8ec0d52432c15c91fcafe9888500cf9a607f44091ab554dbee860f6b44fac895
  tasks:
    - name: Ensure host is monitored by Zabbix
      community.zabbix.zabbix_host:
        ...

Or you can include the collection name community.zabbix in the playbook's collections element, like this:

---
- name: Using Zabbix collection
  hosts: localhost
  collections:
    - community.zabbix

  roles:
    - role: zabbix_agent
      zabbix_agent_server: zabbix.example.com
      ...

- name: Using Zabbix collection to manage Zabbix Server's elements with username/password
  hosts: zabbix.example.com
  vars:
    ansible_network_os: community.zabbix.zabbix
    ansible_connection: httpapi
    ansible_httpapi_port: 80
    ansible_httpapi_use_ssl: false
    ansible_httpapi_validate_certs: false
    ansible_user: Admin
    ansible_httpapi_pass: zabbix
  tasks:
    - name: Ensure host is monitored by Zabbix
      zabbix.zabbix_host:
        ...

- name: Using Zabbix collection to manage Zabbix Server's elements with authentication key
  hosts: zabbix.example.net
  vars:
    ansible_network_os: community.zabbix.zabbix
    ansible_connection: httpapi
    ansible_httpapi_port: 80
    ansible_httpapi_use_ssl: false
    ansible_httpapi_validate_certs: false
    ansible_zabbix_auth_key: 8ec0d52432c15c91fcafe9888500cf9a607f44091ab554dbee860f6b44fac895
  tasks:
    - name: Ensure host is monitored by Zabbix
      zabbix_host:
        ...

Supported Zabbix versions

Our main priority is to support Zabbix releases which have official full support from Zabbix LLC. Please check out the versions at the Zabbix Life Cycle & Release Policy page.

We aim to cover at least two LTS releases. For example, we currently support LTS 4.0 and 5.0, and with LTS 6.0 we will drop 4.0. We also do our best to include the latest point releases; currently this is 5.4, which should be superseded by 6.2.

Support for Zabbix LTS versions will be dropped with major releases of the collection, mostly affecting modules. Each role follows its own support matrix; you should always consult the documentation of roles in the docs/ directory.

If you find any inconsistencies with the version of Zabbix you are using, feel free to open a pull request or an issue and we will try to address it as soon as possible. In case of pull requests, please make sure that your changes will not break any existing functionality for currently supported Zabbix releases.

Collection life cycle and support

See RELEASE document for more information regarding life cycle and support for the collection.

Contributing

See CONTRIBUTING for more information about how to contribute to this repository.

Please also feel free to stop by our Gitter community.


Download Details:

Author: ansible-collections
Source Code: https://github.com/ansible-collections/community.zabbix

License: View license

#ansible 

Community.zabbix: Zabbix Collection for ansible

How to Build A Kubernetes Cluster using Kubeadm Via ansible

Kubeadm Ansible Playbook

Build a Kubernetes cluster using Ansible with kubeadm. The goal is to easily install a Kubernetes cluster on machines running:

  • Ubuntu 16.04
  • CentOS 7
  • Debian 9

System requirements:

  • Deployment environment must have Ansible 2.4.0+
  • Master and nodes must have passwordless SSH access

Usage

Add the system information gathered above into a file called hosts.ini. For example:

[master]
192.16.35.12

[node]
192.16.35.[10:11]

[kube-cluster:children]
master
node

If you're working with Ubuntu, add the property ansible_python_interpreter='python3' to each host:

[master]
192.16.35.12 ansible_python_interpreter='python3'

[node]
192.16.35.[10:11] ansible_python_interpreter='python3'

[kube-cluster:children]
master
node

Before continuing, edit group_vars/all.yml to your specified configuration.

For example, I choose to run flannel instead of calico, and thus:

# Network implementation('flannel', 'calico')
network: flannel

Note: Depending on your setup, you may need to modify cni_opts to an available network interface. By default, kubeadm-ansible uses eth1. Your default interface may be eth0.

After going through the setup, run the site.yaml playbook:

$ ansible-playbook site.yaml
...
==> master1: TASK [addon : Create Kubernetes dashboard deployment] **************************
==> master1: changed: [192.16.35.12 -> 192.16.35.12]
==> master1:
==> master1: PLAY RECAP *********************************************************************
==> master1: 192.16.35.10               : ok=18   changed=14   unreachable=0    failed=0
==> master1: 192.16.35.11               : ok=18   changed=14   unreachable=0    failed=0
==> master1: 192.16.35.12               : ok=34   changed=29   unreachable=0    failed=0

The playbook will download /etc/kubernetes/admin.conf file to $HOME/admin.conf.

If it doesn't work, download the admin.conf from the master node:

$ scp k8s@k8s-master:/etc/kubernetes/admin.conf .

Verify cluster is fully running using kubectl:


$ export KUBECONFIG=~/admin.conf
$ kubectl get node
NAME      STATUS    AGE       VERSION
master1   Ready     22m       v1.6.3
node1     Ready     20m       v1.6.3
node2     Ready     20m       v1.6.3

$ kubectl get po -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
etcd-master1                            1/1       Running   0          23m
...

Resetting the environment

Finally, reset all kubeadm installed state using reset-site.yaml playbook:

$ ansible-playbook reset-site.yaml

Additional features

These are features that you could want to install to make your life easier.

Enable/disable these features in group_vars/all.yml (all disabled by default):

# Additional feature to install
additional_features:
  helm: false
  metallb: false
  healthcheck: false

Helm

This will install helm in your cluster (https://helm.sh/) so you can deploy charts.

MetalLB

This will install MetalLB (https://metallb.universe.tf/), very useful if you deploy the cluster locally and you need a load balancer to access the services.

Healthcheck

This will install k8s-healthcheck (https://github.com/emrekenci/k8s-healthcheck), a small application to report cluster status.

Utils

Collection of scripts/utilities

Vagrantfile

This Vagrantfile is taken from https://github.com/ecomm-integration-ballerina/kubernetes-cluster and slightly modified to copy ssh keys inside the cluster (installing https://github.com/dotless-de/vagrant-vbguest is highly recommended)

Tips & Tricks

Specify user for Ansible

If you use vagrant or your remote user is root, add this to hosts.ini

[master]
192.16.35.12 ansible_user='root'

[node]
192.16.35.[10:11] ansible_user='root'

Access Kubernetes Dashboard

As of release 1.7, Dashboard no longer has full admin privileges granted by default, so you need to create a token to access the resources:

$ kubectl -n kube-system create sa dashboard
$ kubectl create clusterrolebinding dashboard --clusterrole cluster-admin --serviceaccount=kube-system:dashboard
$ kubectl -n kube-system get sa dashboard -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2017-11-27T17:06:41Z
  name: dashboard
  namespace: kube-system
  resourceVersion: "69076"
  selfLink: /api/v1/namespaces/kube-system/serviceaccounts/dashboard
  uid: 56b880bf-d395-11e7-9528-448a5ba4bd34
secrets:
- name: dashboard-token-vg52j

$ kubectl -n kube-system describe secrets dashboard-token-vg52j
...
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tdmc1MmoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNTZiODgwYmYtZDM5NS0xMWU3LTk1MjgtNDQ4YTViYTRiZDM0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZCJ9.bVRECfNS4NDmWAFWxGbAi1n9SfQ-TMNafPtF70pbp9Kun9RbC3BNR5NjTEuKjwt8nqZ6k3r09UKJ4dpo2lHtr2RTNAfEsoEGtoMlW8X9lg70ccPB0M1KJiz3c7-gpDUaQRIMNwz42db7Q1dN7HLieD6I4lFsHgk9NPUIVKqJ0p6PNTp99pBwvpvnKX72NIiIvgRwC2cnFr3R6WdUEsuVfuWGdF-jXyc6lS7_kOiXp2yh6Ym_YYIr3SsjYK7XUIPHrBqWjF-KXO_AL3J8J_UebtWSGomYvuXXbbAUefbOK4qopqQ6FzRXQs00KrKa8sfqrKMm_x71Kyqq6RbFECsHPA

$ kubectl proxy

Copy and paste the token from above to dashboard.
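If you prefer not to copy the token out of the describe output by hand, a small awk filter can isolate it. The sketch below runs against a saved sample with a placeholder token; in real use you would pipe the output of `kubectl -n kube-system describe secrets dashboard-token-...` instead:

```shell
# Saved sample of `kubectl describe secrets` output (token is a placeholder).
cat <<'EOF' > /tmp/secret_describe.txt
Name:         dashboard-token-vg52j
Namespace:    kube-system
token:      abc.def.ghi
EOF

# Print only the token value.
awk '/^token:/ {print $2}' /tmp/secret_describe.txt
# → abc.def.ghi
```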

Login the dashboard:


Download Details:

Author: kairen
Source Code: https://github.com/kairen/kubeadm-ansible

License: Apache-2.0 license

#kubernetes #ansible 

How to Build A Kubernetes Cluster using Kubeadm Via ansible
Fabiola  Auma

Fabiola Auma

1667928360

Ansible Role: AWX (open Source Ansible tower)

Ansible Role: AWX (open source Ansible Tower)

DEPRECATED: This role has been deprecated. AWX installation is a lot different than it was when I first created the role, and continues evolving. Please follow the official install guide and if you need automation around it, please consider the awx-operator.

Installs and configures AWX, the open source version of Ansible Tower.

Requirements

Before this role runs, assuming you want the role to completely set up AWX using its included installer, you need to make sure the following AWX dependencies are installed:

Dependency | Suggested Role
EPEL repo (RedHat OSes only) | geerlingguy.repo-epel
Git | geerlingguy.git
Ansible | geerlingguy.ansible
Docker | geerlingguy.docker
Python Pip | geerlingguy.pip
Node.js (10.x) | geerlingguy.nodejs

See this role's molecule/default/converge.yml playbook for an example that works across many different OSes.

Role Variables

Available variables are listed below, along with default values (see defaults/main.yml):

awx_repo: https://github.com/ansible/awx.git
awx_repo_dir: "~/awx"
awx_version: devel
awx_keep_updated: true

Variables to control what version of AWX is checked out and installed.

awx_run_install_playbook: true

By default, this role will run the installation playbook included with AWX (which builds a set of containers and runs them). You can disable the playbook run by setting this variable to false.

Dependencies

None.

Example Playbook

- hosts: awx-centos
  become: true

  vars:
    nodejs_version: "10.x"
    docker_install_compose: false
    pip_install_packages:
      - name: docker
      - name: docker-compose

  roles:
    - geerlingguy.repo-epel
    - geerlingguy.git
    - geerlingguy.pip
    - geerlingguy.ansible
    - geerlingguy.docker
    - geerlingguy.nodejs
    - geerlingguy.awx

After AWX is installed, you can log in with the default username admin and password password.

Author Information

This role was created in 2017 by Jeff Geerling, author of Ansible for DevOps.


Download Details:

Author: geerlingguy
Source Code: https://github.com/geerlingguy/ansible-role-awx

License: MIT license

#ansible 

Ansible Role: AWX (open Source Ansible tower)