How To Use Ansible to Install and Set Up Docker on Ubuntu 18.04
With the popularization of containerized applications and microservices, server automation now plays an essential role in systems administration. It is also a way to establish standard procedures for new servers and reduce human error.
This guide explains how to use Ansible to automate the steps contained in our guide on How To Install and Use Docker on Ubuntu 18.04. Docker is an application that simplifies the process of managing containers, resource-isolated processes that behave in a similar way to virtual machines, but are more portable, more resource-friendly, and depend more heavily on the host operating system.
While you can complete this setup manually, using a configuration management tool like Ansible to automate the process will save you time and establish standard procedures that can be repeated through tens to hundreds of nodes. Ansible offers a simple architecture that doesn’t require special software to be installed on nodes, and it provides a robust set of features and built-in modules which facilitate writing automation scripts.
In order to execute the automated setup provided by the playbook discussed in this guide, you’ll need an Ansible control node with Ansible installed and configured, and one or more Ubuntu 18.04 servers set up as Ansible hosts.
To make sure Ansible is able to execute commands on your nodes, run the following command from your Ansible Control Node:
ansible -m ping all
This command will use Ansible’s built-in [ping](https://docs.ansible.com/ansible/latest/modules/ping_module.html) module to run a connectivity test on all nodes from your default inventory file, connecting as the current system user. The ping module will test whether your hosts are reachable, whether you have valid SSH credentials, and whether they are able to run Ansible modules using Python.
If you installed and configured Ansible correctly, you will get output similar to this:
Output
server1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
server2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
server3 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Once you get a pong reply back from a host, it means you’re ready to run Ansible commands and playbooks on that server.
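As a quick additional check (a hypothetical example, not part of the original guide), you can run any ad-hoc module against your inventory; here, Ansible’s default command module runs uptime on every host:
ansible all -a "uptime"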
This Ansible playbook provides an alternative to manually running through the procedure outlined in our guide on How To Install and Use Docker on Ubuntu 18.04.
Running this playbook will perform the following actions on your Ansible hosts:

- Install aptitude, which is preferred by Ansible as an alternative to the apt package manager.
- Install the required system packages.
- Install the Docker GPG APT key and add the official Docker repository to the apt sources.
- Install Docker.
- Install the Python Docker module via pip.
- Pull the default image specified by default_container_image from Docker Hub.
- Create the number of containers defined by the create_containers field, each using the image defined by default_container_image, and execute the command defined in default_container_command in each new container.

Once the playbook has finished running, you will have a number of containers created based on the options you defined within your configuration variables.
To get started, we’ll download the contents of the playbook to your Ansible Control Node.
Use curl to download this playbook from the command line:
curl -L https://raw.githubusercontent.com/do-community/ansible-playbooks/master/docker/ubuntu1804.yml -o docker_ubuntu.yml
This will download the contents of the playbook to a file named docker_ubuntu.yml in your current working directory. You can examine the contents of the playbook by opening the file with your command-line editor of choice:
nano docker_ubuntu.yml
Once you’ve opened the playbook file, you should notice a section named vars with variables that require your attention:
docker_ubuntu.yml
. . .
vars:
  create_containers: 4
  default_container_name: docker
  default_container_image: ubuntu
  default_container_command: sleep 1d
. . .
Here’s what these variables mean:

- create_containers: The number of containers to create.
- default_container_name: Default container name.
- default_container_image: Default Docker image to be used when creating containers.
- default_container_command: Default command to run on new containers.

Once you’re done updating the variables inside docker_ubuntu.yml, save and close the file. If you used nano, do so by pressing CTRL + X, Y, then ENTER.
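For example, here is a minimal sketch of a customized vars section; the nginx image and container names are illustrative assumptions, not the playbook’s defaults:

vars:
  create_containers: 2
  default_container_name: webserver
  default_container_image: nginx
  default_container_command: nginx -g 'daemon off;'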
You’re now ready to run this playbook on one or more servers. Most playbooks are configured to be executed on all servers from your inventory by default. We can use the -l flag to make sure that only a subset of servers, or a single server, is affected by the playbook. To execute the playbook only on server1, you can use the following command:
ansible-playbook docker_ubuntu.yml -l server1
You will get output similar to this:
Output
...
TASK [Add Docker GPG apt Key] ********************************************************************************************************************
changed: [server1]
TASK [Add Docker Repository] *********************************************************************************************************************
changed: [server1]
TASK [Update apt and install docker-ce] **********************************************************************************************************
changed: [server1]
TASK [Install Docker Module for Python] **********************************************************************************************************
changed: [server1]
TASK [Pull default Docker image] *****************************************************************************************************************
changed: [server1]
TASK [Create default containers] *****************************************************************************************************************
changed: [server1] => (item=1)
changed: [server1] => (item=2)
changed: [server1] => (item=3)
changed: [server1] => (item=4)
PLAY RECAP ***************************************************************************************************************************************
server1 : ok=9 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
When the playbook is finished running, log in via SSH to the server provisioned by Ansible and run docker ps -a to check if the containers were successfully created:
sudo docker ps -a
You should see output similar to this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a3fe9bfb89cf ubuntu "sleep 1d" 5 minutes ago Created docker4
8799c16cde1e ubuntu "sleep 1d" 5 minutes ago Created docker3
ad0c2123b183 ubuntu "sleep 1d" 5 minutes ago Created docker2
b9350916ffd8 ubuntu "sleep 1d" 5 minutes ago Created docker1
This means the containers defined in the playbook were created successfully. Since this was the last task in the playbook, it also confirms that the playbook was fully executed on this server.
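Note that the containers show a status of Created rather than running, since the playbook uses state: present, which creates containers without starting them. As a hypothetical follow-up (not part of the original guide), you can start one and run a command inside it:

sudo docker start docker1
sudo docker exec docker1 uname -a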
You can find the Docker playbook featured in this tutorial in the ansible-playbooks repository. To copy or download the script contents directly, click the Raw button towards the top of the script.
The full contents are also included here for your convenience:
docker_ubuntu.yml
---
- hosts: all
  become: true
  vars:
    create_containers: 4
    default_container_name: docker
    default_container_image: ubuntu
    default_container_command: sleep 1d

  tasks:
    - name: Install aptitude using apt
      apt: name=aptitude state=latest update_cache=yes force_apt_get=yes

    - name: Install required system packages
      apt: name={{ item }} state=latest update_cache=yes
      loop: [ 'apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common', 'python3-pip', 'virtualenv', 'python3-setuptools']

    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker Repository
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu bionic stable
        state: present

    - name: Update apt and install docker-ce
      apt: update_cache=yes name=docker-ce state=latest

    - name: Install Docker Module for Python
      pip:
        name: docker

    # Pull the image specified by the variable default_container_image from Docker Hub
    - name: Pull default Docker image
      docker_image:
        name: "{{ default_container_image }}"
        source: pull

    # Create the number of containers defined by the variable create_containers, using default values
    - name: Create default containers
      docker_container:
        name: "{{ default_container_name }}{{ item }}"
        image: "{{ default_container_image }}"
        command: "{{ default_container_command }}"
        state: present
      with_sequence: count={{ create_containers }}
Feel free to modify this playbook to best suit your individual needs within your own workflow. For example, you could use the [docker_image](https://docs.ansible.com/ansible/latest/modules/docker_image_module.html) module to push images to Docker Hub or the [docker_container](https://docs.ansible.com/ansible/latest/modules/docker_container_module.html) module to set up container networks.
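As an illustrative sketch of the latter (the network and container names here are assumptions, not part of the original playbook), attaching a container to a user-defined network could look like:

- name: Create an application network
  docker_network:
    name: my_app_net

- name: Create a container attached to that network
  docker_container:
    name: web
    image: "{{ default_container_image }}"
    command: "{{ default_container_command }}"
    state: present
    networks:
      - name: my_app_net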
Automating your infrastructure setup not only saves you time, but also helps ensure that your servers follow a standard configuration that can be customized to your needs. With the distributed nature of modern applications and the need for consistency between different staging environments, automation like this has become a central component in many teams’ development processes.
In this guide, we demonstrated how to use Ansible to automate the process of installing and setting up Docker on a remote server. Because each individual typically has different needs when working with containers, we encourage you to check out the official Ansible documentation for more information and use cases of the docker_container Ansible module.
serverless-appsync-simulator
This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.
Install
npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator
Usage
This plugin relies on your serverless yml file and on the serverless-offline plugin.
plugins:
  - serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
  - serverless-appsync-simulator
  - serverless-offline
Note: Order is important. serverless-appsync-simulator must go before serverless-offline.
To start the simulator, run the following command:
sls offline start
You should see in the logs something like:
...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...
Configuration
Put options under custom.appsync-simulator in your serverless.yml file.
| option | default | description |
| ------ | ------- | ----------- |
| apiKey | 0123456789 | When using API_KEY as authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda functions handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function |
| functions | {} | A mapping of external functions for providing invoke urls for external functions |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local. Otherwise, the port is taken from the dynamodb-local conf |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | You can add every configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values (mysql \| postgres) |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | - *.graphql, - *.vtl | Array of glob patterns to watch for hot-reloading. |
Example:
custom:
  appsync-simulator:
    location: '.webpack/service' # use webpack build directory
    dynamoDb:
      endpoint: 'http://my-custom-dynamo:8000'
Hot-reloading
By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? - this is a Serverless Framework limitation). You will need to restart the simulator each time you change your yml files.
Hot-reloading relies on watchman. Make sure it is installed on your system.
You can change the files being watched with the watch option, which is then passed to watchman as the match expression.
e.g.
custom:
  appsync-simulator:
    watch:
      - ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
      - "*.graphql" # => string like this is equivalent to `["match", "*.graphql"]`
Or you can opt out by providing an empty array or setting the option to false.
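For instance, a minimal sketch that disables watching entirely:

custom:
  appsync-simulator:
    watch: false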
Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that tool instead.
Resource CloudFormation functions resolution
This plugin supports some resources resolution from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other Cfn functions such as Fn::Join, Fn::Sub, etc.
Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported cfn functions, refer to its documentation.
You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.
provider:
  environment:
    BUCKET_NAME:
      Ref: MyBucket # resolves to `my-bucket-name`

resources:
  Resources:
    MyDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: myTable
        ...
    MyBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket-name
        ...

# in your appsync config
dataSources:
  - type: AMAZON_DYNAMODB
    name: dynamosource
    config:
      tableName:
        Ref: MyDbTable # resolves to `myTable`
Sometimes, some references cannot be resolved, as they come from an Output from CloudFormation; or you might want to use mocked values in your local environment.
In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options.

- refMap takes a mapping of resource name to value pairs
- getAttMap takes a mapping of resource name to attribute/values pairs
- importValueMap takes a mapping of import name to values pairs

Example:
custom:
  appsync-simulator:
    refMap:
      # Override `MyDbTable` resolution from the previous example.
      MyDbTable: 'mock-myTable'
    getAttMap:
      # define ElasticSearchInstance DomainEndpoint
      ElasticSearchInstance:
        DomainEndpoint: 'localhost:9200'
    importValueMap:
      other-service-api-url: 'https://other.api.url.com/graphql'

# in your appsync config
dataSources:
  - type: AMAZON_ELASTICSEARCH
    name: elasticsource
    config:
      # endpoint resolves as 'https://localhost:9200'
      endpoint:
        Fn::Join:
          - ''
          - - https://
            - Fn::GetAtt:
                - ElasticSearchInstance
                - DomainEndpoint
In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name. This notation can be used with all mocks - refMap, getAttMap and importValueMap.
provider:
  environment:
    FINISH_ACTIVITY_FUNCTION_ARN:
      Fn::ImportValue: other-service-api-${self:provider.stage}-url

custom:
  serverless-appsync-simulator:
    importValueMap:
      - key: other-service-api-${self:provider.stage}-url
        value: 'https://other.api.url.com/graphql'
This plugin only tries to resolve the following parts of the yml tree:

- provider.environment
- functions[*].environment
- custom.appSync
If you need other parts resolved, feel free to open an issue and explain your use case.
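For instance, a per-function environment variable is resolved the same way as the provider-level ones shown earlier (the function name and handler below are illustrative):

functions:
  graphql:
    handler: handler.graphql
    environment:
      TABLE_NAME:
        Ref: MyDbTable # resolved just like provider.environment entries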
For now, the supported resources to be automatically resolved by Ref: are:
Feel free to open a PR or an issue to extend them as well.
External functions
When a function is not defined within the current serverless file, you can still call it by providing an invoke url which should point to a REST method. Make sure you specify "get" or "post" for the method. The default is "get", but you probably want "post".
custom:
  appsync-simulator:
    functions:
      addUser:
        url: http://localhost:3016/2015-03-31/functions/addUser/invocations
        method: post
      addPost:
        url: https://jsonplaceholder.typicode.com/posts
        method: post
Supported Resolver types
This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.
From AWS Amplify:
Implemented by this plugin
#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#set( $discard = $cols.add("$toSnake") )
#if( $util.isBoolean($ctx.args.input[$entry]) )
#if( $ctx.args.input[$entry] )
#set( $discard = $vals.add("1") )
#else
#set( $discard = $vals.add("0") )
#end
#else
#set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
#end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
#set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
#set( $colStr = "($colStr)" )
#end
{
"version": "2018-05-29",
"statements": ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}
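To make the template above concrete, here is an illustrative example (the input and table are made up, not from the original README): a mutation input of { firstName: "John", isActive: true } is converted to snake_case columns and boolean literals, rendering roughly as:

{
"version": "2018-05-29",
"statements": ["INSERT INTO <name-of-table> (first_name, is_active) VALUES ('John', 1)", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}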
#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $cur = $ctx.args.input[$entry] )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#if( $util.isBoolean($cur) )
#if( $cur )
#set ( $cur = "1" )
#else
#set ( $cur = "0" )
#end
#end
#if ( $util.isNullOrEmpty($update) )
#set($update = "$toSnake$equals'$cur'" )
#else
#set($update = "$update,$toSnake$equals'$cur'" )
#end
#end
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}
#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
#set ($index = $index + 1)
#if ( $column["typeName"] == "timestamptz" )
#set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
#set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
#set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
$util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
#end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
#set ( $s = $mapKey.split("_") )
#set ( $camelCase="" )
#set ( $isFirst=true )
#foreach($entry in $s)
#if ( $isFirst )
#set ( $first = $entry.substring(0,1) )
#else
#set ( $first = $entry.substring(0,1).toUpperCase() )
#end
#set ( $isFirst=false )
#set ( $stringLength = $entry.length() )
#set ( $remaining = $entry.substring(1, $stringLength) )
#set ( $camelCase = "$camelCase$first$remaining" )
#end
$util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)
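To make the snake_case-to-camelCase conversion in this response template concrete, here is an illustrative before/after (values are made up):

## row produced by the SQL statement (snake_case keys):
{ "first_name": "John", "is_active": true }
## response returned by this template (camelCase keys):
{ "firstName": "John", "isActive": true }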
Variable map support is limited and does not differentiate between number and string data types; inject them directly if needed. null, true, and false values will be escaped properly.
{
"version": "2018-05-29",
"statements": [
"UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
"SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
],
variableMap: {
":ID": $ctx.args.id,
## ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
}
}
Requires
- watchman (used for hot-reloading; see above)
Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator
License: MIT License
How to Install Microsoft Teams on Ubuntu 20.04
Microsoft Teams is a communication platform used for chat, calling, meetings, and collaboration. Generally, it is used by companies and individuals working on projects. Microsoft Teams is now available for the macOS, Windows, and Linux operating systems.
In this tutorial, we will show you how to install Microsoft Teams on an Ubuntu 20.04 machine. The Microsoft Teams package is not available in the default Ubuntu repository, so we will show you two methods to install Teams: downloading the Debian package from the official website, or adding the Microsoft repository.
01- First, navigate to the Teams app downloads page and grab the Debian binary installer. You can simply obtain the URL and pull the binary using wget:
$ VERSION=1.3.00.5153
$ wget https://packages.microsoft.com/repos/ms-teams/pool/main/t/teams/teams_${VERSION}_amd64.deb
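02- Once the download completes, install the package with APT. This is the implied next step of the Debian-package method (a sketch, assuming the same file name as downloaded above):

$ sudo apt install ./teams_${VERSION}_amd64.deb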
VirtualBox is a widely known, open-source virtualization tool offered by Oracle. It’s a cross-platform virtualization application that can be installed on most systems, allowing users to run a number of guest operating systems on a single machine.
This means that with VirtualBox on your system, you can install many operating systems and run them simultaneously, which is helpful to develop, demonstrate, deploy, and test applications on a single machine.
How to install VirtualBox on Ubuntu 18.04, 20.04 and 21.04?
Since the virtualization system offered by Oracle is available cross-platform, Ubuntu users can install the app and take advantage of its benefits. There are several approaches to installing VirtualBox on Ubuntu variants.
Installing using the default repository
sudo apt install virtualbox
Install using the official DEB package
Visit the VirtualBox download page and download the DEB package to your machine, then run the following command to install the package using the APT command.
sudo apt install ./virtualbox-6.1_6.1.22-144080_Ubuntu_bionic_amd64.deb
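To confirm the installation succeeded (an illustrative check, not part of the original steps), query the installed version from the command line:

VBoxManage --version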
For more approaches and information, read here.
How to Install TeamViewer on Ubuntu 20.04
TeamViewer is a cross-platform, proprietary application that allows a user to remotely connect to a workstation, transfer files, and have online meetings. In this tutorial, we will walk you through how to install TeamViewer on Ubuntu 20.04 Desktop through the command line.
Before continuing with this tutorial, make sure you are logged in as a user with sudo privileges.
01- To install TeamViewer, first download the TeamViewer .deb package. Open the Terminal and run the following wget command:
$ wget https://download.teamviewer.com/download/linux/teamviewer_amd64.deb
02- Once you have downloaded the TeamViewer Debian package, execute the following command to install TeamViewer:
$ sudo apt install ./teamviewer_amd64.deb
The system will prompt you with a [Y/n] option. Type Y and hit the ENTER key to continue the installation.
03- Once the installation is done, you can launch TeamViewer either by typing the command teamviewer in your terminal or by clicking on the TeamViewer icon (Activities -> TeamViewer).
04- A pop-up License Agreement will be displayed. To proceed, click on the Accept License Agreement button.
How to Install pgAdmin4 on Ubuntu 20.04
pgAdmin is the leading open-source graphical management, development, and administration tool for PostgreSQL. pgAdmin4 is a rewrite of the popular pgAdmin3 management tool for the PostgreSQL database.
In this tutorial, we are going to show you how to install pgAdmin4 in server mode as a web application using Apache2 and the WSGI module on Ubuntu 20.04 LTS.
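The full walkthrough is not reproduced here, but as a rough sketch (assuming the official pgAdmin4 APT repository and its setup-web.sh helper, which configures Apache2 and WSGI for you):

curl -fsS https://www.pgadmin.org/static/packages_pgadmin_org.pub | sudo gpg --dearmor -o /usr/share/keyrings/packages-pgadmin-org.gpg
sudo sh -c 'echo "deb [signed-by=/usr/share/keyrings/packages-pgadmin-org.gpg] https://ftp.postgresql.org/pub/pgadmin/pgadmin4/apt/$(lsb_release -cs) pgadmin4 main" > /etc/apt/sources.list.d/pgadmin4.list'
sudo apt update && sudo apt install pgadmin4-web
sudo /usr/pgadmin4/bin/setup-web.sh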