Install Odoo 15 on Ubuntu 20.04 with Nginx – AWS. In this tutorial you are going to learn how to install and set up Odoo with an Nginx reverse proxy and connect it to PostgreSQL on Amazon RDS.
Odoo is self-hosted business management software with a top-notch user experience. The applications within Odoo integrate seamlessly with each other, allowing you to automate your business processes with ease.
SSH to your EC2 Instance and perform the steps listed below.
Let’s start by updating the local package index to the latest available versions with the following commands.
sudo apt update
sudo apt upgrade
Once the update is done you can start installing the required packages.
wkhtmltopdf is a package used to render HTML to PDF and other image formats. If you use Odoo to print PDF reports, you should install the wkhtmltopdf tool. The version recommended for Odoo is 0.12.6, which is not included in the default Ubuntu 20.04 repository, so we download and install the package directly.
wget https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox_0.12.6-1.focal_amd64.deb
sudo apt install ./wkhtmltox_0.12.6-1.focal_amd64.deb
The easiest way to install Odoo 15 is to add the nightly build repository to your sources and install from there.
wget -O - https://nightly.odoo.com/odoo.key | sudo apt-key add -
echo "deb http://nightly.odoo.com/15.0/nightly/deb/ ./" | sudo tee -a /etc/apt/sources.list
Update the apt cache and proceed to install Odoo 15.
sudo apt update
sudo apt install odoo
Once the installation is complete, Odoo starts automatically as a service.
To make sure Odoo is running you can check the status using the following command.
sudo service odoo status
Output
● odoo.service - Odoo Open Source ERP and CRM
Loaded: loaded (/lib/systemd/system/odoo.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-01-03 06:31:14 UTC; 31s ago
Main PID: 9375 (odoo)
Tasks: 6 (limit: 4395)
CGroup: /system.slice/odoo.service
└─9375 /usr/bin/python3 /usr/bin/odoo --config /etc/odoo/odoo.conf --logfile /var/log/odoo/odoo-server.log
Jan 03 06:31:14 odoo systemd[1]: Started Odoo Open Source ERP and CRM.
This indicates Odoo is started and running successfully.
Now you can enable the Odoo service to start on system boot.
sudo systemctl enable --now odoo
Now you can configure Odoo to use a remote database such as Cloud SQL or Amazon RDS.
sudo nano /etc/odoo/odoo.conf
Replace the highlighted values with those of your remote database (your RDS endpoint and credentials).
[options]
; This is the password that allows database operations:
; admin_passwd = admin
db_host = RDS_ENDPOINT
db_port = False
db_user = RDS_user
db_password = RDS_user_password
;addons_path = /usr/lib/python3/dist-packages/odoo/addons
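Before restarting, it can help to confirm the new values actually landed in the config. A minimal sketch; the RDS endpoint below is a made-up placeholder written to a temp file, and on the server you would point the awk line at /etc/odoo/odoo.conf instead.

```shell
# Stand-in config file with placeholder values (replace with /etc/odoo/odoo.conf).
conf=$(mktemp)
cat > "$conf" <<'EOF'
[options]
db_host = mydb.abc123.us-east-1.rds.amazonaws.com
db_port = False
db_user = odoo
EOF

# Split each line on "=" with optional surrounding spaces and print the value.
db_host=$(awk -F' *= *' '$1 == "db_host" {print $2}' "$conf")
echo "$db_host"
```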
Restart Odoo.
sudo service odoo restart
Install Nginx using the following command.
sudo apt install nginx
Remove default Nginx configurations.
sudo rm /etc/nginx/sites-enabled/default
sudo rm /etc/nginx/sites-available/default
Create a new Nginx configuration for Odoo in the sites-available
directory.
sudo nano /etc/nginx/sites-available/odoo.conf
Copy and paste the following configuration, making sure you change the server_name
directive to match your domain name.
upstream odooserver {
server 127.0.0.1:8069;
}
server {
listen 80;
server_name domainname.com;
access_log /var/log/nginx/odoo_access.log;
error_log /var/log/nginx/odoo_error.log;
proxy_read_timeout 720s;
proxy_connect_timeout 720s;
proxy_send_timeout 720s;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
location / {
proxy_redirect off;
proxy_pass http://odooserver;
}
location ~* /web/static/ {
proxy_cache_valid 200 90m;
proxy_buffering on;
expires 864000;
proxy_pass http://odooserver;
}
gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
gzip on;
}
Hit Ctrl+X
followed by Y
and Enter
to save the file and exit.
To enable this newly created website configuration, symlink the file that you just created into the sites-enabled
directory.
sudo ln -s /etc/nginx/sites-available/odoo.conf /etc/nginx/sites-enabled/odoo.conf
Check your configuration and restart Nginx for the changes to take effect.
sudo nginx -t
sudo service nginx restart
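Once Nginx is restarted, a quick way to rule out proxy problems is to confirm the upstream on 127.0.0.1:8069 answers by itself. A sketch assuming curl and python3 are available; here a throwaway Python HTTP server stands in for Odoo on port 8069, while on a real host you would run only the curl line against the live service.

```shell
# Stand-in for the Odoo upstream (omit on a real server, where Odoo listens on 8069).
python3 -m http.server 8069 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1

# Query the upstream directly, bypassing Nginx, and print the HTTP status code.
code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8069/)
kill "$srv" 2>/dev/null
echo "$code"
```

If this returns a non-200 code, the problem is in Odoo itself rather than in the Nginx configuration.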
HTTPS
HTTPS is a protocol for secure communication between a server (instance) and a client (web browser). Thanks to Let’s Encrypt, which provides free SSL certificates, HTTPS is now widely adopted and builds trust with your audience.
sudo apt install python3-certbot-nginx
Now that Certbot by Let’s Encrypt is installed on Ubuntu 20.04, run this command to obtain your certificates.
sudo certbot --nginx certonly
Enter your email and agree to the terms and conditions; you will then see the list of domains for which you can generate an SSL certificate.
To select all domains, simply hit Enter.
The Certbot client will automatically generate the new certificate for your domain. Now we need to update the Nginx config.
Open your site’s Nginx configuration file and replace everything with the following, substituting the file paths with the ones you received when obtaining the SSL certificate. The ssl_certificate directive should point to your fullchain.pem file, and the ssl_certificate_key directive to your privkey.pem file.
upstream odooserver {
server 127.0.0.1:8069;
}
server {
listen [::]:80;
listen 80;
server_name domainname.com www.domainname.com;
return 301 https://domainname.com$request_uri;
}
server {
listen [::]:443 ssl;
listen 443 ssl;
server_name www.domainname.com;
ssl_certificate /etc/letsencrypt/live/domainname.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domainname.com/privkey.pem;
return 301 https://domainname.com$request_uri;
}
server {
listen [::]:443 ssl http2;
listen 443 ssl http2;
server_name domainname.com;
ssl_certificate /etc/letsencrypt/live/domainname.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domainname.com/privkey.pem;
access_log /var/log/nginx/odoo_access.log;
error_log /var/log/nginx/odoo_error.log;
proxy_read_timeout 720s;
proxy_connect_timeout 720s;
proxy_send_timeout 720s;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
location / {
proxy_redirect off;
proxy_pass http://odooserver;
}
location ~* /web/static/ {
proxy_cache_valid 200 90m;
proxy_buffering on;
expires 864000;
proxy_pass http://odooserver;
}
gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
gzip on;
}
Hit CTRL+X
followed by Y
to save the changes.
Check your configuration and restart Nginx for the changes to take effect.
sudo nginx -t
sudo service nginx restart
Certificates provided by Let’s Encrypt are valid for 90 days only, so they need to be renewed regularly. Set up a cronjob that checks for certificates due to expire within the next 30 days and renews them automatically.
sudo crontab -e
Add this line at the end of the file
0 0,12 * * * certbot renew >/dev/null 2>&1
Hit CTRL+X
followed by Y
to save the changes.
This cronjob attempts a renewal check twice daily; certbot renew only replaces certificates that are close to expiry.
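If you want to see what the renewal check sees, openssl can report whether a certificate expires within a given window. A sketch using a throwaway self-signed certificate; on the server you would instead point -in at /etc/letsencrypt/live/your-domain/fullchain.pem.

```shell
tmp=$(mktemp -d)

# Throwaway 90-day self-signed cert standing in for the Let's Encrypt one.
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=example.test" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null

# -checkend N exits 0 if the cert is still valid N seconds from now (here, 30 days).
if openssl x509 -checkend $((30*24*3600)) -noout -in "$tmp/cert.pem" >/dev/null; then
  echo "valid for at least 30 more days"
else
  echo "expiring within 30 days"
fi
```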
Now visit your domain name in a web browser. You will see the Odoo database-creation page, where you can create the database and the admin user for your Odoo instance.
Fill in the appropriate values and click Create database. Odoo is then ready to use.
Now you have learned how to install Odoo 15 on your Ubuntu server with Nginx reverse proxy on AWS and secure it with Let’s Encrypt.
Thanks for your time. If you face any problems or have any feedback, please leave a comment below.
Original article source at: https://www.cloudbooklet.com/
Note: Please consider using the official NGINX Ansible role from NGINX, Inc.
Installs Nginx on RedHat/CentOS, Debian/Ubuntu, Archlinux, FreeBSD or OpenBSD servers.
This role installs and configures the latest version of Nginx from the Nginx yum repository (on RedHat-based systems), apt (on Debian-based systems), pacman (Archlinux), pkgng (on FreeBSD systems) or pkg_add (on OpenBSD systems). You will likely need to do extra setup work after this role has installed Nginx, like adding your own [virtualhost].conf file inside /etc/nginx/conf.d/
, describing the location and options to use for your particular website.
None.
Available variables are listed below, along with default values (see defaults/main.yml
):
nginx_listen_ipv6: true
Whether or not to listen on IPv6 (applied to all vhosts managed by this role).
nginx_vhosts: []
A list of vhost definitions (server blocks) for Nginx virtual hosts. Each entry will create a separate config file named by server_name
. If left empty, you will need to supply your own virtual host configuration. See the commented example in defaults/main.yml
for available server options. If you have a large number of customizations required for your server definition(s), you're likely better off managing the vhost configuration file yourself, leaving this variable set to []
.
nginx_vhosts:
- listen: "443 ssl http2"
server_name: "example.com"
server_name_redirect: "www.example.com"
root: "/var/www/example.com"
index: "index.php index.html index.htm"
error_page: ""
access_log: ""
error_log: ""
state: "present"
template: "{{ nginx_vhost_template }}"
filename: "example.com.conf"
extra_parameters: |
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
An example of a fully-populated nginx_vhosts entry, using a |
to declare a block of syntax for the extra_parameters
.
Please take note of the indentation in the above block. The first line should be a normal 2-space indent. All other lines should be indented normally relative to that line. In the generated file, the entire block will be 4-space indented. This style will ensure the config file is indented correctly.
- listen: "80"
server_name: "example.com www.example.com"
return: "301 https://example.com$request_uri"
filename: "example.com.80.conf"
An example of a secondary vhost which will redirect to the one shown above.
Note: The filename
defaults to the first domain in server_name
. If you have two vhosts with the same domain (e.g. a redirect), you need to set the filename
manually so the second one doesn't override the first.
nginx_remove_default_vhost: false
Whether to remove the 'default' virtualhost configuration supplied by Nginx. Useful if you want the base /
URL to be directed at one of your own virtual hosts configured in a separate .conf file.
nginx_upstreams: []
If you are configuring Nginx as a load balancer, you can define one or more upstream sets using this variable. In addition to defining at least one upstream, you would need to configure one of your server blocks to proxy requests through the defined upstream (e.g. proxy_pass http://myapp1;
). See the commented example in defaults/main.yml
for more information.
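For reference, an upstream entry might look like the following sketch. The field names follow the role's documented upstream format, but the server names and keepalive value are placeholders; check the commented example in defaults/main.yml for the authoritative structure.

```yaml
nginx_upstreams:
  - name: myapp1
    strategy: "ip_hash"  # optional load-balancing strategy
    keepalive: 16        # optional idle keepalive connections per worker
    servers:
      - "srv1.example.com"
      - "srv2.example.com:8080"
```

A server block would then use `proxy_pass http://myapp1;` to route requests through this upstream.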
nginx_user: "nginx"
The user under which Nginx will run. Defaults to nginx
for RedHat, www-data
for Debian and www
on FreeBSD and OpenBSD.
nginx_worker_processes: "{{ ansible_processor_vcpus|default(ansible_processor_count) }}"
nginx_worker_connections: "1024"
nginx_multi_accept: "off"
nginx_worker_processes
should be set to the number of cores present on your machine (if the default is incorrect, find this number with grep processor /proc/cpuinfo | wc -l
). nginx_worker_connections
is the number of connections per process. Set this higher to handle more simultaneous connections (and remember that a connection will be used for as long as the keepalive timeout duration for every client!). You can set nginx_multi_accept
to on
if you want Nginx to accept all connections immediately.
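When overriding nginx_worker_processes by hand, the core count mentioned above can be found like so (nproc is assumed to be available, as it is on any modern Linux):

```shell
# Number of available processing units; on most Linux systems this matches
# the `grep processor /proc/cpuinfo | wc -l` pipeline mentioned above.
cores=$(nproc)
echo "$cores"
```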
nginx_error_log: "/var/log/nginx/error.log warn"
nginx_access_log: "/var/log/nginx/access.log main buffer=16k flush=2m"
Configuration of the default error and access logs. Set to off
to disable a log entirely.
nginx_sendfile: "on"
nginx_tcp_nopush: "on"
nginx_tcp_nodelay: "on"
TCP connection options. See this blog post for more information on these directives.
nginx_keepalive_timeout: "65"
nginx_keepalive_requests: "100"
Nginx keepalive settings. Timeout should be set higher (10s+) if you have more polling-style traffic (AJAX-powered sites especially), or lower (<10s) if you have a site where most users visit a few pages and don't send any further requests.
nginx_server_tokens: "on"
Nginx server_tokens settings. Controls whether Nginx responds with its version in HTTP headers. Set to "off"
to disable.
nginx_client_max_body_size: "64m"
This value determines the largest file upload possible, as uploads are passed through Nginx before hitting a backend like php-fpm
. If you get an error like client intended to send too large body
, it means this value is set too low.
nginx_server_names_hash_bucket_size: "64"
If you have many server names, or have very long server names, you might get an Nginx error on startup requiring this value to be increased.
nginx_proxy_cache_path: ""
Set as the proxy_cache_path
directive in the nginx.conf
file. By default, this will not be configured (if left as an empty string), but if you wish to use Nginx as a reverse proxy, you can set this to a valid value (e.g. "/var/cache/nginx keys_zone=cache:32m"
) to use Nginx's cache (further proxy configuration can be done in individual server configurations).
nginx_extra_http_options: ""
Extra lines to be inserted in the top-level http
block in nginx.conf
. The value should be defined literally (as you would insert it directly in the nginx.conf
, adhering to the Nginx configuration syntax - such as ;
for line termination, etc.), for example:
nginx_extra_http_options: |
proxy_buffering off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
See the template in templates/nginx.conf.j2
for more details on the placement.
nginx_extra_conf_options: ""
Extra lines to be inserted in the top of nginx.conf
. The value should be defined literally (as you would insert it directly in the nginx.conf
, adhering to the Nginx configuration syntax - such as ;
for line termination, etc.), for example:
nginx_extra_conf_options: |
worker_rlimit_nofile 8192;
See the template in templates/nginx.conf.j2
for more details on the placement.
nginx_log_format: |-
'$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"'
Configures Nginx's log_format
options.
nginx_default_release: ""
(For Debian/Ubuntu only) Allows you to set a different repository for the installation of Nginx. As an example, if you are running Debian's wheezy release, and want to get a newer version of Nginx, you can install the wheezy-backports
repository and set that value here, and Ansible will use that as the -t
option while installing Nginx.
nginx_ppa_use: false
nginx_ppa_version: stable
(For Ubuntu only) Allows you to use the official Nginx PPA instead of the system's package. You can set the version to stable
or development
.
nginx_yum_repo_enabled: true
(For RedHat/CentOS only) Set this to false
to disable the installation of the nginx
yum repository. This could be necessary if you want the default OS stable packages, or if you use Satellite.
nginx_service_state: started
nginx_service_enabled: yes
By default, this role will ensure Nginx is running and enabled at boot after Nginx is configured. You can use these variables to override this behavior if installing in a container or further control over the service state is required.
If you can't customize via variables because an option isn't exposed, you can override the template used to generate the virtualhost configuration files or the nginx.conf
file.
nginx_conf_template: "nginx.conf.j2"
nginx_vhost_template: "vhost.j2"
If necessary you can also set the template on a per vhost basis.
nginx_vhosts:
- listen: "80 default_server"
server_name: "site1.example.com"
root: "/var/www/site1.example.com"
index: "index.php index.html index.htm"
template: "{{ playbook_dir }}/templates/site1.example.com.vhost.j2"
- server_name: "site2.example.com"
root: "/var/www/site2.example.com"
index: "index.php index.html index.htm"
template: "{{ playbook_dir }}/templates/site2.example.com.vhost.j2"
You can either copy and modify the provided template, or extend it with Jinja2 template inheritance and override the specific template block you need to change.
Set the nginx_conf_template
to point to a template file in your playbook directory.
nginx_conf_template: "{{ playbook_dir }}/templates/nginx.conf.j2"
Create the child template in the path you configured above, extending the geerlingguy.nginx
template file relative to your playbook.yml
.
{% extends 'roles/geerlingguy.nginx/templates/nginx.conf.j2' %}
{% block http_gzip %}
gzip on;
gzip_proxied any;
gzip_static on;
gzip_http_version 1.0;
gzip_disable "MSIE [1-6]\.";
gzip_vary on;
gzip_comp_level 6;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/javascript
application/x-javascript
application/json
application/xml
application/xml+rss
application/xhtml+xml
application/x-font-ttf
application/x-font-opentype
image/svg+xml
image/x-icon;
gzip_buffers 16 8k;
gzip_min_length 512;
{% endblock %}
None.
- hosts: server
roles:
- { role: geerlingguy.nginx }
Author: Geerlingguy
Source Code: https://github.com/geerlingguy/ansible-role-nginx
License: MIT license
Ansible-powered LEMP stack for WordPress
Ansible playbooks for setting up a LEMP stack for WordPress.
Trellis will configure a server with the following and more:
See the full installation docs for requirements.
Create a new project:
$ trellis new example.com
Configure your WordPress sites in group_vars/development/wordpress_sites.yml
Start the Vagrant virtual machine:
$ trellis up
Read the local development docs for more information.
A base Ubuntu 18.04 (Bionic) or Ubuntu 20.04 (Focal LTS) server is required for setting up remote servers.
Configure your WordPress sites in group_vars/<environment>/wordpress_sites.yml
and in group_vars/<environment>/vault.yml
(see the Vault docs for how to encrypt files containing passwords).
Add your server to hosts/<environment>
Specify public SSH keys for users
in group_vars/all/users.yml
(see the SSH Keys docs). Provision the server:
$ trellis provision production
Or take advantage of its Digital Ocean support to create a Droplet and provision it in a single command:
$ trellis droplet create production
Read the remote server docs for more information.
Add the repo
(Git URL) of your Bedrock WordPress project in the corresponding group_vars/<environment>/wordpress_sites.yml
file. Set the branch
you want to deploy (defaults to master
). Deploy a site:
$ trellis deploy <environment> <site>
Rollback a deploy:
$ trellis rollback <environment> <site>
Read the deploys docs for more information.
Assuming you're using the standard project structure, you just need to make the project trellis-cli compatible by initializing it:
$ trellis init
Keep track of development and community news.
Trellis is an open source project and completely free to use.
However, the amount of effort needed to maintain and develop new features and products within the Roots ecosystem is not sustainable without proper financial backing. If you have the capability, please consider sponsoring Roots.
Website
Documentation
Releases
Support
Author: Roots
Source Code: https://github.com/roots/trellis
License: MIT license
This collection provides battle tested hardening for:
The hardening is intended to be compliant with the Inspec DevSec Baselines:
The roles are now part of the hardening collection. We have kept the old releases of the os-hardening
role in this repository, so you can find them by exploring older tags. The last release of the standalone role was 6.2.0.
The other roles are in separate, archived repositories:
In progress, not working:
Install the collection via ansible-galaxy:
ansible-galaxy collection install devsec.hardening
Please refer to the examples in the READMEs of the roles.
See Ansible Using collections for more details.
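As a minimal sketch, a playbook applying the OS baseline from the collection could look like this; the role name os_hardening is assumed from the collection's role naming, so verify it against the role README before use.

```yaml
- name: Harden the operating system
  hosts: all
  become: true
  collections:
    - devsec.hardening
  roles:
    - os_hardening
```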
See the contributor guideline.
See the changelog.
Todos:
General information:
Author: Dev-sec
Source Code: https://github.com/dev-sec/ansible-collection-hardening
License: Apache-2.0 license
How to Install Nginx on CentOS 8 – Google Cloud or AWS. Nginx is a high-performance, lightweight HTTP and reverse-proxy web server capable of handling large websites. This guide explains how to install Nginx on CentOS 8.
This tutorial is tested on a Google Compute Engine VM instance running CentOS 8. This setup will also work on other cloud services such as AWS, DigitalOcean, etc., or on any VPS or dedicated server.
If you are using Google Cloud then you can learn how to setup CentOS 8 on a Google Compute Engine.
Start by updating the packages to the latest available version using the following commands.
sudo yum update
sudo yum upgrade
Once all packages are up to date, you can proceed to install Nginx.
Nginx is available in the default CentOS repositories. So it is easy to install Nginx with just a single command.
sudo yum install nginx
Once the installation is completed, you need to start and enable Nginx to start on server boot.
sudo systemctl enable nginx
sudo systemctl start nginx
To verify the status of the Nginx service you can use the status
command.
sudo systemctl status nginx
Output
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2019-11-27 08:34:48 UTC; 1min 1s ago
Process: 823 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
Process: 821 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
Process: 820 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 825 (nginx)
This indicates Nginx is up and running on your CentOS server.
Firewalld provides predefined rules to accept connections on port 80 for HTTP and 443 for HTTPS; enable them with the following commands.
sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload
Now you can check your browser by pointing it to your public IP address (http://IP_Address
).
You will see the Nginx welcome page.
Some important Nginx files and directories:
/etc/nginx – the Nginx configuration directory.
/etc/nginx/nginx.conf – the main configuration file.
/etc/nginx/conf.d – directory for additional configuration snippets.
/var/www/html – the default document root.
/var/log/nginx/error.log – the error log.
/var/log/nginx/access.log – the access log.
Now you have learned how to install Nginx on your CentOS 8 server on any hosting platform.
Thanks for your time. If you face any problems or have any feedback, please leave a comment below.
Original article source at: https://www.cloudbooklet.com/
Install Jira on Ubuntu 18.04 with Nginx, RDS and Let’s Encrypt SSL on an AWS EC2 Instance. Jira is software designed to help teams plan, track, and manage software development easily.
In this guide we are going to learn how to install Jira and configure it with Nginx reverse proxy and secure it with Let’s Encrypt SSL.
You will also configure RDS and connect it with Jira.
SSH to your EC2 Instance and perform the steps listed below.
I assume you have already configured a subdomain for Jira, e.g. jira.domain.com.
Let’s start by updating the local package index to the latest available versions with the following commands.
sudo apt update
sudo apt upgrade
Once the update is done you can start installing the required packages.
You can download the latest version of Jira from the official Atlassian website. The version we are going to install is JIRA Core 8.6.1.
SSH to your server or instance and start downloading Jira.
wget https://www.atlassian.com/software/jira/downloads/binary/atlassian-jira-core-8.6.1-x64.bin
Once the file is downloaded, make the file executable.
sudo chmod a+x atlassian-jira-core-8.6.1-x64.bin
Now you can start the installation.
Run the installation file.
sudo ./atlassian-jira-core-8.6.1-x64.bin
Now you will get:
Output
We couldn't find fontconfig, which is required to use OpenJDK. Press [y, Enter] to install it.
Enter y
followed by Enter
and start the installation.
When you are prompted to choose the Installation type, you can choose Custom Install by typing 2
and Enter
Install JIRA as Service? Type y
You can leave the other steps at their defaults.
Finally you will get the overview of your installation.
Details on where JIRA Core will be installed and the settings that will be used.
Installation Directory: /opt/atlassian/jira
Home Directory: /var/atlassian/application-data/jira
HTTP Port: 8080
RMI Port: 8005
Install as service: Yes
Install [i, Enter], Exit [e]
Now enter i
to start the installation.
Once the installation is complete you can start Jira when prompted.
You will receive an output similar to the one below. This indicates Jira is installed successfully and running on port 8080.
Output
Installation of JIRA Core 8.6.1 is complete
Your installation of JIRA Core 8.6.1 is now ready and can be accessed via
your browser.
JIRA Core 8.6.1 can be accessed at http://localhost:8080
Finishing installation …
Now you can configure the Tomcat settings for the Nginx reverse proxy.
Stop Jira.
sudo service jira stop
Edit your server.xml
and replace the connector and the context.
sudo nano /opt/atlassian/jira/conf/server.xml
Replace
<Context path="" docBase="${catalina.home}/atlassian-jira" reloadable="false" useHttpOnly="true">
with (add a /
symbol in the path)
<Context path="/" docBase="${catalina.home}/atlassian-jira" reloadable="false" useHttpOnly="true">
Scroll up to find the default connector and add the proxyName
, proxyPort
, scheme
and secure
attributes, so the connector looks like the one below.
<Connector port="8080" relaxedPathChars="[]|" relaxedQueryChars="Special characters" maxThreads="150" minSpareThreads="25" connectionTimeout="20000" enableLookups="false" maxHttpHeaderSize="8192" protocol="HTTP/1.1" useBodyEncodingForURI="true" redirectPort="8443" acceptCount="100" disableUploadTimeout="true" proxyName="jira.domain.com" proxyPort="443" scheme="https" secure="true"/>
Hit CTRL + X
followed by Y
and ENTER
to save and exit the file.
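A stray quote in that long Connector line is easy to miss and will keep Tomcat from starting, so it can be worth checking that server.xml is still well-formed before starting Jira. A sketch validating a minimal stand-in file; on the server, parse /opt/atlassian/jira/conf/server.xml instead (python3 is assumed to be available).

```shell
# Minimal stand-in for an edited Connector element (replace the path with
# /opt/atlassian/jira/conf/server.xml on a real install).
sample=$(mktemp)
cat > "$sample" <<'EOF'
<Connector port="8080" proxyName="jira.domain.com" proxyPort="443"
           scheme="https" secure="true"/>
EOF

# Any quoting or nesting mistake makes the parse fail with a clear error.
result=$(python3 -c "import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1]); print('well-formed')" "$sample")
echo "$result"
```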
Once the setup is done, you can start the Jira.
sudo service jira start
If you get an error while starting, the catalina.pid
file was not deleted on shutdown. Delete the file manually and start Jira again.
sudo su
sudo rm -rf /opt/atlassian/jira/work/catalina.pid
exit
sudo service jira start
Once the restart is successful you can install and configure Nginx.
sudo apt install nginx
Remove default Nginx configuration.
sudo rm -rf /etc/nginx/sites-available/default
sudo rm -rf /etc/nginx/sites-enabled/default
Create a new configuration file for Jira.
sudo nano /etc/nginx/sites-available/jira.conf
Add the following configurations.
server {
listen [::]:80;
listen 80;
server_name jira.domain.com;
location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://localhost:8080;
client_max_body_size 10M;
}
}
Hit Ctrl+X
followed by Y
and Enter
to save the file and exit.
To enable this newly created website configuration, symlink the file that you just created into the sites-enabled
directory.
sudo ln -s /etc/nginx/sites-available/jira.conf /etc/nginx/sites-enabled/jira.conf
Check your configuration and restart Nginx for the changes to take effect.
sudo nginx -t
sudo service nginx restart
HTTPS
HTTPS is a protocol for secure communication between a server (instance) and a client (web browser). Thanks to Let’s Encrypt, which provides free SSL certificates, HTTPS is now widely adopted and builds trust with your audience.
sudo add-apt-repository ppa:certbot/certbot
sudo apt update
sudo apt install python-certbot-nginx
Now that Certbot by Let’s Encrypt is installed on Ubuntu 18.04, run this command to obtain your certificates.
sudo certbot --nginx certonly
Enter your email and agree to the terms and conditions; you will then see the list of domains for which you can generate an SSL certificate.
To select all domains, simply hit Enter.
The Certbot client will automatically generate the new certificate for your domain. Now we need to update the Nginx config.
Open your site’s Nginx configuration file and replace everything with the following, substituting the file paths with the ones you received when obtaining the SSL certificate. The ssl_certificate directive should point to your fullchain.pem file, and the ssl_certificate_key directive to your privkey.pem file.
server {
listen [::]:80;
listen 80;
server_name jira.domain.com;
return 301 https://jira.domain.com$request_uri;
}
server {
listen [::]:443 ssl;
listen 443 ssl;
server_name jira.domain.com;
ssl_certificate /etc/letsencrypt/live/jira.domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/jira.domain.com/privkey.pem;
location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://localhost:8080;
client_max_body_size 10M;
}
}
Hit CTRL+X
followed by Y
to save the changes.
Check your configuration and restart Nginx for the changes to take effect.
sudo nginx -t
sudo service nginx restart
Certificates provided by Let’s Encrypt are valid for 90 days only, so they need to be renewed regularly. Set up a cronjob that checks for certificates due to expire within the next 30 days and renews them automatically.
sudo crontab -e
Add this line at the end of the file
0 0,12 * * * certbot renew >/dev/null 2>&1
Hit CTRL+X
followed by Y
to save the changes.
This cronjob attempts a renewal check twice daily; certbot renew only replaces certificates that are close to expiry.
Now point your browser to your Jira URL. You will see the Jira setup page.
Follow the on-screen steps, using your RDS database credentials, to complete the setup.
Now you have learned how to install Jira with Nginx reverse proxy and secure it with Let’s Encrypt SSL on Amazon Web Services.
Thanks for your time.
Original article source at: https://www.cloudbooklet.com/
Install Odoo 13 on Ubuntu 18.04 with Nginx – AWS. In this tutorial you are going to learn how to install and set up Odoo with an Nginx reverse proxy and connect it to PostgreSQL on Amazon RDS.
Odoo is self-hosted business management software with a top-notch user experience. The applications within Odoo integrate seamlessly with each other, allowing you to automate your business processes with ease.
SSH to your EC2 Instance and perform the steps listed below.
Let’s start by updating the local package index to the latest available versions with the following commands.
sudo apt update
sudo apt upgrade
Once the update is done you can start installing the required packages.
wkhtmltopdf is a package used to render HTML to PDF and other image formats. If you use Odoo to print PDF reports, you should install the wkhtmltopdf tool. The version recommended for Odoo is 0.12.5, which is not included in the default Ubuntu 18.04 repository, so we download and install the package directly.
wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.5/wkhtmltox_0.12.5-1.bionic_amd64.deb
sudo apt install ./wkhtmltox_0.12.5-1.bionic_amd64.deb
Now you can install Odoo 13 by adding the repository to your Ubuntu server.
wget -O - https://nightly.odoo.com/odoo.key | sudo apt-key add -
echo "deb http://nightly.odoo.com/13.0/nightly/deb/ ./" | sudo tee /etc/apt/sources.list.d/odoo.list
Update the apt cache and install Odoo 13.
sudo apt update
sudo apt install odoo
Once the installation is complete Odoo is started automatically as a service.
To make sure Odoo is running you can check the status using the following command.
sudo service odoo status
Output
● odoo.service - Odoo Open Source ERP and CRM
Loaded: loaded (/lib/systemd/system/odoo.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-02-14 09:29:38 UTC; 10min ago
Main PID: 8387 (odoo)
Tasks: 6 (limit: 4395)
CGroup: /system.slice/odoo.service
└─8387 /usr/bin/python3 /usr/bin/odoo --config /etc/odoo/odoo.conf --logfile /var/log/odoo/odoo-server.log
Feb 14 09:29:38 odoo systemd[1]: Started Odoo Open Source ERP and CRM.
This indicates Odoo is started and running successfully.
Now you can enable Odoo service to start on system boot.
sudo systemctl enable --now odoo
Now you can configure Odoo to use a remote database such as Amazon RDS.
sudo nano /etc/odoo/odoo.conf
Replace the highlighted values with your Amazon RDS connection details.
[options]
; This is the password that allows database operations:
; admin_passwd = admin
db_host = RDS_ENDPOINT
db_port = False
db_user = RDS_user
db_password = RDS_user_password
;addons_path = /usr/lib/python3/dist-packages/odoo/addons
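The odoo.conf file uses standard INI syntax, so you can sanity-check it with Python's configparser before restarting Odoo. A small sketch using the placeholder values from above (not real credentials):

```python
import configparser

# The same placeholder values as in the tutorial's /etc/odoo/odoo.conf.
ODOO_CONF = """
[options]
db_host = RDS_ENDPOINT
db_port = False
db_user = RDS_user
db_password = RDS_user_password
"""

parser = configparser.ConfigParser()
parser.read_string(ODOO_CONF)
opts = parser["options"]
print(opts["db_host"], opts["db_user"])  # RDS_ENDPOINT RDS_user
```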
Restart Odoo.
sudo service odoo restart
Install Nginx using the following command.
sudo apt install nginx
Remove default Nginx configurations.
sudo rm /etc/nginx/sites-enabled/default
sudo rm /etc/nginx/sites-available/default
Create a new Nginx configuration for Odoo in the sites-available
directory.
sudo nano /etc/nginx/sites-available/odoo.conf
Copy and paste the following configuration, making sure to change the server_name directive to match your domain name.
upstream odooserver {
server 127.0.0.1:8069;
}
server {
listen 80;
server_name domainname.com;
access_log /var/log/nginx/odoo_access.log;
error_log /var/log/nginx/odoo_error.log;
proxy_read_timeout 720s;
proxy_connect_timeout 720s;
proxy_send_timeout 720s;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
location / {
proxy_redirect off;
proxy_pass http://odooserver;
}
location ~* /web/static/ {
proxy_cache_valid 200 90m;
proxy_buffering on;
expires 864000;
proxy_pass http://odooserver;
}
gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
gzip on;
}
Hit Ctrl+X
followed by Y
and Enter
to save the file and exit.
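The proxy_set_header lines in the configuration above forward the original client details to Odoo; X-Forwarded-For in particular accumulates one address per proxy hop, with the left-most entry being the real client. A minimal sketch of how a backend might read it (illustrative, not Odoo's actual code):

```python
def client_ip(x_forwarded_for):
    """Left-most address in X-Forwarded-For is the original client;
    each proxy appends its own peer address to the right."""
    return x_forwarded_for.split(",")[0].strip()

print(client_ip("203.0.113.7, 10.0.0.2"))  # 203.0.113.7
```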
To enable this newly created website configuration, symlink the file that you just created into the sites-enabled
directory.
sudo ln -s /etc/nginx/sites-available/odoo.conf /etc/nginx/sites-enabled/odoo.conf
Check your configuration and restart Nginx for the changes to take effect.
sudo nginx -t
sudo service nginx restart
HTTPS
HTTPS is a protocol for secure communication between a server (instance) and a client (web browser). Thanks to Let’s Encrypt, which provides free SSL certificates, HTTPS is now widely adopted and helps build trust with your audience.
sudo add-apt-repository ppa:certbot/certbot
sudo apt update
sudo apt install python-certbot-nginx
Now that Certbot by Let’s Encrypt is installed on Ubuntu 18.04, run this command to obtain your certificates.
sudo certbot --nginx certonly
Enter your email
and agree to the terms and conditions; you will then see the list of domains for which an SSL certificate can be generated.
To select all domains simply hit Enter
The Certbot client will automatically generate the new certificate for your domain. Now we need to update the Nginx config.
Open your site’s Nginx configuration file and replace everything with the following, substituting the certificate file paths with the ones you received when obtaining the SSL certificate. The ssl_certificate directive should point to your fullchain.pem file, and the ssl_certificate_key directive should point to your privkey.pem file.
upstream odooserver {
server 127.0.0.1:8069;
}
server {
listen [::]:80;
listen 80;
server_name domainname.com www.domainname.com;
return 301 https://domainname.com$request_uri;
}
server {
listen [::]:443 ssl;
listen 443 ssl;
server_name www.domainname.com;
ssl_certificate /etc/letsencrypt/live/domainname.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domainname.com/privkey.pem;
return 301 https://domainname.com$request_uri;
}
server {
listen [::]:443 ssl http2;
listen 443 ssl http2;
server_name domainname.com;
ssl_certificate /etc/letsencrypt/live/domainname.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domainname.com/privkey.pem;
access_log /var/log/nginx/odoo_access.log;
error_log /var/log/nginx/odoo_error.log;
proxy_read_timeout 720s;
proxy_connect_timeout 720s;
proxy_send_timeout 720s;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
location / {
proxy_redirect off;
proxy_pass http://odooserver;
}
location ~* /web/static/ {
proxy_cache_valid 200 90m;
proxy_buffering on;
expires 864000;
proxy_pass http://odooserver;
}
gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
gzip on;
}
Hit CTRL+X
followed by Y
to save the changes.
Check your configuration and restart Nginx for the changes to take effect.
sudo nginx -t
sudo service nginx restart
Certificates provided by Let’s Encrypt are valid for 90 days only, so you need to renew them regularly. Set up a cron job that checks whether the certificate is due to expire within the next 30 days and renews it automatically.
sudo crontab -e
Add this line at the end of the file
0 0,12 * * * certbot renew >/dev/null 2>&1
Hit CTRL+X
followed by Y
to save the changes.
This cron job checks twice daily whether the certificate is due for renewal.
Now you can visit your domain name in your web browser. You will see a setup page where you can create the database and admin user for your Odoo instance.
Fill in the appropriate values and click Create database. Odoo will then be ready to use.
Now you have learned how to install Odoo 13 on your Ubuntu server with Nginx on AWS and secure it with Let’s Encrypt.
Thanks for your time. If you face any problems or have any feedback, please leave a comment below.
Original article source at: https://www.cloudbooklet.com/
1670066280
Install Odoo 15 on Ubuntu 20.04 with Nginx – AWS. In this tutorial you are going to learn how to install and set up Odoo with an Nginx reverse proxy and connect it to PostgreSQL in Amazon RDS.
Odoo is self-hosted business management software with a top-notch user experience. The applications within Odoo are tightly integrated with each other, allowing you to fully automate your business processes with ease.
SSH to your EC2 Instance and perform the steps listed below.
Let’s start by updating the local package index and upgrading the installed packages to the latest available versions.
sudo apt update
sudo apt upgrade
Once the update is done you can start installing the required packages.
wkhtmltopdf is a package used to render HTML to PDF and other image formats. If you use Odoo to print PDF reports, you should install the wkhtmltopdf tool. The recommended version for Odoo is 0.12.6, which is not included in the default Ubuntu 20.04 repository.
So we shall download the package and install it.
wget https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox_0.12.6-1.focal_amd64.deb
sudo apt install ./wkhtmltox_0.12.6-1.focal_amd64.deb
The easiest way to install Odoo 15 is to add the nightly build repository to your sources and install from it.
wget -O - https://nightly.odoo.com/odoo.key | sudo apt-key add -
echo "deb http://nightly.odoo.com/15.0/nightly/deb/ ./" | sudo tee /etc/apt/sources.list.d/odoo.list
Update the apt cache and install Odoo 15.
sudo apt update
sudo apt install odoo
Once the installation is complete Odoo is started automatically as a service.
To make sure Odoo is running you can check the status using the following command.
sudo service odoo status
Output
● odoo.service - Odoo Open Source ERP and CRM
Loaded: loaded (/lib/systemd/system/odoo.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-01-03 06:31:14 UTC; 31s ago
Main PID: 9375 (odoo)
Tasks: 6 (limit: 4395)
CGroup: /system.slice/odoo.service
└─9375 /usr/bin/python3 /usr/bin/odoo --config /etc/odoo/odoo.conf --logfile /var/log/odoo/odoo-server.log
Jan 03 06:31:14 odoo systemd[1]: Started Odoo Open Source ERP and CRM.
This indicates Odoo is started and running successfully.
Now you can enable Odoo service to start on system boot.
sudo systemctl enable --now odoo
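Odoo listens on 127.0.0.1:8069 by default, which is also the port the Nginx configuration further down proxies to. As an aside, here is a minimal Python sketch of a TCP reachability check you could run on the server (host and port here are Odoo's defaults; adjust if you changed them):

```python
import socket

def is_listening(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On the Odoo server this would be:
# is_listening("127.0.0.1", 8069)
```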
Now you can configure Odoo to use a remote database such as Amazon RDS.
sudo nano /etc/odoo/odoo.conf
Replace the highlighted values with your Amazon RDS connection details.
[options]
; This is the password that allows database operations:
; admin_passwd = admin
db_host = RDS_ENDPOINT
db_port = False
db_user = RDS_user
db_password = RDS_user_password
;addons_path = /usr/lib/python3/dist-packages/odoo/addons
Restart Odoo.
sudo service odoo restart
Install Nginx using the following command.
sudo apt install nginx
Remove default Nginx configurations.
sudo rm /etc/nginx/sites-enabled/default
sudo rm /etc/nginx/sites-available/default
Create a new Nginx configuration for Odoo in the sites-available
directory.
sudo nano /etc/nginx/sites-available/odoo.conf
Copy and paste the following configuration, making sure to change the server_name directive to match your domain name.
upstream odooserver {
server 127.0.0.1:8069;
}
server {
listen 80;
server_name domainname.com;
access_log /var/log/nginx/odoo_access.log;
error_log /var/log/nginx/odoo_error.log;
proxy_read_timeout 720s;
proxy_connect_timeout 720s;
proxy_send_timeout 720s;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
location / {
proxy_redirect off;
proxy_pass http://odooserver;
}
location ~* /web/static/ {
proxy_cache_valid 200 90m;
proxy_buffering on;
expires 864000;
proxy_pass http://odooserver;
}
gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
gzip on;
}
Hit Ctrl+X
followed by Y
and Enter
to save the file and exit.
To enable this newly created website configuration, symlink the file that you just created into the sites-enabled
directory.
sudo ln -s /etc/nginx/sites-available/odoo.conf /etc/nginx/sites-enabled/odoo.conf
Check your configuration and restart Nginx for the changes to take effect.
sudo nginx -t
sudo service nginx restart
HTTPS
HTTPS is a protocol for secure communication between a server (instance) and a client (web browser). Thanks to Let’s Encrypt, which provides free SSL certificates, HTTPS is now widely adopted and helps build trust with your audience.
sudo apt install python3-certbot-nginx
Now that Certbot by Let’s Encrypt is installed on Ubuntu 20.04, run this command to obtain your certificates.
sudo certbot --nginx certonly
Enter your email
and agree to the terms and conditions; you will then see the list of domains for which an SSL certificate can be generated.
To select all domains simply hit Enter
The Certbot client will automatically generate the new certificate for your domain. Now we need to update the Nginx config.
Open your site’s Nginx configuration file and replace everything with the following, substituting the certificate file paths with the ones you received when obtaining the SSL certificate. The ssl_certificate directive should point to your fullchain.pem file, and the ssl_certificate_key directive should point to your privkey.pem file.
upstream odooserver {
server 127.0.0.1:8069;
}
server {
listen [::]:80;
listen 80;
server_name domainname.com www.domainname.com;
return 301 https://domainname.com$request_uri;
}
server {
listen [::]:443 ssl;
listen 443 ssl;
server_name www.domainname.com;
ssl_certificate /etc/letsencrypt/live/domainname.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domainname.com/privkey.pem;
return 301 https://domainname.com$request_uri;
}
server {
listen [::]:443 ssl http2;
listen 443 ssl http2;
server_name domainname.com;
ssl_certificate /etc/letsencrypt/live/domainname.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domainname.com/privkey.pem;
access_log /var/log/nginx/odoo_access.log;
error_log /var/log/nginx/odoo_error.log;
proxy_read_timeout 720s;
proxy_connect_timeout 720s;
proxy_send_timeout 720s;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
location / {
proxy_redirect off;
proxy_pass http://odooserver;
}
location ~* /web/static/ {
proxy_cache_valid 200 90m;
proxy_buffering on;
expires 864000;
proxy_pass http://odooserver;
}
gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
gzip on;
}
Hit CTRL+X
followed by Y
to save the changes.
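The three server blocks above chain 301 redirects so that every URL variant lands on https://domainname.com: plain HTTP (either hostname) and https://www both redirect to the HTTPS apex, which the final block serves. A small sketch of that rewrite logic (domainname.com is the placeholder from the config):

```python
def redirect_target(scheme, host, request_uri, apex="domainname.com"):
    """Mimic the 'return 301 https://domainname.com$request_uri'
    rules in the Nginx config above."""
    if scheme == "http" or host != apex:
        return "https://" + apex + request_uri
    return None  # already canonical: served by the final server block

print(redirect_target("http", "www.domainname.com", "/web/login"))
# https://domainname.com/web/login
```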
Check your configuration and restart Nginx for the changes to take effect.
sudo nginx -t
sudo service nginx restart
Certificates provided by Let’s Encrypt are valid for 90 days only, so you need to renew them regularly. Set up a cron job that checks whether the certificate is due to expire within the next 30 days and renews it automatically.
sudo crontab -e
Add this line at the end of the file
0 0,12 * * * certbot renew >/dev/null 2>&1
Hit CTRL+X
followed by Y
to save the changes.
This cron job checks twice daily whether the certificate is due for renewal.
Now you can visit your domain name in your web browser. You will see a setup page where you can create the database and admin user for your Odoo instance.
Fill in the appropriate values and click Create database. Odoo will then be ready to use.
Now you have learned how to install Odoo 15 on your Ubuntu server with Nginx reverse proxy on AWS and secure it with Let’s Encrypt.
Thanks for your time. If you face any problems or have any feedback, please leave a comment below.
Original article source at: https://www.cloudbooklet.com/
1670049600
Install Apache Tomcat 10 on Ubuntu 22.04 with Nginx. Apache Tomcat is an open source web server and servlet container which is mainly used to serve Java-based applications.
In this guide you are going to learn how to install Apache Tomcat 10 on Ubuntu 22.04 and secure the setup with Nginx and Let’s Encrypt SSL.
Start by updating the server packages to the latest version available.
sudo apt update
sudo apt dist-upgrade -y
It is best to run Tomcat as its own unprivileged user. Execute the following command to create a new user with the required privileges for Tomcat. This user won’t be allowed to log in over SSH.
sudo useradd -m -d /opt/tomcat -U -s /bin/false tomcat
Install the default JDK using the command below.
sudo apt install default-jdk
Once the installation is completed, check the version using the following command.
java -version
Your output should be similar to the one below.
openjdk version "11.0.15" 2022-04-19
OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
OpenJDK 64-Bit Server VM (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1, mixed mode, sharing)
Download the latest version of Tomcat from their official downloads page. Choose the tar.gz
under the core section.
Download the archive using wget.
cd ~/
wget https://dlcdn.apache.org/tomcat/tomcat-10/v10.0.21/bin/apache-tomcat-10.0.21.tar.gz
Extract the contents to /opt/tomcat
directory.
sudo tar xzvf apache-tomcat-10*tar.gz -C /opt/tomcat --strip-components=1
Configure correct permissions for Tomcat files.
sudo chown -R tomcat:tomcat /opt/tomcat/
sudo chmod -R u+x /opt/tomcat/bin
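The --strip-components=1 flag used above drops the leading apache-tomcat-10.0.21/ component from every path in the archive, so the contents land directly in /opt/tomcat. A small sketch of the path transformation:

```python
def strip_components(member_path, n=1):
    """Drop the first n path components, as tar --strip-components=n does."""
    parts = member_path.split("/")
    return "/".join(parts[n:])

print(strip_components("apache-tomcat-10.0.21/bin/startup.sh"))  # bin/startup.sh
```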
Now we need to setup users who can access the Host manager and the Manager pages in Tomcat.
Add the users with passwords in /opt/tomcat/conf/tomcat-users.xml
sudo nano /opt/tomcat/conf/tomcat-users.xml
Add the following lines just before the closing </tomcat-users> tag.
<role rolename="manager-gui" />
<user username="manager" password="secure_password" roles="manager-gui" />
<role rolename="admin-gui" />
<user username="admin" password="secure_password" roles="manager-gui,admin-gui" />
Now we have two users who can access the Manager and Host Manager pages.
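Since tomcat-users.xml is plain XML, you can quickly verify the role assignments with Python's xml.etree. A sketch using a minimal fragment mirroring the entries above (the passwords are the tutorial's placeholders, not real ones):

```python
import xml.etree.ElementTree as ET

# A minimal fragment mirroring the entries added above.
USERS_XML = """
<tomcat-users>
  <role rolename="manager-gui"/>
  <user username="manager" password="secure_password" roles="manager-gui"/>
  <role rolename="admin-gui"/>
  <user username="admin" password="secure_password" roles="manager-gui,admin-gui"/>
</tomcat-users>
"""

root = ET.fromstring(USERS_XML)
roles = {u.get("username"): u.get("roles").split(",") for u in root.iter("user")}
print(roles["admin"])  # ['manager-gui', 'admin-gui']
```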
Here we will configure a systemd service so that Tomcat can be started, stopped and restarted automatically.
Note the Java location.
sudo update-java-alternatives -l
Output
java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64
Create a systemd file.
sudo nano /etc/systemd/system/tomcat.service
Add the following contents to the file.
[Unit]
Description=Tomcat
After=network.target
[Service]
Type=forking
User=tomcat
Group=tomcat
Environment="JAVA_HOME=/usr/lib/jvm/java-1.11.0-openjdk-amd64"
Environment="JAVA_OPTS=-Djava.security.egd=file:///dev/urandom"
Environment="CATALINA_BASE=/opt/tomcat"
Environment="CATALINA_HOME=/opt/tomcat"
Environment="CATALINA_PID=/opt/tomcat/temp/tomcat.pid"
Environment="CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC"
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh
RestartSec=10
Restart=always
[Install]
WantedBy=multi-user.target
Replace the JAVA_HOME value with the Java location you noted earlier.
Reload systemd daemon for the changes to take effect.
sudo systemctl daemon-reload
Start Tomcat.
sudo systemctl start tomcat
Enable Tomcat to start at system boot.
sudo systemctl enable tomcat
Check Tomcat status.
sudo systemctl status tomcat
Output
● tomcat.service - Tomcat
Loaded: loaded (/etc/systemd/system/tomcat.service; disabled; vendor preset: enabled)
Active: active (running) since Wed 2022-05-25 06:41:36 UTC; 6s ago
Process: 5155 ExecStart=/opt/tomcat/bin/startup.sh (code=exited, status=0/SUCCESS)
Main PID: 5302 (java)
Tasks: 29 (limit: 1151)
Memory: 132.7M
CGroup: /system.slice/tomcat.service
Install Nginx using the following command.
sudo apt install nginx
Remove default configurations
sudo rm /etc/nginx/sites-available/default
sudo rm /etc/nginx/sites-enabled/default
Create new Nginx configuration
sudo nano /etc/nginx/sites-available/yourdomainname.conf
Paste the following
server {
listen [::]:80;
listen 80;
server_name yourdomainname.com www.yourdomainname.com;
location / {
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Save and exit the file.
Enable your configuration by creating a symbolic link.
sudo ln -s /etc/nginx/sites-available/yourdomainname.conf /etc/nginx/sites-enabled/yourdomainname.conf
Install Certbot package.
sudo apt install python3-certbot-nginx
Install SSL certificate using the below command.
sudo certbot --nginx --redirect --no-eff-email --agree-tos -m youremail@mail.com -d yourdomainname.com -d www.yourdomainname.com
If your domain points to the server, the free SSL certificate will be installed, HTTP-to-HTTPS redirection will be configured automatically, and Nginx will be restarted by itself for the changes to take effect.
You can also check your Nginx configuration and restart Nginx manually.
sudo nginx -t
sudo service nginx restart
Now check your domain in your browser.
Click Manager App, you will be prompted to enter username and password. Use the one we have configured in the Tomcat users section.
You can also take a look on the Host Manager page.
You can also view the server status.
Now you have learned how to install Apache Tomcat on Ubuntu 22.04 and secure it with Nginx and Let’s Encrypt SSL.
Thanks for your time. If you face any problems or have any feedback, please leave a comment below.
Original article source at: https://www.cloudbooklet.com/
1670019120
How to Install File Browser on Ubuntu 22.04 with Nginx
File Browser is a file management application which provides a modern interface for uploading, deleting, previewing, renaming and editing your files within a specified directory. It also allows you to create multiple users and assign each user their own directory. With this application you can eliminate a chroot setup used solely for managing files.
In this guide you are going to learn how to install File Browser on Ubuntu 22.04 and configure it with an Nginx reverse proxy. We will also create a configuration file to specify the root directory, and a systemd unit file to start File Browser as a service under the same user Nginx uses, www-data.
Start by updating your server packages to the latest version available.
sudo apt update
sudo apt upgrade -y
Execute the below command to install File Browser.
curl -fsSL https://raw.githubusercontent.com/filebrowser/get/master/get.sh | bash
Once file browser is installed, you can start it using the filebrowser
command to check how it works.
filebrowser -r /path/to/your/files
By default, File Browser runs on port 8080. You can open your server’s IP address on port 8080 to view the user interface in your browser.
Now we shall create a systemd configuration file so that you can use File Browser as a service.
Create a new file named filebrowser.service
sudo nano /etc/systemd/system/filebrowser.service
Add the following to the file.
[Unit]
Description=File browser: %I
After=network.target
[Service]
User=www-data
Group=www-data
ExecStart=/usr/local/bin/filebrowser -c /etc/filebrowser/default.json
[Install]
WantedBy=multi-user.target
Save and Exit the file.
Create a new directory mentioned in your systemd file for file browser.
sudo mkdir /etc/filebrowser
Create a new file named default.json
sudo nano /etc/filebrowser/default.json
Add the following to the file.
{
"port": 8080,
"baseURL": "",
"address": "",
"log": "stdout",
"database": "/etc/filebrowser/filebrowser.db",
"root": "/var/www/html/"
}
Save the file and exit.
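Since default.json is plain JSON, a quick way to catch syntax mistakes before starting the service is to parse it. A sketch using the same values as above:

```python
import json

CONFIG = """
{
  "port": 8080,
  "baseURL": "",
  "address": "",
  "log": "stdout",
  "database": "/etc/filebrowser/filebrowser.db",
  "root": "/var/www/html/"
}
"""

cfg = json.loads(CONFIG)
missing = {"port", "database", "root"} - cfg.keys()
print(cfg["port"], missing)  # 8080 set()
```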
The filebrowser.db
should be located in your home directory ~/
Move the file to the directory we created above.
sudo mv ~/filebrowser.db /etc/filebrowser
Configure correct permissions for database.
sudo chown -R www-data:www-data /etc/filebrowser/filebrowser.db
Now we have all configurations in place.
Enable file browser to start at system boot.
sudo systemctl enable filebrowser
Now you can start filebrowser.
sudo service filebrowser start
You can check the status using the below command.
sudo service filebrowser status
Output
● filebrowser.service - File browser:
Loaded: loaded (/etc/systemd/system/filebrowser.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-11-21 09:12:33 UTC; 4min 21s ago
Main PID: 462 (filebrowser)
Tasks: 7 (limit: 1149)
Memory: 14.0M
CPU: 34ms
CGroup: /system.slice/filebrowser.service
└─462 /usr/local/bin/filebrowser -c /etc/filebrowser/default.json
Nov 21 09:12:33 filemanager systemd[1]: Started File browser: .
Nov 21 09:12:38 filemanager filebrowser[462]: 2022/11/21 09:12:38 Using config file: /etc/filebrowser/default.>
Nov 21 09:12:38 filemanager filebrowser[462]: 2022/11/21 09:12:38 Listening on [::]:8080
Nov 21 09:13:47 filemanager filebrowser[462]: 2022/11/21 09:13:47 /api/renew: 401 127.0.0.1 <nil>
Install Nginx using the below command.
sudo apt install nginx
Remove default Nginx configuration.
sudo rm /etc/nginx/sites-available/default
sudo rm /etc/nginx/sites-enabled/default
Create new Nginx configuration
sudo nano /etc/nginx/sites-available/filebrowser.conf
We will use a subdomain to configure our file browser.
Paste the following
server {
listen [::]:80;
listen 80;
server_name sub.domain.com;
client_max_body_size 48M;
location / {
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
You can update the client_max_body_size 48M
to whichever size you wish to have the upload size.
Save and exit the file.
Enable your configuration by creating a symbolic link
sudo ln -s /etc/nginx/sites-available/filebrowser.conf /etc/nginx/sites-enabled/filebrowser.conf
Check your Nginx configuration and restart Nginx
sudo nginx -t
sudo service nginx restart
Now you can visit your subdomain in your browser; you should see the login page.
These are the default login details.
user: admin
pass: admin
HTTPS
HTTPS is a protocol for secure communication between a server (instance) and a client (web browser). Thanks to Let’s Encrypt, which provides free SSL certificates, HTTPS is now widely adopted and helps build trust with your audience.
sudo apt install python3-certbot-nginx
Now that Certbot by Let’s Encrypt is installed on Ubuntu 22.04, run this command to obtain your certificates.
sudo certbot --nginx --redirect --no-eff-email --agree-tos -m youremail@domain.com -d sub.domain.com
Now you have learned how to install File Browser on Ubuntu 22.04 with Nginx reverse proxy.
Thanks for your time. If you face any problems or have any feedback, please leave a comment below.
Original article source at: https://www.cloudbooklet.com/
1668591060
In this tutorial, we'll look at how to configure Flower to run behind Nginx with Docker. We'll also set up basic authentication.
Assuming you're using Redis as your message broker, your Docker Compose config will look similar to:
version: '3.7'
services:
redis:
image: redis
expose:
- 6379
worker:
build:
context: .
dockerfile: Dockerfile
command: ['celery', '-A', 'app.app', 'worker', '-l', 'info']
environment:
- BROKER_URL=redis://redis:6379
- RESULT_BACKEND=redis://redis:6379
depends_on:
- redis
flower:
image: mher/flower:0.9.7
command: ['flower', '--broker=redis://redis:6379', '--port=5555']
ports:
- 5557:5555
depends_on:
- redis
As of writing, the official Flower Docker image does not have a tag for versions > 0.9.7, which is why the 0.9.7 tag was used. See "Docker tag for 1.0.0 release" for more info.
Want to check the version being used? Run docker-compose exec flower pip freeze.
This tutorial was tested with Flower version 0.9.7 and Celery 5.2.7.
You can view this sample code in the celery-flower-docker repo on GitHub.
app.py:
import os
from celery import Celery
os.environ.setdefault('CELERY_CONFIG_MODULE', 'celery_config')
app = Celery('app')
app.config_from_envvar('CELERY_CONFIG_MODULE')
@app.task
def add(x, y):
return x + y
celery_config.py:
from os import environ
broker_url = environ['BROKER_URL']
result_backend = environ['RESULT_BACKEND']
Dockerfile:
FROM python:3.10
WORKDIR /usr/src/app
RUN pip install celery[redis]==5.2.7
COPY app.py .
COPY celery_config.py .
Quick test:
$ docker-compose build
$ docker-compose up -d --build
$ docker-compose exec worker python
>>> from app import add
>>> task = add.delay(5, 5)
>>> task.status
'SUCCESS'
>>> task.result
10
You should be able to view the dashboard at http://localhost:5557/.
To run flower behind Nginx, first add Nginx to the Docker Compose config:
version: '3.7'
services:
redis:
image: redis
expose:
- 6379
worker:
build:
context: .
dockerfile: Dockerfile
command: ['celery', '-A', 'app.app', 'worker', '-l', 'info']
environment:
- BROKER_URL=redis://redis:6379
- RESULT_BACKEND=redis://redis:6379
depends_on:
- redis
flower:
image: mher/flower:0.9.7
command: ['flower', '--broker=redis://redis:6379', '--port=5555']
expose: # new
- 5555
depends_on:
- redis
# new
nginx:
image: nginx:latest
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
ports:
- 80:80
depends_on:
- flower
nginx.conf:
events {}
http {
server {
listen 80;
# server_name your.server.url;
location / {
proxy_pass http://flower:5555;
proxy_set_header Host $host;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
}
Quick test:
$ docker-compose down
$ docker-compose build
$ docker-compose up -d --build
$ docker-compose exec worker python
>>> from app import add
>>> task = add.delay(7, 7)
>>> task.status
'SUCCESS'
>>> task.result
14
This time the dashboard should be viewable at http://localhost/.
To add basic authentication, first create a htpasswd file. For example:
$ htpasswd -c htpasswd michael
Next, add another volume to the nginx
service to mount the htpasswd from the host to /etc/nginx/.htpasswd in the container:
nginx:
image: nginx:latest
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- ./htpasswd:/etc/nginx/.htpasswd # new
ports:
- 80:80
depends_on:
- flower
Finally, to protect the "/" route, add auth_basic and auth_basic_user_file directives to the location block:
events {}
http {
server {
listen 80;
# server_name your.server.url;
location / {
proxy_pass http://flower:5555;
proxy_set_header Host $host;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
}
}
}
For more on setting up basic authentication with Nginx, review the Restricting Access with HTTP Basic Authentication guide.
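Under the hood, HTTP Basic auth just base64-encodes user:password into the Authorization header, and Nginx checks the decoded pair against the htpasswd file. A small sketch of the header value a browser sends (the credentials here are illustrative):

```python
import base64

def basic_auth_header(user, password):
    """Value a browser puts in the Authorization header for HTTP Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("michael", "s3cret"))
```

Note that base64 is an encoding, not encryption, which is why Basic auth should only be used over HTTPS in production.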
Final test:
$ docker-compose down
$ docker-compose build
$ docker-compose up -d --build
$ docker-compose exec worker python
>>> from app import add
>>> task = add.delay(9, 9)
>>> task.status
'SUCCESS'
>>> task.result
18
Navigate to http://localhost/ again. This time you should be prompted to enter your username and password.
Original article source at: https://testdriven.io
1667919600
Note: Please consider using the official NGINX Ansible role from NGINX, Inc.
Installs Nginx on RedHat/CentOS, Debian/Ubuntu, Archlinux, FreeBSD or OpenBSD servers.
This role installs and configures the latest version of Nginx from the Nginx yum repository (on RedHat-based systems), apt (on Debian-based systems), pacman (Archlinux), pkgng (on FreeBSD systems) or pkg_add (on OpenBSD systems). You will likely need to do extra setup work after this role has installed Nginx, like adding your own [virtualhost].conf file inside /etc/nginx/conf.d/
, describing the location and options to use for your particular website.
None.
Available variables are listed below, along with default values (see defaults/main.yml
):
nginx_listen_ipv6: true
Whether or not to listen on IPv6 (applied to all vhosts managed by this role).
nginx_vhosts: []
A list of vhost definitions (server blocks) for Nginx virtual hosts. Each entry will create a separate config file named by server_name
. If left empty, you will need to supply your own virtual host configuration. See the commented example in defaults/main.yml
for available server options. If you have a large number of customizations required for your server definition(s), you're likely better off managing the vhost configuration file yourself, leaving this variable set to []
.
nginx_vhosts:
- listen: "443 ssl http2"
server_name: "example.com"
server_name_redirect: "www.example.com"
root: "/var/www/example.com"
index: "index.php index.html index.htm"
error_page: ""
access_log: ""
error_log: ""
state: "present"
template: "{{ nginx_vhost_template }}"
filename: "example.com.conf"
extra_parameters: |
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
An example of a fully-populated nginx_vhosts entry, using a | to declare a block of syntax for the extra_parameters.
Please take note of the indentation in the above block. The first line should be a normal 2-space indent. All other lines should be indented normally relative to that line. In the generated file, the entire block will be 4-space indented. This style will ensure the config file is indented correctly.
- listen: "80"
server_name: "example.com www.example.com"
return: "301 https://example.com$request_uri"
filename: "example.com.80.conf"
An example of a secondary vhost which will redirect to the one shown above.
Note: The filename defaults to the first domain in server_name. If you have two vhosts with the same domain (e.g. a redirect), you need to manually set the filename so the second one doesn't override the first.
nginx_remove_default_vhost: false
Whether to remove the 'default' virtualhost configuration supplied by Nginx. Useful if you want the base / URL to be directed at one of your own virtual hosts configured in a separate .conf file.
nginx_upstreams: []
If you are configuring Nginx as a load balancer, you can define one or more upstream sets using this variable. In addition to defining at least one upstream, you would need to configure one of your server blocks to proxy requests through the defined upstream (e.g. proxy_pass http://myapp1;). See the commented example in defaults/main.yml for more information.
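As a hedged illustration of pairing the two variables (the key names follow the commented example in defaults/main.yml; the upstream name myapp1 and the server addresses are placeholders):

```yaml
nginx_upstreams:
  - name: myapp1
    strategy: "ip_hash"        # optional load-balancing strategy
    keepalive: "16"            # optional number of idle keepalive connections
    servers:
      - "srv1.example.com"
      - "srv2.example.com weight=3"
      - "srv3.example.com"

nginx_vhosts:
  - listen: "80"
    server_name: "example.com"
    extra_parameters: |
      location / {
        proxy_pass http://myapp1;
      }
```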
nginx_user: "nginx"
The user under which Nginx will run. Defaults to nginx for RedHat, www-data for Debian, and www on FreeBSD and OpenBSD.
nginx_worker_processes: "{{ ansible_processor_vcpus|default(ansible_processor_count) }}"
nginx_worker_connections: "1024"
nginx_multi_accept: "off"
nginx_worker_processes should be set to the number of cores present on your machine (if the default is incorrect, find this number with grep processor /proc/cpuinfo | wc -l). nginx_worker_connections is the number of connections per process. Set this higher to handle more simultaneous connections (and remember that a connection will be used for as long as the keepalive timeout duration for every client!). You can set nginx_multi_accept to on if you want Nginx to accept all connections immediately.
nginx_error_log: "/var/log/nginx/error.log warn"
nginx_access_log: "/var/log/nginx/access.log main buffer=16k flush=2m"
Configuration of the default error and access logs. Set to off to disable a log entirely.
nginx_sendfile: "on"
nginx_tcp_nopush: "on"
nginx_tcp_nodelay: "on"
TCP connection options. See this blog post for more information on these directives.
nginx_keepalive_timeout: "65"
nginx_keepalive_requests: "100"
Nginx keepalive settings. Timeout should be set higher (10s+) if you have more polling-style traffic (AJAX-powered sites especially), or lower (<10s) if you have a site where most users visit a few pages and don't send any further requests.
nginx_server_tokens: "on"
Nginx server_tokens settings. Controls whether nginx responds with its version in HTTP headers. Set to "off" to disable.
nginx_client_max_body_size: "64m"
This value determines the largest file upload possible, as uploads are passed through Nginx before hitting a backend like php-fpm. If you get an error like client intended to send too large body, it means this value is set too low.
nginx_server_names_hash_bucket_size: "64"
If you have many server names, or have very long server names, you might get an Nginx error on startup requiring this value to be increased.
nginx_proxy_cache_path: ""
Set as the proxy_cache_path directive in the nginx.conf file. By default, this will not be configured (if left as an empty string), but if you wish to use Nginx as a reverse proxy, you can set this to a valid value (e.g. "/var/cache/nginx keys_zone=cache:32m") to use Nginx's cache (further proxy configuration can be done in individual server configurations).
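A minimal sketch of putting the cache to use, assuming the keys_zone name cache from the example value above (the backend address and vhost are placeholders):

```yaml
nginx_proxy_cache_path: "/var/cache/nginx keys_zone=cache:32m"

nginx_vhosts:
  - listen: "80"
    server_name: "assets.example.com"
    extra_parameters: |
      location / {
        proxy_cache cache;             # references the keys_zone defined above
        proxy_cache_valid 200 10m;     # cache successful responses for 10 minutes
        proxy_pass http://127.0.0.1:8080;
      }
```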
nginx_extra_http_options: ""
Extra lines to be inserted in the top-level http block in nginx.conf. The value should be defined literally (as you would insert it directly in nginx.conf, adhering to the Nginx configuration syntax - such as ; for line termination, etc.), for example:
nginx_extra_http_options: |
proxy_buffering off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
See the template in templates/nginx.conf.j2 for more details on the placement.
nginx_extra_conf_options: ""
Extra lines to be inserted at the top of nginx.conf. The value should be defined literally (as you would insert it directly in nginx.conf, adhering to the Nginx configuration syntax - such as ; for line termination, etc.), for example:
nginx_extra_conf_options: |
worker_rlimit_nofile 8192;
See the template in templates/nginx.conf.j2 for more details on the placement.
nginx_log_format: |-
'$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"'
Configures Nginx's log_format options.
nginx_default_release: ""
(For Debian/Ubuntu only) Allows you to set a different repository for the installation of Nginx. As an example, if you are running Debian's wheezy release, and want to get a newer version of Nginx, you can install the wheezy-backports repository and set that value here, and Ansible will use that as the -t option while installing Nginx.
nginx_ppa_use: false
nginx_ppa_version: stable
(For Ubuntu only) Allows you to use the official Nginx PPA instead of the system's package. You can set the version to stable or development.
nginx_yum_repo_enabled: true
(For RedHat/CentOS only) Set this to false to disable the installation of the nginx yum repository. This could be necessary if you want the default OS stable packages, or if you use Satellite.
nginx_service_state: started
nginx_service_enabled: yes
By default, this role will ensure Nginx is running and enabled at boot after Nginx is configured. You can use these variables to override this behavior if you are installing in a container or if further control over the service state is required.
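For example, a container image build where no init system is running might override both variables (values per the variable descriptions above):

```yaml
# group_vars/containers.yml -- don't start or enable the service during the build
nginx_service_state: stopped
nginx_service_enabled: no
```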
If you can't customize via variables because an option isn't exposed, you can override the template used to generate the virtualhost configuration files or the nginx.conf file.
nginx_conf_template: "nginx.conf.j2"
nginx_vhost_template: "vhost.j2"
If necessary you can also set the template on a per vhost basis.
nginx_vhosts:
- listen: "80 default_server"
server_name: "site1.example.com"
root: "/var/www/site1.example.com"
index: "index.php index.html index.htm"
template: "{{ playbook_dir }}/templates/site1.example.com.vhost.j2"
- server_name: "site2.example.com"
root: "/var/www/site2.example.com"
index: "index.php index.html index.htm"
template: "{{ playbook_dir }}/templates/site2.example.com.vhost.j2"
You can either copy and modify the provided template, or extend it with Jinja2 template inheritance and override the specific template block you need to change.
Set the nginx_conf_template to point to a template file in your playbook directory.
nginx_conf_template: "{{ playbook_dir }}/templates/nginx.conf.j2"
Create the child template in the path you configured above and extend the geerlingguy.nginx template file relative to your playbook.yml.
{% extends 'roles/geerlingguy.nginx/templates/nginx.conf.j2' %}
{% block http_gzip %}
gzip on;
gzip_proxied any;
gzip_static on;
gzip_http_version 1.0;
gzip_disable "MSIE [1-6]\.";
gzip_vary on;
gzip_comp_level 6;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/javascript
application/x-javascript
application/json
application/xml
application/xml+rss
application/xhtml+xml
application/x-font-ttf
application/x-font-opentype
image/svg+xml
image/x-icon;
gzip_buffers 16 8k;
gzip_min_length 512;
{% endblock %}
Dependencies: None.
- hosts: server
roles:
- { role: geerlingguy.nginx }
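A slightly fuller playbook combining the role with some of the variables described above (the domain, document root, and values are illustrative):

```yaml
---
- hosts: server
  become: true
  vars:
    nginx_remove_default_vhost: true
    nginx_client_max_body_size: "128m"
    nginx_vhosts:
      - listen: "80"
        server_name: "example.com"
        root: "/var/www/example.com"
        index: "index.html index.htm"
  roles:
    - geerlingguy.nginx
```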
MIT / BSD
This role was created in 2014 by Jeff Geerling, author of Ansible for DevOps.
Author: geerlingguy
Source Code: https://github.com/geerlingguy/ansible-role-nginx
License: MIT license
1667677260
Apache APISIX is a dynamic, real-time, high-performance API Gateway.
APISIX API Gateway provides rich traffic management features such as load balancing, dynamic upstream, canary release, circuit breaking, authentication, observability, and more.
You can use APISIX API Gateway to handle traditional north-south traffic, as well as east-west traffic between services. It can also be used as a k8s ingress controller.
The technical architecture of Apache APISIX:
#ApacheAPISIX
You can use APISIX API Gateway as a traffic entrance to process all business data, including dynamic routing, dynamic upstream, dynamic certificates, A/B testing, canary release, blue-green deployment, limit rate, defense against malicious attacks, metrics, monitoring alarms, service observability, service governance, etc.
APISIX's main features include:
- All platforms: cloud-native and platform-agnostic, with no vendor lock-in; runs anywhere from bare metal to Kubernetes.
- Multi protocols: proxies TCP/UDP, MQTT (load balancing by client_id, supporting both MQTT 3.1.* and 5.0), gRPC, and HTTP(S) traffic.
- Full dynamic: hot-reload plugins and configuration; rewrite the host, uri, schema, method, and headers of the request before sending it to the upstream.
- Fine-grained routing: use cookie, args, etc. as routing conditions to implement canary release, A/B testing, and more, e.g. {"arg_age", ">", 24}.
- Security: authentication plugins, IP allow/deny lists, and CSRF protection via the Double Submit Cookie pattern to protect your API from CSRF attacks.
- OPS friendly: use the allow_admin field in conf/config.yaml to specify a list of IPs that are allowed to call the Admin API. Also, note that the Admin API uses key auth to verify the identity of the caller; the admin_key field in conf/config.yaml needs to be modified before deployment to ensure security.
- Highly scalable: custom plugins can run in the rewrite, access, header filter, body filter, and log phases, and can also hook the balancer phase to implement custom load-balancing logic.
- Multi-language support: write plugins in other languages via RPC and Wasm.
- Serverless: invoke functions dynamically in each phase of APISIX.
Installation
Please refer to install documentation.
Getting started
The getting started guide is a great way to learn the basics of APISIX. Just follow the steps in Getting Started.
Further, you can follow the documentation to try more plugins.
Admin API
Apache APISIX provides REST Admin API to dynamically control the Apache APISIX cluster.
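As a sketch of what a dynamic Admin API call looks like, the following creates a route with a vars condition like the one mentioned above. The port 9180 and the X-API-KEY value are APISIX's shipped defaults; the upstream node is a placeholder, so adjust all three for your deployment:

```shell
# Create route 1: match /index.html when the age query argument is over 24,
# and forward matching requests to a round-robin upstream.
curl "http://127.0.0.1:9180/apisix/admin/routes/1" \
  -H "X-API-KEY: edd1c9f034335f136f87ad84b625c8f1" -X PUT -d '
{
  "uri": "/index.html",
  "vars": [["arg_age", ">", 24]],
  "upstream": {
    "type": "roundrobin",
    "nodes": { "127.0.0.1:1980": 1 }
  }
}'
```

These commands require a running APISIX instance, so they are shown as a transcript rather than a self-checking script.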
Plugin development
You can refer to the plugin development guide and the sample plugin example-plugin's code implementation. Reading the plugin concept documentation will help you learn more about plugins.
For more documents, please refer to Apache APISIX Documentation site
Using AWS's eight-core server, APISIX's QPS reaches 140,000 with a latency of only 0.2 ms.
Benchmark script has been open sourced, welcome to try and contribute.
APISIX also works perfectly in AWS graviton3 C7g.
Visit here to generate the Contributor Over Time chart.
A wide variety of companies and organizations use APISIX API Gateway for research, production and commercial product, below are some of them:
APISIX enriches the CNCF API Gateway Landscape.
Inspired by Kong and Orange.
Author: Apache
Source Code: https://github.com/apache/apisix
License: Apache-2.0 license
1667637240
Kong or Kong API Gateway is a cloud-native, platform-agnostic, scalable API Gateway distinguished for its high performance and extensibility via plugins.
By providing functionality for proxying, routing, load balancing, health checking, authentication (and more), Kong serves as the central layer for orchestrating microservices or conventional API traffic with ease.
Kong runs natively on Kubernetes thanks to its official Kubernetes Ingress Controller.
Let’s test drive Kong by adding authentication to an API in under 5 minutes.
We suggest using the docker-compose distribution via the instructions below, but there is also a docker installation procedure if you’d prefer to run the Kong API Gateway in DB-less mode.
Whether you’re running in the cloud, on bare metal, or using containers, you can find every supported distribution on our official installation page.
To start, clone the Docker repository and navigate to the compose folder.
$ git clone https://github.com/Kong/docker-kong
$ cd compose/
Start the Gateway stack using:
$ KONG_DATABASE=postgres docker-compose --profile database up
The Gateway will be available on the following ports on localhost:
:8000, on which Kong listens for incoming HTTP traffic from your clients and forwards it to your upstream services; and :8001, on which the Admin API used to configure Kong listens.
Next, follow the quick start guide to tour the Gateway features.
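To make the "authentication in under 5 minutes" claim concrete, here is a hedged transcript against a locally running Gateway using standard Admin API endpoints; the service URL, route path, consumer name, and key are all placeholders:

```shell
# Register an upstream service and expose it on a route
curl -i -X POST http://localhost:8001/services \
  --data name=example-service \
  --data url=http://httpbin.org/anything

curl -i -X POST http://localhost:8001/services/example-service/routes \
  --data name=example-route \
  --data paths[]=/example

# Require API keys on the service, then create a consumer with a key
curl -i -X POST http://localhost:8001/services/example-service/plugins \
  --data name=key-auth

curl -i -X POST http://localhost:8001/consumers --data username=demo
curl -i -X POST http://localhost:8001/consumers/demo/key-auth --data key=my-secret-key

# Requests through the proxy port now require the key
curl -i http://localhost:8000/example -H 'apikey: my-secret-key'
```

Without the apikey header, the final request is rejected with a 401, which is the authentication behavior the quick start demonstrates.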
By centralizing common API functionality across all your organization's services, the Kong API Gateway creates more freedom for engineering teams to focus on the challenges that matter most.
The top Kong features include:
Plugins provide advanced functionality that extends the use of the Gateway. Many of the Kong Inc. and community-developed plugins like AWS Lambda, Correlation ID, and Response Transformer are showcased at the Plugin Hub.
Contribute to the Plugin Hub and ensure your next innovative idea is published and available to the broader community!
We ❤️ pull requests, and we’re continually working hard to make it as easy as possible for developers to contribute. Before beginning development with the Kong API Gateway, please familiarize yourself with the following developer resources:
Use the Plugin Development Guide for building new and creative plugins, or browse the online version of Kong's source code documentation in the Plugin Development Kit (PDK) Reference. Developers can build plugins in Lua, Go or JavaScript.
Please see the Changelog for more details about a given release. The SemVer Specification is followed when versioning Gateway releases.
Kong Inc. offers commercial subscriptions that enhance the Kong API Gateway in a variety of ways. Customers of Kong's Konnect Cloud subscription take advantage of additional gateway functionality, commercial support, and access to Kong's managed (SaaS) control plane platform. The Konnect Cloud platform features include real-time analytics, a service catalog, developer portals, and so much more! Get started with Konnect Cloud.
Installation | Documentation | Discussions | Forum | Blog | Builds
Author: Kong
Source Code: https://github.com/Kong/kong
License: Apache-2.0 license
1667552940
Ansible NGINX Role
This role installs NGINX Open Source, NGINX Plus, or the NGINX Amplify agent on your target host.
Note: This role is still in active development. There may be unidentified issues and the role variables may change as development continues.
If you wish to install NGINX Plus using this role, you will need to obtain an NGINX Plus license beforehand. You do not need to do anything beforehand if you want to install NGINX OSS.
This role is developed and tested with maintained versions of Ansible core (above 2.12).
When using Ansible core, you will also need to install the following collections:
---
collections:
- name: ansible.posix
version: 1.4.0
- name: community.general
version: 5.5.0
- name: community.crypto # Only required if you plan to install NGINX Plus
version: 2.5.0
- name: community.docker # Only required if you plan to use Molecule (see below)
version: 3.1.0
Note: You can alternatively install the Ansible community distribution (what is known as the "old" Ansible) if you don't want to manage individual collections.
You will need to run this role as a root user using Ansible's become parameter. Make sure you have set up the appropriate permissions on your target hosts.
Instructions on how to install Ansible can be found on the Ansible website.
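A minimal sketch of invoking the role with become, assuming the role has been installed from Ansible Galaxy under the name nginxinc.nginx:

```yaml
---
- hosts: all
  become: true          # the role must run as root
  roles:
    - role: nginxinc.nginx
```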
Molecule is used to test the role (Ansible core 2.11+ and Molecule 3.3+). To run the NGINX Plus Molecule tests, copy your NGINX Plus license certificate and key to the files/license folder. You can alternatively add your NGINX Plus repository certificate and key to the local environment. Run the following commands to export these files as base64-encoded variables and execute the Molecule tests:
export NGINX_CRT=$( cat <path to your certificate file> | base64 )
export NGINX_KEY=$( cat <path to your key file> | base64 )
molecule test -s plus
Use ansible-galaxy install nginxinc.nginx to install the latest stable release of the role on your system. Alternatively, if you have already installed the role, use ansible-galaxy install -f nginxinc.nginx to update the role to the latest release.
Use git clone https://github.com/nginxinc/ansible-role-nginx.git to pull the latest edge commit of the role from GitHub.
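The role and the collections it depends on (listed earlier) can be pulled in one step with a requirements file; the collection versions below simply repeat the ones shown above:

```yaml
# requirements.yml -- install with: ansible-galaxy install -r requirements.yml
roles:
  - name: nginxinc.nginx
collections:
  - name: ansible.posix
    version: 1.4.0
  - name: community.general
    version: 5.5.0
```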
The NGINX Ansible role supports all platforms supported by NGINX Open Source, NGINX Plus, and the NGINX Amplify agent:
Alpine:
- 3.13
- 3.14
- 3.15
- 3.16
Amazon Linux:
- 2
CentOS:
- 7.4+
Debian:
- buster (10)
- bullseye (11)
Red Hat:
- 7.4+
- 8
- 9
SUSE/SLES:
- 12
- 15
Ubuntu:
- bionic (18.04)
- focal (20.04)
- impish (21.10)
- jammy (22.04)
Alpine:
- 3.13
- 3.14
- 3.15
- 3.16
Amazon Linux 2:
- any
CentOS:
- 7.4+
Debian:
- buster (10)
- bullseye (11)
FreeBSD:
- 12.1+
- 13
Oracle Linux:
- 7.4+
Red Hat:
- 7.4+
- 8
- 9
SUSE/SLES:
- 12
- 15
Ubuntu:
- bionic (18.04)
- focal (20.04)
- jammy (22.04)
Amazon Linux 2:
- any
Debian:
- buster (10)
- bullseye (11)
Red Hat:
- 8
Ubuntu:
- bionic
- focal
Note: You can also use this role to compile NGINX Open Source from source, install NGINX Open Source on compatible yet unsupported platforms, or install NGINX Open Source on BSD systems at your own risk.
This role has multiple variables. The descriptions and defaults for all these variables can be found in the defaults/main/
folder in the following files:
Name | Description |
---|---|
main.yml | NGINX installation variables |
amplify.yml | NGINX Amplify agent installation variables |
bsd.yml | BSD installation variables |
logrotate.yml | Logrotate configuration variables |
selinux.yml | SELinux configuration variables |
systemd.yml | Systemd configuration variables |
Similarly, descriptions and defaults for preset variables can be found in the vars/
folder in the following files:
Name | Description |
---|---|
main.yml | List of supported NGINX platforms, modules, and Linux installation variables |
Working functional playbook examples can be found in the molecule/
folder in the following files:
Name | Description |
---|---|
default/converge.yml | Install a specific version of NGINX and set up logrotate |
downgrade/converge.yml | Downgrade to a specific version of NGINX |
downgrade_plus/converge.yml | Downgrade to a specific version of NGINX Plus |
module/converge.yml | Install various NGINX supported modules |
plus/converge.yml | Install NGINX Plus and various NGINX Plus supported modules |
source/converge.yml | Install NGINX from source |
uninstall/converge.yml | Uninstall NGINX |
uninstall_plus/converge.yml | Uninstall NGINX Plus |
upgrade/converge.yml | Upgrade NGINX |
upgrade_plus/converge.yml | Upgrade NGINX Plus |
Do note that if you install this repository via Ansible Galaxy, you will have to change the role variable in the sample playbooks from ansible-role-nginx to nginxinc.nginx.
You can find the Ansible NGINX Core collection of roles to install and configure NGINX Open Source, NGINX Plus, and NGINX App Protect here.
You can find the Ansible NGINX configuration role to configure NGINX here.
You can find the Ansible NGINX App Protect role to install and configure NGINX App Protect WAF and NGINX App Protect DoS here.
You can find the Ansible NGINX Unit role to install NGINX Unit here.
Author: nginxinc
Source Code: https://github.com/nginxinc/ansible-role-nginx
License: Apache-2.0 license