Goproxy: A Global Proxy for Go Modules

GOPROXY

A global proxy for Go modules. See: https://goproxy.io

Requirements

Goproxy invokes the local go command to answer requests.
The default cacheDir is GOPATH; you can change it to suit your setup.

Build

git clone https://github.com/goproxyio/goproxy.git
cd goproxy
make

Getting Started

Proxy mode

./bin/goproxy -listen=0.0.0.0:80 -cacheDir=/tmp/test

If you run go get -v pkg on the proxy machine itself, set a new GOPATH that is different from the GOPATH the proxy uses, or the request may deadlock. See the file test/get_test.sh.
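
For example, a minimal sketch of the idea behind that script (the GOPATH directory and module path here are illustrative):

export GOPATH=/tmp/test-gopath   # a fresh GOPATH, separate from the one goproxy serves from
export GOPROXY=http://localhost
go get -v golang.org/x/text      # example package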

Router mode

./bin/goproxy -listen=0.0.0.0:80 -proxy https://goproxy.io

Use the -proxy flag to switch to "Router mode", which implements a route filter that routes private modules directly and public modules through the upstream proxy.

                                         direct
                      +----------------------------------> private repo
                      |
                 match|pattern
                      |
                  +---+---+           +----------+
go get  +-------> |goproxy| +-------> |goproxy.io| +---> golang.org/x/net
                  +-------+           +----------+
                 router mode           proxy mode

In Router mode, use the -exclude flag to set a pattern; requests for module paths matching the pattern go directly to their repos. Patterns are matched against the full module path, not only the host component.

./bin/goproxy -listen=0.0.0.0:80 -cacheDir=/tmp/test -proxy https://goproxy.io -exclude "*.corp.example.com,rsc.io/private"

Use docker image

docker run -d -p80:8081 goproxy/goproxy

Use the -v flag to persist the proxy module data (change cacheDir to your own directory):

docker run -d -p80:8081 -v cacheDir:/go goproxy/goproxy

Docker Compose

docker-compose up
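
The repository ships a compose file; here is a minimal docker-compose.yml sketch equivalent to the docker run examples above (the volume path is an example):

# docker-compose.yml
version: "3"
services:
  goproxy:
    image: goproxy/goproxy
    ports:
      - "80:8081"
    volumes:
      - ./cacheDir:/go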

Appendix

  1. Set export GOPROXY=http://localhost to enable your goproxy.
  2. Set export GOPROXY=direct to disable it.

Download Details:

Author: Goproxyio
Source Code: https://github.com/goproxyio/goproxy 
License: MIT license

#go #golang #module #proxy 


A Fast Reverse Proxy to Help You Expose A Local Server Behind A NAT

FRP

A fast reverse proxy to help you expose a local server behind a NAT or firewall to the internet.

What is frp?

frp is a fast reverse proxy to help you expose a local server behind a NAT or firewall to the Internet. As of now, it supports TCP and UDP, as well as HTTP and HTTPS protocols, where requests can be forwarded to internal services by domain name.

frp also has a P2P connect mode.

Development Status

frp is under development. Try the latest release version in the master branch, or use the dev branch for the version in development.

We are working on the v2 version, doing some code refactoring and improvements. It won't be compatible with v1.

We will switch v0 to v1 at the right time and only accept bug fixes and improvements instead of big feature requirements.

Architecture

architecture

Example Usage

Firstly, download the latest programs from Release page according to your operating system and architecture.

Put frps and frps.ini onto your server A with public IP.

Put frpc and frpc.ini onto your server B in LAN (that can't be connected from public Internet).

Access your computer in LAN by SSH

Modify frps.ini on server A and set the bind_port that frp clients connect to:

# frps.ini
[common]
bind_port = 7000

Start frps on server A:

./frps -c ./frps.ini

On server B, modify frpc.ini to put in your frps server public IP as server_addr field:

# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000

[ssh]
type = tcp
local_ip = 127.0.0.1
local_port = 22
remote_port = 6000

Note that local_port (listened on the client) and remote_port (exposed on the server) are for the traffic that goes in and out of the frp system, whereas server_port is used for communication between frpc and frps.

Start frpc on server B:

./frpc -c ./frpc.ini

From another machine, SSH to server B like this (assuming that username is test):

ssh -oPort=6000 test@x.x.x.x

Visit your web service in LAN by custom domains

Sometimes we want to expose a local web service behind a NAT network to others for testing with our own domain name, but unfortunately we can't resolve a domain name to a local IP.

However, we can expose an HTTP(S) service using frp.

Modify frps.ini, set the vhost HTTP port to 8080:

# frps.ini
[common]
bind_port = 7000
vhost_http_port = 8080

Start frps:

./frps -c ./frps.ini

Modify frpc.ini and set server_addr to the IP address of the remote frps server. The local_port is the port of your web service:

# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000

[web]
type = http
local_port = 80
custom_domains = www.example.com

Start frpc:

./frpc -c ./frpc.ini

Resolve the A record of www.example.com to the public IP of the remote frps server, or add a CNAME record pointing to your origin domain.

Now visit your local web service using the URL http://www.example.com:8080.

Forward DNS query request

Modify frps.ini:

# frps.ini
[common]
bind_port = 7000

Start frps:

./frps -c ./frps.ini

Modify frpc.ini and set server_addr to the IP address of the remote frps server, forward DNS query request to Google Public DNS server 8.8.8.8:53:

# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000

[dns]
type = udp
local_ip = 8.8.8.8
local_port = 53
remote_port = 6000

Start frpc:

./frpc -c ./frpc.ini

Test DNS resolution using dig command:

dig @x.x.x.x -p 6000 www.google.com

Forward Unix domain socket

Expose a Unix domain socket (e.g. the Docker daemon socket) as TCP.

Configure frps same as above.

Start frpc with configuration:

# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000

[unix_domain_socket]
type = tcp
remote_port = 6000
plugin = unix_domain_socket
plugin_unix_path = /var/run/docker.sock

Test: Get Docker version using curl:

curl http://x.x.x.x:6000/version

Expose a simple HTTP file server

Browse files stored in the LAN from the public Internet.

Configure frps same as above.

Start frpc with configuration:

# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000

[test_static_file]
type = tcp
remote_port = 6000
plugin = static_file
plugin_local_path = /tmp/files
plugin_strip_prefix = static
plugin_http_user = abc
plugin_http_passwd = abc

Visit http://x.x.x.x:6000/static/ from your browser and specify correct user and password to view files in /tmp/files on the frpc machine.

Enable HTTPS for local HTTP(S) service

This example uses the https2http plugin; you may substitute https2https for the plugin and point plugin_local_addr to an HTTPS endpoint.

Start frpc with configuration:

# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000

[test_https2http]
type = https
custom_domains = test.example.com

plugin = https2http
plugin_local_addr = 127.0.0.1:80
plugin_crt_path = ./server.crt
plugin_key_path = ./server.key
plugin_host_header_rewrite = 127.0.0.1
plugin_header_X-From-Where = frp

Visit https://test.example.com.

Expose your service privately

Some services will be at risk if exposed directly to the public network. With STCP (secret TCP) mode, a preshared key is needed to access the service from another client.

Configure frps same as above.

Start frpc on machine B with the following config. This example exposes the SSH service (port 22); note the sk field for the preshared key, and that the remote_port field is removed here:

# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000

[secret_ssh]
type = stcp
sk = abcdefg
local_ip = 127.0.0.1
local_port = 22

Start another frpc (typically on another machine C) with the following config to access the SSH service with a security key (sk field):

# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000

[secret_ssh_visitor]
type = stcp
role = visitor
server_name = secret_ssh
sk = abcdefg
bind_addr = 127.0.0.1
bind_port = 6000

On machine C, connect to SSH on machine B, using this command:

ssh -oPort=6000 127.0.0.1

P2P Mode

xtcp is designed for transmitting large amounts of data directly between clients. An frps server is still needed, as P2P here only refers to the actual data transmission.

Note that it can't penetrate all types of NAT devices. You might want to fall back to stcp if xtcp doesn't work.

In frps.ini configure a UDP port for xtcp:

# frps.ini
bind_udp_port = 7001

Start frpc on machine B to expose the SSH port. Note that the remote_port field is removed:

# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000

[p2p_ssh]
type = xtcp
sk = abcdefg
local_ip = 127.0.0.1
local_port = 22

Start another frpc (typically on another machine C) with the config to connect to SSH using P2P mode:

# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000

[p2p_ssh_visitor]
type = xtcp
role = visitor
server_name = p2p_ssh
sk = abcdefg
bind_addr = 127.0.0.1
bind_port = 6000

On machine C, connect to SSH on machine B, using this command:

ssh -oPort=6000 127.0.0.1

Features

Configuration Files

Read the full example configuration files to find out even more features not described here.

Full configuration file for frps (Server)

Full configuration file for frpc (Client)

Using Environment Variables

Environment variables can be referenced in the configuration file, using Go's standard format:

# frpc.ini
[common]
server_addr = {{ .Envs.FRP_SERVER_ADDR }}
server_port = 7000

[ssh]
type = tcp
local_ip = 127.0.0.1
local_port = 22
remote_port = {{ .Envs.FRP_SSH_REMOTE_PORT }}

With the config above, variables can be passed into frpc program like this:

export FRP_SERVER_ADDR="x.x.x.x"
export FRP_SSH_REMOTE_PORT="6000"
./frpc -c ./frpc.ini

frpc will render the configuration file template using OS environment variables. Remember to prefix your references with .Envs.

Splitting Configuration Into Different Files

You can split multiple proxy configs into different files and include them in the main file.

# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000
includes = ./confd/*.ini

# ./confd/test.ini
[ssh]
type = tcp
local_ip = 127.0.0.1
local_port = 22
remote_port = 6000

Dashboard

Check frp's status and proxies' statistics through the Dashboard.

Configure a port for dashboard to enable this feature:

[common]
dashboard_port = 7500
# dashboard's username and password are both optional
dashboard_user = admin
dashboard_pwd = admin

Then visit http://[server_addr]:7500 to see the dashboard, with username and password both being admin.

Additionally, you can serve the dashboard over HTTPS using your domain's wildcard or normal SSL certificate:

[common]
dashboard_port = 7500
# dashboard's username and password are both optional
dashboard_user = admin
dashboard_pwd = admin
dashboard_tls_mode = true
dashboard_tls_cert_file = server.crt
dashboard_tls_key_file = server.key

Then visit https://[server_addr]:7500 to see the dashboard in secure HTTPS connection, with username and password both being admin.

dashboard

Admin UI

The Admin UI helps you check and manage frpc's configuration.

Configure an address for admin UI to enable this feature:

[common]
admin_addr = 127.0.0.1
admin_port = 7400
admin_user = admin
admin_pwd = admin

Then visit http://127.0.0.1:7400 to see admin UI, with username and password both being admin.

Monitor

When dashboard is enabled, frps will save monitor data in cache. It will be cleared after process restart.

Prometheus is also supported.

Prometheus

Enable dashboard first, then configure enable_prometheus = true in frps.ini.

http://{dashboard_addr}/metrics will provide prometheus monitor data.
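
For example, a minimal frps.ini sketch (port values are examples):

# frps.ini
[common]
bind_port = 7000
dashboard_port = 7500
enable_prometheus = true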

Authenticating the Client

There are 2 authentication methods to authenticate frpc with frps.

You can decide which one to use by configuring authentication_method under [common] in frpc.ini and frps.ini.

Configuring authenticate_heartbeats = true under [common] will use the configured authentication method to add and validate authentication on every heartbeat between frpc and frps.

Configuring authenticate_new_work_conns = true under [common] will do the same for every new work connection between frpc and frps.

Token Authentication

When specifying authentication_method = token under [common] in frpc.ini and frps.ini, token-based authentication will be used.

Make sure to specify the same token in the [common] section in frps.ini and frpc.ini for frpc to pass frps validation.
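
A minimal sketch, assuming an example token value:

# frps.ini
[common]
authentication_method = token
token = example_token_value

# frpc.ini
[common]
authentication_method = token
token = example_token_value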

OIDC Authentication

When specifying authentication_method = oidc under [common] in frpc.ini and frps.ini, OIDC-based authentication will be used.

OIDC stands for OpenID Connect, and the flow used is called Client Credentials Grant.

To use this authentication type - configure frpc.ini and frps.ini as follows:

# frps.ini
[common]
authentication_method = oidc
oidc_issuer = https://example-oidc-issuer.com/
oidc_audience = https://oidc-audience.com/.default
# frpc.ini
[common]
authentication_method = oidc
oidc_client_id = 98692467-37de-409a-9fac-bb2585826f18 # Replace with OIDC client ID
oidc_client_secret = oidc_secret
oidc_audience = https://oidc-audience.com/.default
oidc_token_endpoint_url = https://example-oidc-endpoint.com/oauth2/v2.0/token

Encryption and Compression

These features are off by default. You can turn on encryption and/or compression:

# frpc.ini
[ssh]
type = tcp
local_port = 22
remote_port = 6000
use_encryption = true
use_compression = true

TLS

frp supports the TLS protocol between frpc and frps since v0.25.0.

For port multiplexing, frp sends a first byte 0x17 to dial a TLS connection.

Configure tls_enable = true in the [common] section of frpc.ini to enable this feature.

To enforce frps to only accept TLS connections - configure tls_only = true in the [common] section in frps.ini. This is optional.

frpc TLS settings (under the [common] section):

tls_enable = true
tls_cert_file = certificate.crt
tls_key_file = certificate.key
tls_trusted_ca_file = ca.crt

frps TLS settings (under the [common] section):

tls_only = true
tls_enable = true
tls_cert_file = certificate.crt
tls_key_file = certificate.key
tls_trusted_ca_file = ca.crt

You will need a root CA cert and at least one SSL/TLS certificate. It can be self-signed or regular (such as Let's Encrypt or another SSL/TLS certificate provider).

If you are using frp via an IP address rather than a hostname, make sure to set the appropriate IP address in the Subject Alternative Name (SAN) area when generating SSL/TLS certificates.

Here is an example:

  • Prepare an openssl config file. It exists at /etc/pki/tls/openssl.cnf on Linux and /System/Library/OpenSSL/openssl.cnf on macOS; you can copy it to the current path, e.g. cp /etc/pki/tls/openssl.cnf ./my-openssl.cnf. If it is not available, you can create it yourself, like:
cat > my-openssl.cnf << EOF
[ ca ]
default_ca = CA_default
[ CA_default ]
x509_extensions = usr_cert
[ req ]
default_bits        = 2048
default_md          = sha256
default_keyfile     = privkey.pem
distinguished_name  = req_distinguished_name
attributes          = req_attributes
x509_extensions     = v3_ca
string_mask         = utf8only
[ req_distinguished_name ]
[ req_attributes ]
[ usr_cert ]
basicConstraints       = CA:FALSE
nsComment              = "OpenSSL Generated Certificate"
subjectKeyIdentifier   = hash
authorityKeyIdentifier = keyid,issuer
[ v3_ca ]
subjectKeyIdentifier   = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints       = CA:true
EOF
  • build ca certificates:
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=example.ca.com" -days 5000 -out ca.crt
  • build frps certificates:
openssl genrsa -out server.key 2048

openssl req -new -sha256 -key server.key \
    -subj "/C=XX/ST=DEFAULT/L=DEFAULT/O=DEFAULT/CN=server.com" \
    -reqexts SAN \
    -config <(cat my-openssl.cnf <(printf "\n[SAN]\nsubjectAltName=DNS:localhost,IP:127.0.0.1,DNS:example.server.com")) \
    -out server.csr

openssl x509 -req -days 365 -sha256 \
	-in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	-extfile <(printf "subjectAltName=DNS:localhost,IP:127.0.0.1,DNS:example.server.com") \
	-out server.crt
  • build frpc certificates:
openssl genrsa -out client.key 2048
openssl req -new -sha256 -key client.key \
    -subj "/C=XX/ST=DEFAULT/L=DEFAULT/O=DEFAULT/CN=client.com" \
    -reqexts SAN \
    -config <(cat my-openssl.cnf <(printf "\n[SAN]\nsubjectAltName=DNS:client.com,DNS:example.client.com")) \
    -out client.csr

openssl x509 -req -days 365 -sha256 \
    -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	-extfile <(printf "subjectAltName=DNS:client.com,DNS:example.client.com") \
	-out client.crt

Hot-Reloading frpc configuration

The admin_addr and admin_port fields are required for enabling HTTP API:

# frpc.ini
[common]
admin_addr = 127.0.0.1
admin_port = 7400

Then run frpc reload -c ./frpc.ini and wait about 10 seconds for frpc to create, update, or remove proxies.

Note that parameters in the [common] section won't be modified, except for 'start'.

You can run frpc verify -c ./frpc.ini before reloading to check for configuration errors.

Get proxy status from client

Use frpc status -c ./frpc.ini to get status of all proxies. The admin_addr and admin_port fields are required for enabling HTTP API.

Only allowing certain ports on the server

allow_ports in frps.ini is used to avoid abuse of ports:

# frps.ini
[common]
allow_ports = 2000-3000,3001,3003,4000-50000

allow_ports consists of specific ports or port ranges (lowest port number, dash -, highest port number), separated by comma ,.

Port Reuse

vhost_http_port and vhost_https_port in frps can share the same port as bind_port. frps will detect each connection's protocol and handle it accordingly.

We would like to allow multiple proxies to bind the same remote port with different protocols in the future.
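
For example, a minimal frps.ini sketch that shares one port between frpc connections and vhost traffic (port values are examples):

# frps.ini
[common]
bind_port = 7000
vhost_http_port = 7000
vhost_https_port = 7000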

Bandwidth Limit

For Each Proxy

# frpc.ini
[ssh]
type = tcp
local_port = 22
remote_port = 6000
bandwidth_limit = 1MB

Set bandwidth_limit in each proxy's configuration to enable this feature. Supported units are MB and KB.

TCP Stream Multiplexing

frp supports TCP stream multiplexing since v0.10.0, similar to HTTP/2 multiplexing, in which all logical connections to the same frpc are multiplexed into the same TCP connection.

You can disable this feature by modifying frps.ini and frpc.ini:

# frps.ini and frpc.ini, must be same
[common]
tcp_mux = false

Support KCP Protocol

KCP is a fast and reliable protocol that can reduce average latency by 30% to 40% and maximum delay by a factor of three, at the cost of 10% to 20% more bandwidth than TCP.

KCP mode uses UDP as the underlying transport. Using KCP in frp:

Enable KCP in frps:

# frps.ini
[common]
bind_port = 7000
# Specify a UDP port for KCP.
kcp_bind_port = 7000

The kcp_bind_port number can be the same number as bind_port, since bind_port field specifies a TCP port.

Configure frpc.ini to use KCP to connect to frps:

# frpc.ini
[common]
server_addr = x.x.x.x
# Same as the 'kcp_bind_port' in frps.ini
server_port = 7000
protocol = kcp

Connection Pooling

By default, frps creates a new frpc connection to the backend service upon a user request. With connection pooling, frps keeps a certain number of pre-established connections, reducing the time needed to establish a connection.

This feature is suitable for a large number of short connections.

Configure the limit of pool count each proxy can use in frps.ini:

# frps.ini
[common]
max_pool_count = 5

Enable connection pooling and set its size in frpc.ini:

# frpc.ini
[common]
pool_count = 1

Load balancing

Load balancing is supported by group.

This feature is currently only available for proxy types tcp, http, and tcpmux.

# frpc.ini
[test1]
type = tcp
local_port = 8080
remote_port = 80
group = web
group_key = 123

[test2]
type = tcp
local_port = 8081
remote_port = 80
group = web
group_key = 123

group_key is used for authentication.

Connections to port 80 will be dispatched to proxies in the same group randomly.

For type tcp, remote_port in the same group should be the same.

For type http, custom_domains, subdomain, locations should be the same.

Service Health Check

The health check feature can help you achieve high availability with load balancing.

Add health_check_type = tcp or health_check_type = http to enable health check.

With health check type tcp, the service port will be pinged (TCPing):

# frpc.ini
[test1]
type = tcp
local_port = 22
remote_port = 6000
# Enable TCP health check
health_check_type = tcp
# TCPing timeout seconds
health_check_timeout_s = 3
# If health check failed 3 times in a row, the proxy will be removed from frps
health_check_max_failed = 3
# A health check every 10 seconds
health_check_interval_s = 10

With health check type http, an HTTP request will be sent to the service and an HTTP 2xx OK response is expected:

# frpc.ini
[web]
type = http
local_ip = 127.0.0.1
local_port = 80
custom_domains = test.example.com
# Enable HTTP health check
health_check_type = http
# frpc will send a GET request to '/status'
# and expect an HTTP 2xx OK response
health_check_url = /status
health_check_timeout_s = 3
health_check_max_failed = 3
health_check_interval_s = 10

Rewriting the HTTP Host Header

By default frp does not modify the tunneled HTTP requests at all as it's a byte-for-byte copy.

However, speaking of web servers and HTTP requests, your web server might rely on the Host HTTP header to determine the website to be accessed. frp can rewrite the Host header when forwarding the HTTP requests, with the host_header_rewrite field:

# frpc.ini
[web]
type = http
local_port = 80
custom_domains = test.example.com
host_header_rewrite = dev.example.com

The HTTP request will have the Host header rewritten to Host: dev.example.com when it reaches the actual web server, although the request from the browser probably has Host: test.example.com.

Setting other HTTP Headers

Similar to Host, you can override other HTTP request headers with proxy type http.

# frpc.ini
[web]
type = http
local_port = 80
custom_domains = test.example.com
host_header_rewrite = dev.example.com
header_X-From-Where = frp

Note that parameter(s) prefixed with header_ will be added to HTTP request headers.

In this example, it will set header X-From-Where: frp in the HTTP request.

Get Real IP

HTTP X-Forwarded-For

This feature is for http proxy only.

You can get user's real IP from HTTP request headers X-Forwarded-For.

Proxy Protocol

frp supports Proxy Protocol to send the user's real IP to local services. It supports all proxy types except UDP.

Here is an example for an https service:

# frpc.ini
[web]
type = https
local_port = 443
custom_domains = test.example.com

# now v1 and v2 are supported
proxy_protocol_version = v2

You can enable Proxy Protocol support in nginx to expose user's real IP in HTTP header X-Real-IP, and then read X-Real-IP header in your web service for the real IP.
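
A hedged nginx sketch of that setup (directive names are standard nginx; addresses, ports, and domain are examples; certificate directives omitted for brevity):

server {
    listen 443 ssl proxy_protocol;                        # accept Proxy Protocol from frp
    server_name test.example.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location / {
        proxy_set_header X-Real-IP $proxy_protocol_addr;  # pass the real client IP upstream
        proxy_pass http://127.0.0.1:8080;
    }
}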

Require HTTP Basic Auth (Password) for Web Services

Anyone who can guess your tunnel URL can access your local web server unless you protect it with a password.

This enforces HTTP Basic Auth on all requests, with the username and password specified in frpc's configuration file.

It can only be enabled when proxy type is http.

# frpc.ini
[web]
type = http
local_port = 80
custom_domains = test.example.com
http_user = abc
http_pwd = abc

Visit http://test.example.com in the browser and now you are prompted to enter the username and password.

Custom Subdomain Names

It is convenient to use the subdomain option for http and https types when many people share one frps server.

# frps.ini
subdomain_host = frps.com

Resolve *.frps.com to the frps server's IP. This is usually called a Wildcard DNS record.

# frpc.ini
[web]
type = http
local_port = 80
subdomain = test

Now you can visit your web service on test.frps.com.

Note that if subdomain_host is not empty, custom_domains should not be the subdomain of subdomain_host.

URL Routing

frp supports forwarding HTTP requests to different backend web services by URL routing.

locations specifies the prefix of URL used for routing. frps first searches for the most specific prefix location given by literal strings regardless of the listed order.

# frpc.ini
[web01]
type = http
local_port = 80
custom_domains = web.example.com
locations = /

[web02]
type = http
local_port = 81
custom_domains = web.example.com
locations = /news,/about

HTTP requests with URL prefix /news or /about will be forwarded to web02 and other requests to web01.

TCP Port Multiplexing

frp supports receiving TCP sockets directed to different proxies on a single port on frps, similar to vhost_http_port and vhost_https_port.

The only supported TCP port multiplexing method available at the moment is httpconnect - HTTP CONNECT tunnel.

When setting tcpmux_httpconnect_port to anything other than 0 in frps under [common], frps will listen on this port for HTTP CONNECT requests.

The host of the HTTP CONNECT request will be used to match the proxy in frps. Proxy hosts can be configured in frpc by setting custom_domains and/or subdomain under type = tcpmux proxies, when multiplexer = httpconnect.

For example:

# frps.ini
[common]
bind_port = 7000
tcpmux_httpconnect_port = 1337
# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000

[proxy1]
type = tcpmux
multiplexer = httpconnect
custom_domains = test1
local_port = 80

[proxy2]
type = tcpmux
multiplexer = httpconnect
custom_domains = test2
local_port = 8080

In the above configuration, frps can be contacted on port 1337 with an HTTP CONNECT request such as:

CONNECT test1 HTTP/1.1\r\n\r\n

and the connection will be routed to proxy1.

Connecting to frps via HTTP PROXY

frpc can connect to frps through an HTTP proxy if you set the OS environment variable HTTP_PROXY, or if http_proxy is set in the frpc.ini file.

It only works when protocol is tcp.

# frpc.ini
[common]
server_addr = x.x.x.x
server_port = 7000
http_proxy = http://user:pwd@192.168.1.128:8080
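
Alternatively, via the environment variable (same example proxy address as above):

export HTTP_PROXY=http://user:pwd@192.168.1.128:8080
./frpc -c ./frpc.ini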

Range ports mapping

Proxies whose names start with range: support mapping ranges of ports.

# frpc.ini
[range:test_tcp]
type = tcp
local_ip = 127.0.0.1
local_port = 6000-6006,6007
remote_port = 6000-6006,6007

frpc will generate 8 proxies like test_tcp_0, test_tcp_1, ..., test_tcp_7.

Client Plugins

frpc only forwards requests to local TCP or UDP ports by default.

Plugins are used for providing rich features. There are built-in plugins such as unix_domain_socket, http_proxy, socks5, static_file, http2https, https2http, https2https and you can see example usage.

Specify which plugin to use with the plugin parameter. Plugin configuration parameters are prefixed with plugin_. local_ip and local_port are not used for plugins.

Using plugin http_proxy:

# frpc.ini
[http_proxy]
type = tcp
remote_port = 6000
plugin = http_proxy
plugin_http_user = abc
plugin_http_passwd = abc

plugin_http_user and plugin_http_passwd are configuration parameters used in http_proxy plugin.

Server Management Plugins

Read the document.

Find more plugins in gofrp/plugin.

Development Plan

  • Log HTTP request information in frps.

Contributing

Interested in getting involved? We would like to help you!

  • Take a look at our issues list and consider sending a Pull Request to dev branch.
  • If you want to add a new feature, please create an issue first to describe the new feature, as well as the implementation approach. Once a proposal is accepted, create an implementation of the new features and submit it as a pull request.
  • Sorry for my poor English. Improvements for this document are welcome, even some typo fixes.
  • If you have great ideas, send an email to fatedier@gmail.com.

Note: We prefer that you give your advice in issues, so others with the same question can find it quickly and we don't need to answer repeatedly.

Download Details:

Author: Fatedier
Source Code: https://github.com/fatedier/frp 
License: Apache-2.0 license

#go #golang #proxy 


Hotel: A Simple Process Manager for Developers

hotel

Start apps from your browser and use local domains/https automatically

Tip: if you don't enable local domains, hotel can still be used as a catalog of local servers.

Hotel works great on any OS (macOS, Linux, Windows) and with all servers ❤️

  • Node (Express, Webpack)
  • PHP (Laravel, Symfony)
  • Ruby (Rails, Sinatra, Jekyll)
  • Python (Django)
  • Docker
  • Go
  • Apache, Nginx
  • ...

To all the amazing people who have answered the Hotel survey, thanks so much <3 !

v0.8.0 upgrade

.localhost replaces .dev local domain and is the new default. See https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/ for context.

If you're upgrading, please be sure to:

  1. Remove "tld": "dev" from your ~/.hotel/conf.json file
  2. Run hotel stop && hotel start
  3. Refresh your network settings

Support

If you are benefiting from hotel, you can support its development on Patreon.

You can view the list of Supporters here: https://thanks.typicode.com.

Features

  • Local domains - http://project.localhost
  • HTTPS via local self-signed SSL certificate - https://project.localhost
  • Wildcard subdomains - http://*.project.localhost
  • Works everywhere - macOS, Linux and Windows
  • Works with any server - Node, Ruby, PHP, ...
  • Proxy - Map local domains to remote servers
  • System-friendly - No messing with port 80, /etc/hosts, sudo or additional software
  • Fallback URL - http://localhost:2000/project
  • Servers are only started when you access them
  • Plays nice with other servers (Apache, Nginx, ...)
  • Random or fixed ports

Install

npm install -g hotel && hotel start

Hotel requires Node to be installed. If you don't have it, you can install it from https://nodejs.org.

Quick start

Local domains (optional)

To use local .localhost domains, you need to configure your network or browser to use hotel's proxy auto-config file, or you can skip this step for the moment and go directly to http://localhost:2000.

See instructions here.

Add your servers

# Add your server to hotel
~/projects/one$ hotel add 'npm start'
# Or start your server in the terminal as usual and get a temporary local domain
~/projects/two$ hotel run 'npm start' 

Visit localhost:2000 or http(s)://hotel.localhost.

Alternatively you can directly go to

http://localhost:2000/one
http://localhost:2000/two
http(s)://one.localhost
http(s)://two.localhost 

Popular servers examples

Using other servers? Here are some examples to get you started :)

hotel add 'ember server'                               # Ember
hotel add 'jekyll serve --port $PORT'                  # Jekyll
hotel add 'rails server -p $PORT -b 127.0.0.1'         # Rails
hotel add 'python -m SimpleHTTPServer $PORT'           # static file server (Python)
hotel add 'php -S 127.0.0.1:$PORT'                     # PHP
hotel add 'docker-compose up'                          # docker-compose
hotel add 'python manage.py runserver 127.0.0.1:$PORT' # Django
# ...

On Windows use "%PORT%" instead of '$PORT'

See a Docker example here.

Proxy requests to remote servers

Add your remote servers

~$ hotel add http://192.168.1.12:1337 --name aliased-address
~$ hotel add http://google.com --name aliased-domain 

You can now access them using

http://aliased-address.localhost # will proxy requests to http://192.168.1.12:1337
http://aliased-domain.localhost # will proxy requests to http://google.com

CLI usage and options

hotel add <cmd|url> [opts]
hotel run <cmd> [opts]

# Examples

hotel add 'nodemon app.js' --out dev.log  # Set output file (default: none)
hotel add 'nodemon app.js' --name name    # Set custom name (default: current dir name)
hotel add 'nodemon app.js' --port 3000    # Set a fixed port (default: random port)
hotel add 'nodemon app.js' --env PATH     # Store PATH environment variable in server config
hotel add http://192.168.1.10 --name app  # map local domain to URL

hotel run 'nodemon app.js'                # Run server and get a temporary local domain

# Other commands

hotel ls     # List servers
hotel rm     # Remove server
hotel start  # Start hotel daemon
hotel stop   # Stop hotel daemon

To get help

hotel --help
hotel --help <cmd>

Port

For hotel to work, your servers need to listen on the PORT environment variable. Here are some examples showing how you can do it from your code or the command-line:

// in your server code
var port = process.env.PORT || 3000
server.listen(port)

# from the command-line
hotel add 'cmd -p $PORT'  # macOS, Linux
hotel add "cmd -p %PORT%" # Windows

Fallback URL

If you're offline or can't configure your browser to use .localhost domains, you can always access your local servers by going to localhost:2000.

Configurations, logs and self-signed SSL certificate

You can find hotel related files in ~/.hotel :

~/.hotel/conf.json
~/.hotel/daemon.log
~/.hotel/daemon.pid
~/.hotel/key.pem
~/.hotel/cert.pem
~/.hotel/servers/<app-name>.json

By default, hotel uses the following configuration values:

{
  "port": 2000,
  "host": '127.0.0.1',
  
  // Timeout when proxying requests to local domains
  "timeout": 5000,
  
  // Change this if you want to use another tld than .localhost
  "tld": 'localhost', 
  
  // If you're behind a corporate proxy, replace this with your network proxy IP (example: "1.2.3.4:5000")
  "proxy": false
}

To override a value, simply add it to ~/.hotel/conf.json and run hotel stop && hotel start

FAQ

Setting a fixed port

hotel add --port 3000 'server-cmd $PORT' 

Adding X-Forwarded-* headers to requests

hotel add --xfwd 'server-cmd'

Setting HTTP_PROXY env

Use the --http-proxy-env flag when adding your server, or edit your server configuration in ~/.hotel/servers.

hotel add --http-proxy-env 'server-cmd'

Proxying requests to a remote https server

hotel add --change-origin 'https://jsonplaceholder.typicode.com'

When proxying to an https server, you may get an error because your .localhost domain doesn't match the host defined in the server certificate. With this flag, the host header is changed to match the target URL.

ENOSPC and EACCES errors

If you're seeing one of these errors in ~/.hotel/daemon.log, it usually means that there are permission issues. The hotel daemon should be started without sudo, and ~/.hotel should belong to $USER.

# to fix permissions
sudo chown -R $USER: $HOME/.hotel

See also, https://docs.npmjs.com/getting-started/fixing-npm-permissions

Configuring a network proxy IP

If you're behind a corporate proxy, replace "proxy" with your network proxy IP in ~/.hotel/conf.json. For example:

{
  "proxy": "1.2.3.4:5000"
}

Download Details:

Author: Typicode
Source Code: https://github.com/typicode/hotel 
License: MIT license

#javascript #front #local #https #proxy 


Redbird: A Modern Reverse Proxy for Node

Redbird Reverse Proxy

With built-in Cluster, HTTP2, LetsEncrypt and Docker support

Handling dynamic virtual hosts, load balancing, proxying web sockets, and SSL encryption should be easy and robust.

With Redbird you get a complete library to build dynamic reverse proxies with the speed and robustness of http-proxy.

This light-weight package includes everything you need for easy reverse routing of your applications. Great for routing many applications from different domains in one single host, handling SSL with ease, etc.

Developed by manast 

SUPER HOT

Support for HTTP2. You can now enable HTTP2 just by setting the HTTP2 flag to true. Keep in mind that HTTP2 requires SSL/TLS certificates. Thankfully we also support LetsEncrypt so this becomes easy as pie.

HOT

We now have support for automatic generation of SSL certificates using LetsEncrypt. Zero-config setup for your TLS protected services that just works.

Features

  • Flexible and easy routing
  • Websockets
  • Seamless SSL Support (HTTPS -> HTTP proxy)
  • Automatic HTTP to HTTPS redirects
  • Automatic TLS Certificates generation and renewal
  • Load balancer
  • Register and unregister routes programmatically without restart (allows zero downtime deployments)
  • Docker support for automatic registration of running containers
  • Cluster support that enables automatic multi-process operation
  • Based on top of rock-solid node-http-proxy and battle tested on production in many sites
  • Optional logging based on bunyan

Install

npm install redbird

Example

You can programmatically register or unregister routes dynamically even if the proxy is already running:

var proxy = require('redbird')({port: 80});

// OPTIONAL: Setup your proxy but disable the X-Forwarded-For header
var proxy = require('redbird')({port: 80, xfwd: false});

// Route to any global ip
proxy.register("optimalbits.com", "http://167.23.42.67:8000");

// Route to any local ip, for example from docker containers.
proxy.register("example.com", "http://172.17.42.1:8001");

// Route from hostnames as well as paths
proxy.register("example.com/static", "http://172.17.42.1:8002");
proxy.register("example.com/media", "http://172.17.42.1:8003");

// Subdomains, paths, everything just works as expected
proxy.register("abc.example.com", "http://172.17.42.4:8080");
proxy.register("abc.example.com/media", "http://172.17.42.5:8080");

// Route to any href including a target path
proxy.register("foobar.example.com", "http://172.17.42.6:8080/foobar");

// You can also enable load balancing by registering the same hostname with different
// target hosts. The requests will be evenly balanced using a Round-Robin scheme.
proxy.register("balance.me", "http://172.17.40.6:8080");
proxy.register("balance.me", "http://172.17.41.6:8080");
proxy.register("balance.me", "http://172.17.42.6:8080");
proxy.register("balance.me", "http://172.17.43.6:8080");

// You can unregister routes as well
proxy.register("temporary.com", "http://172.17.45.1:8004");
proxy.unregister("temporary.com", "http://172.17.45.1:8004");

// LetsEncrypt support
// With Redbird you can get zero conf and automatic SSL certificates for your domains
proxy.register('example.com', 'http://172.60.80.2:8082', {
  ssl: {
    letsencrypt: {
      email: 'john@example.com', // Domain owner/admin email
      production: true, // WARNING: Only use this flag when the proxy is verified to work correctly to avoid being banned!
    }
  }
});

//
// LetsEncrypt requires a minimal web server for handling the challenges. By default it runs on port 3000
// and can be configured when initiating the proxy. This web server is only used by Redbird internally,
// so most of the time you do not need to do anything special other than avoiding other web services
// running on the same port on the same host.

//
// HTTP2 Support using LetsEncrypt for the certificates
//
var proxy = require('redbird')({
  port: 80, // http port is needed for LetsEncrypt challenge during request / renewal. Also enables automatic http->https redirection for registered https routes.
  letsencrypt: {
    path: __dirname + '/certs',
    port: 9999 // LetsEncrypt minimal web server port for handling challenges. Routed 80->9999, no need to open 9999 in firewall. Default 3000 if not defined.
  },
  ssl: {
    http2: true,
    port: 443, // SSL port used to serve registered https routes with LetsEncrypt certificate.
  }
});

About HTTPS

The HTTPS proxy supports virtual hosts by using SNI (which most modern browsers support: IE7 and above). The proxying is performed by hostname, so you must use the same SSL certificates for a given hostname independently of its paths.

LetsEncrypt

Some important considerations when using LetsEncrypt: you need to agree to the LetsEncrypt terms of service, and the obtained certificates will be copied to disk at the specified path. It's your responsibility to back them up or save them persistently when applicable. Keep in mind that these certificates need to be handled with care so that they cannot be accessed by malicious users. The certificates will be renewed automatically every 2 months, forever.

HTTPS Example

(NOTE: This is a legacy example not needed when using LetsEncrypt)

Conceptually HTTPS is easy, but it is also easy to struggle getting it right. With Redbird it's straightforward; check this complete example:

Generate a localhost development SSL certificate:

/certs $ openssl genrsa -out dev-key.pem 1024
/certs $ openssl req -new -key dev-key.pem -out dev-csr.pem

# IMPORTANT: Do not forget to fill in the Common Name field!
# Common Name (e.g. server FQDN or YOUR name) []:localhost

/certs $ openssl x509 -req -in dev-csr.pem -signkey dev-key.pem -out dev-cert.pem

Note: For production sites you need to buy valid SSL certificates from a trusted authority.

Create a simple redbird based proxy:

var redbird = new require('redbird')({
    port: 8080,

    // Specify filenames to default SSL certificates (in case SNI is not supported by the
    // user's browser)
    ssl: {
        port: 8443,
        key: "certs/dev-key.pem",
        cert: "certs/dev-cert.pem",
    }
});

// Since we will only have one https host, we don't need to specify additional certificates.
redbird.register('localhost', 'http://localhost:8082', {ssl: true});

Test it:

Point your browser to localhost:8080 and you will see how it automatically redirects to your https server and proxies you to your target server.

You can define many virtual hosts, each with its own SSL certificate. And if you do not define any, they will use the default one as in the example above:

redbird.register('example.com', 'http://172.60.80.2:8082', {
    ssl: {
        key: "../certs/example.key",
        cert: "../certs/example.crt",
        ca: "../certs/example.ca"
    }
});

redbird.register('foobar.com', 'http://172.60.80.3:8082', {
    ssl: {
        key: "../certs/foobar.key",
        cert: "../certs/foobar.crt",
    }
});

You can also specify https hosts as targets, and choose whether the connection to the target host should be secure (default is true).

var redbird = require('redbird')({
    port: 80,
    secure: false,
    ssl: {
        port: 443,
        key: "../certs/default.key",
        cert: "../certs/default.crt",
    }
});
redbird.register('tutorial.com', 'https://172.60.80.2:8083', {
    ssl: {
        key: "../certs/tutorial.key",
        cert: "../certs/tutorial.crt",
    }
});

Edge case scenario: you have an HTTPS server with two IP addresses assigned to it and your clients use old software without SNI support. In this case, both IP addresses will receive the same fallback certificate. I.e. some of the domains will get a wrong certificate. To handle this case you can create two HTTPS servers each one bound to its own IP address and serving the appropriate certificate.

var redbird = new require('redbird')({
    port: 8080,

    // Specify filenames to default SSL certificates (in case SNI is not supported by the
    // user's browser)
    ssl: [
        {
            port: 443,
            ip: '123.45.67.10',  // assigned to tutorial.com
            key: 'certs/tutorial.key',
            cert: 'certs/tutorial.crt',
        },
        {
            port: 443,
            ip: '123.45.67.11', // assigned to my-other-domain.com
            key: 'certs/my-other-domain.key',
            cert: 'certs/my-other-domain.crt',
        }
    ]
});

// These certificates will be served if SNI is supported
redbird.register('tutorial.com', 'http://192.168.0.10:8001', {
    ssl: {
        key: 'certs/tutorial.key',
        cert: 'certs/tutorial.crt',
    }
});
redbird.register('my-other-domain.com', 'http://192.168.0.12:8001', {
    ssl: {
        key: 'certs/my-other-domain.key',
        cert: 'certs/my-other-domain.crt',
    }
});

Docker support

If you use docker, you can tell Redbird to automatically register routes based on image names. You register your image name and then every time a container starts from that image, it gets registered, and unregistered if the container is stopped. If you run more than one container from the same image, Redbird will load balance following a round-robin algorithm:

var redbird = require('redbird')({
  port: 8080,
});

var docker = require('redbird').docker;
docker(redbird).register("old.api.com", 'company/api:v1.0.0');
docker(redbird).register("stable.api.com", 'company/api:v2.*');
docker(redbird).register("preview.api.com", 'company/api:v[3-9].*');

etcd backend

Redbird can use node-etcd to automatically create proxy records from an etcd cluster. Configuration is accomplished by passing an options object including the hosts and path variables, which define which etcd cluster hosts, and which directory within those hosts, Redbird should poll for updates.

var redbird = require('redbird')({
  port:8080
});

var options = {
  hosts: ['localhost:2379'], // REQUIRED - you must define an array of cluster hosts
  path: ['redbird'],         // OPTIONAL - path to etcd keys
  ...                        // OPTIONAL - pass in node-etcd connection options
}
require('redbird').etcd(redbird,options);

etcd records can be created in one of two ways: either as a target destination pair, e.g. /redbird/example.com "8.8.8.8", or by passing a JSON object containing multiple hosts and Redbird options:

/redbird/derek.com        { "hosts": ["10.10.10.10", "11.11.11.11"] }
/redbird/johnathan.com    { "ssl": true }
/redbird/jeff.com         { "docker": "alpine/alpine:latest" }

Cluster support

Redbird supports automatic node cluster generation. To use it, just specify the number of processes that you want Redbird to use in the options object. Redbird will automatically restart any worker that crashes, increasing reliability.

var redbird = new require('redbird')({
    port: 8080,
    cluster: 4
});

NTLM support

If you need NTLM support, you can tell Redbird to add the required header handler. This registers a response handler which makes sure the NTLM auth header is properly split into two entries from http-proxy.

var redbird = new require('redbird')({
  port: 8080,
  ntlm: true
});

Custom Resolvers

With custom resolvers, you can decide how the proxy server handles requests. They allow you to extend Redbird considerably, letting you do the following:

  • Do path-based routing.
  • Do header-based routing.
  • Do wildcard domain routing.
  • Use variable upstream servers based on availability, for example in conjunction with Etcd or any other service discovery platform.
  • And more.

Resolvers should:

  1. Be an invokable function. The this context of the function is the Redbird Proxy object. The resolver function takes two parameters: host and url.
  2. Have a priority; resolvers with higher priorities are called before those with lower priorities. The default resolver has a priority of 0.
  3. Return a route object or a string when it matches the parameters passed in. If a string is returned, it must be a valid upstream URL; if an object, it must conform to the following:
  {
     url: string or array of strings [required]; when an array, the urls will be load-balanced across,
     path: path prefix for the route [optional], defaults to '/',
     opts: {} // Redbird target options, see Redbird.register() [optional]
  }

Defining Resolvers

Resolvers can be defined when initializing the proxy object with the resolvers parameter. An example is below:

 // for every URL path that starts with /api/, send request to upstream API service
 var customResolver1 = function(host, url, req) {
   if(/^\/api\//.test(url)){
      return 'http://127.0.0.1:8888';
   }
 };

 // assign high priority
 customResolver1.priority = 100;

 var proxy = new require('redbird')({
    port: 8080,
    resolvers: [
    customResolver1,
    // uses the same priority as default resolver, so will be called after default resolver
    function(host, url, req) {
      if(/\.example\.com/.test(host)){
        return 'http://127.0.0.1:9999'
      }
    }]
 })

Adding and Removing Resolvers at Runtime

You can add or remove resolvers at runtime; this is useful when your upstream is tied to a service discovery system.

var topPriority = function(host, url, req) {
  return /app\.example\.com/.test(host) ? {
    // load balanced
    url: [
    'http://127.0.0.1:8000',
    'http://128.0.1.1:9999'
   ]
  } : null;
};

topPriority.priority = 200;
proxy.addResolver(topPriority);


// remove top priority after 10 minutes,
setTimeout(function() {
  proxy.removeResolver(topPriority);
}, 600000);

Replacing the default HTTP/HTTPS server modules

By passing serverModule: module or ssl: {serverModule : module} you can override the default http/https servers used to listen for connections with another module.

One application for this is to enable support for the PROXY protocol, using a module like findhit-proxywrap.

PROXY protocol is used in tools like HA-Proxy, and can be optionally enabled in Amazon ELB load balancers to pass the original client IP when proxying TCP connections (similar to an X-Forwarded-For header, but for raw TCP). This is useful if you want to run redbird on AWS behind an ELB load balancer, but have redbird terminate any HTTPS connections so you can have SNI / Let's Encrypt / HTTP2 support. With this in place Redbird will see the client's IP address rather than the load-balancer's, and pass this through in an X-Forwarded-For header.

// Options for proxywrap. With strict: false, the proxy will also respond to regular HTTP requests without PROXY information.
proxy_opts = {strict: false};
proxyWrap = require('findhit-proxywrap');
var opts = {
    port: process.env.HTTP_PORT,
    serverModule: proxyWrap.proxy( require('http'), proxy_opts),
    ssl: {
        //Do this if you want http2:
        http2: true,
        serverModule: proxyWrap.proxy(require('spdy').server, proxy_opts),
        //Do this if you only want regular https
        // serverModule: proxyWrap.proxy( require('http'), proxy_opts),
        port: process.env.HTTPS_PORT,
    }
}

// Create the proxy
var proxy = require('redbird')(opts);

Roadmap

  • Statistics (number of connections, load, response times, etc)
  • CORS support.
  • Rate limiter.
  • Simple IP Filtering.
  • Automatic routing via Redis.

Reference

constructor register unregister notFound close

Redbird(opts)

This is the Proxy constructor. Creates a new Proxy and starts listening to the given port.

Arguments

    opts {Object} Options to pass to the proxy:
    {
        port: {Number} // port number that the proxy will listen to.
        ssl: { // Optional SSL proxying.
            port: {Number} // SSL port the proxy will listen to.
            // Default certificates
            key: keyPath,
            cert: certPath,
            ca: caPath // Optional.
            redirectPort: port, // optional https port number to be redirected if entering using http.
            http2: false, //Optional, setting to true enables http2/spdy support
            serverModule : require('https') // Optional, override the https server module used to listen for https or http2 connections.  Default is require('https') or require('spdy')
        }
        bunyan: {Object} Bunyan options. Check bunyan (https://github.com/trentm/node-bunyan) for info.
        If you want to disable bunyan, just set this option to false. Keep in mind that
        having logs enabled incurs a performance penalty of about one order of magnitude per request.
        resolvers: {Function | Array}  a list of custom resolvers. Can be a single function or an array of functions. See more details about resolvers above.
        serverModule : {Module} Optional - Override the http server module used to listen for http connections.  Default is require('http')
    }

Redbird::register(src, target, opts)

Register a new route. As soon as this method is called, the proxy will start routing the sources to the given targets.

Arguments

    src {String|URL} A string or a url parsed by the node url module.
        Note that the port is ignored, since the proxy only listens on one port.

    target {String|URL} A string or a url parsed by node url module.
    opts {Object} route options:
    examples:
    {ssl : true} // Will use default ssl certificates.
    {ssl: {
        redirect: true, // False to disable HTTPS autoredirect to this route.
        key: keyPath,
        cert: certPath,
        ca: caPath, // optional
        secureOptions: constants.SSL_OP_NO_TLSv1 //optional, see below
        }
    }
    {onRequest: (req, res, target) => {
      // called before forwarding is occurred, you can modify req.headers for example
      // return undefined to forward to default target
    }}

Note: if you need to use ssl.secureOptions to disable older, insecure TLS versions, import crypto constants first:

const { constants } = require('crypto')


Redbird.unregister(src, [target])

Unregisters a route. After calling this method, the given route will not be proxied anymore.

Arguments

    src {String|URL} A string or a url parsed by node url module.
    target {String|URL} A string or a url parsed by node url module. If not
    specified, it will unregister all routes for the given source.

Redbird.notFound(callback)

Gives Redbird a callback function with two parameters, the HTTP request and response objects, respectively, which will be called when a proxy route is not found. The default is:

    function(req, res){
      res.statusCode = 404;
      res.write('Not Found');
      res.end();
    };


Arguments

    callback {Function(req, res)} The callback which will be called with the HTTP
      request and response objects when a proxy route is not found.
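
For example, a minimal sketch of overriding the default handler (the response body is illustrative):

proxy.notFound(function (req, res) {
  res.statusCode = 404;
  res.setHeader('Content-Type', 'text/html');
  res.end('<h1>No route registered for ' + req.headers.host + '</h1>');
});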

Redbird.close()

Close the proxy, stopping all incoming connections.


Download Details:

Author: OptimalBits
Source Code: https://github.com/OptimalBits/redbird 
License: BSD-2-Clause license

#javascript #node #proxy 


Betwixt: Web Debugging Proxy Based on Chrome Devtools Network Panel

Betwixt

Betwixt will help you analyze web traffic outside the browser using familiar Chrome DevTools interface.

Betwixt in action

Even more Betwixt action!

Installing

Download the latest release for your operating system, build your own bundle or run Betwixt from the source code.

Setting up

In order to capture traffic, you'll have to direct it to the proxy created by Betwixt in the background (http://localhost:8008).

If you wish to analyze traffic system wide:

  • on macOS - System Preferences → Network → Advanced → Proxies → Web Proxy (HTTP)
  • on Windows - Settings → Network & Internet → Proxy
  • on Ubuntu - All Settings → Network → Network Proxy

Setting up proxy on Windows 10 and macOS

If you want to capture traffic coming from a single terminal use export http_proxy=http://localhost:8008.

Capturing encrypted traffic (HTTPS) requires an additional step; see this doc for instructions.

Contributing

All contributors are very welcome. See CONTRIBUTING.md for more details.

Download Details:

Author: Kdzwinel
Source Code: https://github.com/kdzwinel/betwixt 
License: MIT license

#javascript #electron #devtool #proxy 

Requesthub: Receive, Log, and Proxy HTTP Requests

RequestHub

Receive HTTP requests, display them in your browser, and forward them to other URLs.

RequestHub is an open source project inspired by RequestBin

RequestHub

Overview

I originally developed this to help test and debug webhook systems, but it later found use in our organization to maximize our limited pool of public IPs. We map all of our external service webhooks to one IP, and forward them to internal testing servers. I thought others would have a use for something like this, so I decided to release it as open source software.

Installation

Install

$ go get github.com/kyledayton/requesthub/...

Run

$ export PATH=$PATH:$GOPATH/bin
$ requesthub

This will start the server on port 54321.
There are also a few command line options available:

$ requesthub -h
Usage of requesthub:
  -config="": YAML Configuration File
  -noweb=false: Disable the web UI
  -p=54321: which port to bind to
  -password="": HTTP Basic Auth Password for accessing hub
  -r=256: max requests to store
  -username="": HTTP Basic Auth Username for accessing hub

Note: To enable Basic Auth, you must specify both a username and password.

Usage

Open http://localhost:54321 in your browser. The index page shows a list of your hubs, and a form for creating a hub. Create a hub and it will redirect you to the hub requests page.

To send requests to the hub, send any HTTP request to http://localhost:54321/<HUB_NAME>
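
For example, assuming a hub named test-hub has been created:

$ curl -X POST -H 'Content-Type: application/json' \
    -d '{"event": "ping"}' \
    http://localhost:54321/test-hub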

The hub requests page shows stored requests sent to the hub. There is a clear button, which deletes all stored requests in the hub. In addition, there is a form for setting the forwarding URL of the hub. Setting a URL and clicking 'Update URL' will forward any requests incoming to the hub on to the specified URL.

Configuration

RequestHub can create default hubs on startup. Simply create a YAML file with the appropriate hub names and forwarding URLs, and pass it to the -config option.

config.yml:

hubs:
  test-hub:
    forward_url: 'https://www.example.com/webhook'
  another-hub:

$ requesthub -config config.yml

Reload Config

Changes to the config YAML file can be reloaded on the fly by sending SIGHUP to the requesthub process.
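For example (assuming pgrep is available to look up the process id):

$ kill -HUP $(pgrep requesthub)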

Download Details:

Author: kyledayton
Source Code: https://github.com/kyledayton/requesthub 
License: MIT license

#go #golang #log #proxy #http 


Proxies Nodejs Require in Order to Allow Overriding Dependencies

proxyquire

Proxies nodejs's require in order to make overriding dependencies during testing easy while staying totally unobtrusive.

If you want to stub dependencies for your client side modules, try proxyquireify, a proxyquire for browserify v2 or proxyquire-universal to test in both Node and the browser.

Features

  • no changes to your code are necessary
  • non overridden methods of a module behave like the original
  • mocking framework agnostic, if it can stub a function then it works with proxyquire
  • "use strict" compliant

Example

foo.js:

var path = require('path');

module.exports.extnameAllCaps = function (file) {
  return path.extname(file).toUpperCase();
};

module.exports.basenameAllCaps = function (file) {
  return path.basename(file).toUpperCase();
};

foo.test.js:

var proxyquire =  require('proxyquire')
  , assert     =  require('assert')
  , pathStub   =  { };

// when no overrides are specified, path.extname behaves normally
var foo = proxyquire('./foo', { 'path': pathStub });
assert.strictEqual(foo.extnameAllCaps('file.txt'), '.TXT');

// override path.extname
pathStub.extname = function (file) { return 'Exterminate, exterminate the ' + file; };

// path.extname now behaves as we told it to
assert.strictEqual(foo.extnameAllCaps('file.txt'), 'EXTERMINATE, EXTERMINATE THE FILE.TXT');

// path.basename and all other path module methods still function as before
assert.strictEqual(foo.basenameAllCaps('/a/b/file.txt'), 'FILE.TXT');

You can also replace functions directly:

get.js:

var get    = require('simple-get');
var assert = require('assert');

module.exports = function fetch (callback) {
  get('https://api/users', callback);
};

get.test.js:

var proxyquire = require('proxyquire').noCallThru();
var assert = require('assert');

var fetch = proxyquire('./get', {
  'simple-get': function (url, callback) {
    process.nextTick(function () {
      callback(null, { statusCode: 200 })
    })
  }
});

fetch(function (err, res) {
  assert.strictEqual(res.statusCode, 200)
});

Usage

Two simple steps to override require in your tests:

  • add var proxyquire = require('proxyquire'); to top level of your test file
  • proxyquire(...) the module you want to test and pass along stubs for modules you want to override

API

proxyquire({string} request, {Object} stubs)

  • request: path to the module to be tested e.g., ../lib/foo
  • stubs: key/value pairs of the form { modulePath: stub, ... }
    • module paths are relative to the tested module not the test file
    • therefore specify it exactly as in the require statement inside the tested file
    • values themselves are key/value pairs of functions/properties and the appropriate override

Preventing call thru to original dependency

By default proxyquire calls the function defined on the original dependency whenever it is not found on the stub.

If you prefer a more strict behavior you can prevent callThru on a per module or contextual basis. If your stub is a class or class instance rather than a plain object, you should disable callThru to ensure that it is passed through with the correct prototype.

class MockClass {
  static '@noCallThru' = true;
}

var foo = proxyquire('./foo', {
  './my-class': MockClass
});

Or use a getter when the stub is a class instance:

class MockClass {
  get '@noCallThru'() {
    return true;
  }
}

var foo = proxyquire('./foo', {
  './my-class-instance': new MockClass()
});

If callThru is disabled, you can stub out modules that don't even exist on the machine that your tests are running on. While I wouldn't recommend this in general, I have seen cases where it is legitimately useful (e.g., when requiring global environment configs in json format that may not be available on all machines).

Prevent call thru on path stub:

var foo = proxyquire('./foo', {
  path: {
      extname: function (file) { ... }
    , '@noCallThru': true
  }
});

Prevent call thru for all future stubs resolved by a proxyquire instance

// all stubs resolved by proxyquireStrict will not call through by default
var proxyquireStrict = require('proxyquire').noCallThru();

// all stubs resolved by proxyquireNonStrict will call through by default
var proxyquireNonStrict = require('proxyquire');

Re-enable call thru for all future stubs resolved by a proxyquire instance

proxyquire.callThru();

Per-module call thru configurations override callThru(): passing @noCallThru: false when configuring a module re-enables call thru for it even after noCallThru():

var foo = proxyquire
    .noCallThru()
    .load('./foo', {

        // no calls to original './bar' methods will be made
        './bar' : { toAtm: function (val) { ... } }

        // for 'path' module they will be made
      , path: {
          extname: function (file) { ... }
        , '@noCallThru': false
        }
    });

All together, now

var proxyquire = require('proxyquire').noCallThru();

// all methods for foo's dependencies will have to be stubbed out since proxyquire will not call through
var foo = proxyquire('./foo', stubs);

proxyquire.callThru();

// only some methods for foo's dependencies will have to be stubbed out here since proxyquire will now call through
var foo2 = proxyquire('./foo', stubs);

Using proxyquire to simulate the absence of Modules

Some libraries may behave differently in the presence or absence of a package, for example:

var cluster;
try {
  cluster = require('cluster');
} catch(e) {
  // cluster module is not present.
  cluster = null
}
if (cluster) {
  // Then provide some functionality for a cluster-aware version of Node.js
} else {
  // and some alternative for a cluster-unaware version.
}

To exercise the second branch of the if statement, you can make proxyquire pretend the package isn't present by setting the stub for it to null. This works even if a cluster module is actually present.

var foo = proxyquire('./foo', { cluster: null });

Forcing proxyquire to reload modules

In most situations it is fine to have proxyquire behave exactly like nodejs require, i.e. modules that are loaded once get pulled from the cache the next time.

For some tests, however, you need to ensure that the module gets loaded fresh every time, e.g. when loading it initializes some dependency or some module state.

For this purpose proxyquire exposes the noPreserveCache function.

// ensure we don't get any module from the cache, but to load it fresh every time
var proxyquire = require('proxyquire').noPreserveCache();

var foo1 = proxyquire('./foo', stubs);
var foo2 = proxyquire('./foo', stubs);
var foo3 = require('./foo');

// foo1, foo2 and foo3 are different instances of the same module
assert.notStrictEqual(foo1, foo2);
assert.notStrictEqual(foo1, foo3);

proxyquire.preserveCache allows you to restore the behavior to match nodejs's require again.

proxyquire.preserveCache();

var foo1 = proxyquire('./foo', stubs);
var foo2 = proxyquire('./foo', stubs);
var foo3 = require('./foo');

// foo1, foo2 and foo3 are the same instance
assert.strictEqual(foo1, foo2);
assert.strictEqual(foo1, foo3);

Globally override require

Use the @global property to override every require of a module, even transitively.

Caveat

You should think very hard about alternatives before using this feature. Why? Because it's intrusive, and as you'll see if you read on, it changes the default behavior of module initialization, which means that code runs differently during testing than it does normally.

Additionally it makes it harder to reason about how your tests work.

Yeah, we are mocking fs three levels down in bar, so that's why we have to set it up when testing foo

WAAAT???

If you write proper unit tests you should never have a need for this. So here are some techniques to consider:

  • test each module in isolation
  • make sure your modules are small enough and do only one thing
  • stub out dependencies directly instead of stubbing something inside your dependencies
  • if you are testing bar and bar calls foo.read and foo.read calls fs.readFile proceed as follows
    • do not stub out fs.readFile globally
    • instead stub out foo so you can control what foo.read returns without ever even hitting fs

OK, made it past the warnings and still feel like you need this? Read on then but you are on your own now, this is as far as I'll go ;)

Watch out for more warnings below.

Globally override require during module initialization

// foo.js
var bar = require('./bar');

module.exports = function() {
  bar();
}

// bar.js
var baz = require('./baz');

module.exports = function() {
  baz.method();
}

// baz.js
module.exports = {
  method: function() {
    console.info('hello');
  }
}

// test.js
var bazStub = {
  method: function() {
    console.info('goodbye');
  }
};
  
var stubs = {
  './baz': Object.assign(bazStub, {'@global': true}) 
};

var proxyquire = require('proxyquire');

var foo = proxyquire('./foo', stubs);
foo();  // 'goodbye' is printed to stdout

Be aware that when using global overrides, any module initialization code will be re-executed for each require.

This is not normally the case since node.js caches the return value of require; however, to make global overrides work, proxyquire bypasses the module cache. This may cause unexpected behaviour if a module's initialization causes side effects.

As an example consider this module which opens a file during its initialization:

var fs = require('fs')
  , C = require('C');

// will get executed twice
var file = fs.openSync('/tmp/foo.txt', 'w');

module.exports = function() {
  return new C(file);
};

The file at /tmp/foo.txt could be created and/or truncated more than once.

Why is proxyquire messing with my require cache?

Say you have a module, C, that you wish to stub. You require module A which contains require('B'). Module B in turn contains require('C'). If module B has already been required elsewhere, then module A receives the cached version of module B, and proxyquire would have no opportunity to inject the stub for C.

Therefore when using the @global flag, proxyquire will bypass the require cache.

Globally override require during module runtime

Say you have a module that looks like this:

module.exports = function() {
  var d = require('d');
  d.method();
};

The invocation of require('d') will happen at runtime and not when the containing module is requested via require. If you want to globally override d above, use the @runtimeGlobal property:

var stubs = {
  'd': {
    method: function(val) {
      console.info('hello world');
    },
    '@runtimeGlobal': true
  }
};

This will cause module setup code to be re-executed just like @global, but with the difference that it will happen every time the module is requested via require at runtime, as no module will ever be cached.

This can cause subtle bugs so if you can guarantee that your modules will not vary their require behaviour at runtime, use @global instead.

Configuring proxyquire by setting stub properties

Even if you want to override a module that exports a function directly, you can still set special properties like @global. You can use a named function or assign your stub function to a variable to add properties:

function foo () {}
proxyquire('./bar', {
  foo: Object.assign(foo, {'@global': true})
});

And if your stub is in a separate module where module.exports = foo:

var foostub = require('../stubs/foostub');
proxyquire('bar', {
  foo: Object.assign(foostub, {'@global': true})
});

Backwards Compatibility for proxyquire v0.3.x

Compatibility mode with proxyquire v0.3.x has been removed.

You should update your code to use the newer API but if you can't, pin the version of proxyquire in your package.json file to ~0.6 in order to continue using the older style.

Examples

We are testing foo which depends on bar:

// bar.js module
module.exports = {
    toAtm: function (val) { return  0.986923267 * val; }
};

// foo.js module
// requires bar which we will stub out in tests
var bar = require('./bar');
[ ... ]

Tests:

// foo-test.js module which is one folder below foo.js (e.g., in ./tests/)

/*
 *   Option a) Resolve and override in one step:
 */
var foo = proxyquire('../foo', {
  './bar': { toAtm: function (val) { return 0; /* wonder what happens now */ } }
});

// [ .. run some tests .. ]

/*
 *   Option b) Resolve with empty stub and add overrides later
 */
var barStub = { };

var foo =  proxyquire('../foo', { './bar': barStub });

// Add override
barStub.toAtm = function (val) { return 0; /* wonder what happens now */ };

[ .. run some tests .. ]

// Change override
barStub.toAtm = function (val) { return -1 * val; /* or now */ };

[ .. run some tests .. ]

// Resolve foo and override multiple of its dependencies in one step - oh my!
var foo = proxyquire('./foo', {
    './bar' : {
      toAtm: function (val) { return 0; /* wonder what happens now */ }
    }
  , path    : {
      extname: function (file) { return 'exterminate the name of ' + file; }
    }
});

More Examples

For more examples look inside the examples folder or look through the tests

Specific Examples:

Download Details:

Author: Thlorenz
Source Code: https://github.com/thlorenz/proxyquire 
License: MIT license

#javascript #node #proxy 


Proxymise: Chainable Promise Proxy

Proxymise

Chainable Promise Proxy.

Lightweight ES6 Proxy for Promises with no additional dependencies. Proxymise allows for method and property chaining without need for intermediate then() or await for cleaner and simpler code.

Use

npm i proxymise

const proxymise = require('proxymise');

// Instead of thens
foo.then(value => value.bar())
  .then(value => value.baz())
  .then(value => value.qux)
  .then(value => console.log(value));

// Instead of awaits
const value1 = await foo;
const value2 = await value1.bar();
const value3 = await value2.baz();
const value4 = await value3.qux;
console.log(value4);

// Use proxymise
const value = await proxymise(foo).bar().baz().qux;
console.log(value);

Practical Examples
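As a minimal sketch, consider a hypothetical async client where each method returns a Promise; proxymise lets you chain straight through the intermediate promises:

const proxymise = require('proxymise');

// hypothetical async API: getUser() resolves to an object whose
// getPosts() also returns a Promise
const client = {
  getUser(id) {
    return Promise.resolve({
      getPosts() { return Promise.resolve(['hello', 'world']); }
    });
  }
};

(async () => {
  const posts = await proxymise(client).getUser(1).getPosts();
  console.log(posts); // ['hello', 'world']
})();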

Performance

Proxymised benchmark with 10000 iterations is practically as performant as the non-proxymised one.

node test/benchmark.js 
with proxymise: 3907.582ms
without proxymise: 3762.375ms

Author: Kozhevnikov
Source Code: https://github.com/kozhevnikov/proxymise 
License: MIT license

#javascript #proxy 


A Ruby Gem That Detects Bots, Spiders, Crawlers and Replicants.

Voight-Kampff

Voight-Kampff relies on a user agent list for its detection. It can easily tell you if a request is coming from a crawler, spider or bot. This can be especially helpful in analytics such as page hit tracking.

Installation

gem install voight_kampff

Configuration

A JSON file is used to match user agent strings to a list of known bots.

If you'd like to use an updated list or make your own customizations, run rake voight_kampff:import_user_agents. This will download a crawler-user-agents.json file into the ./config directory.

Note: The pattern entries in the JSON file are evaluated as regular expressions.

Usage

There are three ways to use Voight-Kampff:

  1. Through Rack::Request, such as in your Ruby on Rails controllers:
     request.bot?

  2. Through the VoightKampff module:
     VoightKampff.bot? 'your user agent string'

  3. Through a VoightKampff::Test instance:
     VoightKampff::Test.new('your user agent string').bot?

All of the above respond to both human? and bot? methods, which return true or false.
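For example, a minimal sketch of guarding page-hit tracking in a Rails controller (PageHit is a hypothetical model):

class PagesController < ApplicationController
  def show
    # only count the hit if the request doesn't look like a bot
    PageHit.create!(path: request.path) unless request.bot?
  end
end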

Upgrading to version 1.0

Version 1.0 uses a new source for a list of bot user agent strings since the old source was no longer maintained. This new source, unfortunately, does not include as much detail. Therefore the following methods have been deprecated:

  • #browser?
  • #checker?
  • #downloader?
  • #proxy?
  • #crawler?
  • #spam?

In general the #bot? check tends to cover all of these, and it's unlikely that anybody was getting this granular with their bot checking. So I see it as a small price to pay for an open and up-to-date bot list.

Also, the gem no longer extends ActionDispatch::Request; instead it extends Rack::Request, which ActionDispatch::Request inherits from. This allows the same functionality for Rails while opening the gem up to other Rack-based projects.

FAQ

Q: What's with the name?
A: It's the machine in Blade Runner that is used to test whether someone is a human or a replicant.

Q: I've found a bot that isn't being matched
A: The list is being pulled from github.com/monperrus/crawler-user-agents. If you'd like to have entries added to the list, please create a pull request with that project. Once that pull request is merged, feel free to create an issue here and I'll release a new gem version with the updated list. In the meantime you can always run rake voight_kampff:import_user_agents on your project to get that updated list.

Q: Why don't you use the user agent list from ______________?
A: If you know of a better source for a list of bot user agent strings, please create an issue and let me know. I'm open to switching to a better source or supporting multiple sources. There are others out there but I like the openness of monperrus' list.

Thanks

Thanks to github.com/monperrus/crawler-user-agents for providing an open and easily updatable list of bot user agents.

Contributing

PRs without tests will not get merged. Make sure you write tests for both the API and the Rails app. Feel free to ask for help if you do not know how to write a particular test.

Running Tests

  • bundle install
  • bundle exec rspec

Author: biola
Source code: https://github.com/biola/Voight-Kampff
License: MIT license

#ruby #ruby-on-rails 


Transparent Asynchronous Electron Remoting using IPC

electron-ipc-proxy

Transparent asynchronous remoting between renderer threads and the main thread using IPC.

Overview

Imagine you have a service which exists in your main (nodejs) thread and you want to access the service from one of your windows. By registering the service with electron-ipc-proxy, you will be able to create proxy objects in the browser window which behave as if they were calling the service directly. All communication happens asynchronously (unlike using electron remote) and so you won't freeze up your application.

Example

You have a class which implements "TodoList" communications with the server, and has the following interface:

interface TodoService {
    todos: Observable<Todo>;
    canAddTodos: Promise<boolean>;
    addTodo(user: string, description: string): Promise<void>;
    getTodosFor(user: string): Observable<Todo>;
}

You can make this service available to renderer threads by registering it with electron-ipc-proxy:

import { registerProxy } from 'electron-ipc-proxy'

const todoService = createTodoService(...)
registerProxy(todoService, serviceDescriptor)

And then access it from renderer threads:

import { createProxy } from 'electron-ipc-proxy'
import { Observable } from 'rxjs'

const todoService = createProxy(serviceDescriptor, Observable)

todoService.addTodo('frank', 'write the docs')
    .then(res => console.log('successfully added a todo'))
todoService.todos.subscribe(...)

What is this "serviceDescriptor" parameter? Service descriptors tell electron-ipc-proxy the shape of the object to be proxied and the name of a unique channel to communicate on, they're very simple:

import { ProxyPropertyType } from 'electron-ipc-proxy'

const todoServiceDescriptor = {
    channel: "todoService",
    properties: {
        todos: ProxyPropertyType.Value$,
        canAddTodos: ProxyPropertyType.Value,
        addTodo: ProxyPropertyType.Function,
        getTodosFor: ProxyPropertyType.Function$
    }
}

Notes

All Values and Functions will return promises on the renderer side, no matter how they have been defined on the source object. This is because communication happens asynchronously. For this reason it is recommended that you make them promises on the source object as well, so the interface is the same on both sides.

Use Value$ and Function$ when you want to expose or return an Observable stream across IPC.

Only plain objects can be passed between the 2 sides of the proxy, as the data is serialized to JSON, so no functions or prototypes will make it across to the other side.

Notice the second parameter of createProxy, Observable. This is done so that the library itself does not need to take on a dependency on rxjs. You need to pass in the Observable constructor yourself if you want to consume Observable streams.

The channel specified must be unique and match on both sides of the proxy.

The package exposes two entry points in the "main" and "browser" fields of package.json. "main" is for the main thread and "browser" is for the renderer thread.

See it working

git clone https://github.com/frankwallis/electron-ipc-proxy.git
cd electron-ipc-proxy
npm install
npm run example

Author: frankwallis
Source Code: https://github.com/frankwallis/electron-ipc-proxy 
License: MIT license

#electron #proxy


Laravel Proxy Package for Handling Sessions When Behind Load Balancers

Laravel Trusted Proxies

Setting a trusted proxy allows for correct URL generation, redirecting, session handling and logging in Laravel when behind a reverse proxy such as a load balancer or cache.

Installation

Laravel 5.5+ comes with this package. If you are using Laravel 5.5 or greater, you do not need to add this to your project separately.

Laravel 5.0 - 5.4

To install Trusted Proxy, use:

composer require fideloper/proxy:^3.3

Laravel 4

composer require fideloper/proxy:^2.0

Setup

Refer to the docs above for using Trusted Proxy in Laravel 5.5+. For Laravel 4.0 - 5.4, refer to the wiki.

What Does This Do?

Setting a trusted proxy allows for correct URL generation, redirecting, session handling and logging in Laravel when behind a reverse proxy.

This is useful if your web servers sit behind a load balancer (Nginx, HAProxy, Envoy, ELB/ALB, etc), HTTP cache (CloudFlare, Squid, Varnish, etc), or other intermediary (reverse) proxy.

How Does This Work?

Applications behind a reverse proxy typically read some HTTP headers such as X-Forwarded, X-Forwarded-For, X-Forwarded-Proto (and more) to know about the real end-client making an HTTP request.

If those headers were not set, then the application code would think every incoming HTTP request would be from the proxy.

Laravel (technically the Symfony HTTP base classes) has a concept of a "trusted proxy", where those X-Forwarded headers will only be used if the source IP address of the request is known. In other words, it only trusts those headers if the proxy is trusted.

This package creates an easier interface to that option. You can set the IP addresses of the proxies (that the application would see, so it may be a private network IP address), and the Symfony HTTP classes will know to use the X-Forwarded headers if an HTTP request containing those headers was from the trusted proxy.

Why Does This Matter?

A very common load balancing approach is to send https:// requests to a load balancer, but send http:// requests to the application servers behind the load balancer.

For example, you may send a request in your browser to https://example.org. The load balancer, in turn, might send requests to an application server at http://192.168.1.23.

What if that server returns a redirect, or generates an asset url? The users's browser would get back a redirect or HTML that includes http://192.168.1.23 in it, which is clearly wrong.

What happens is that the application thinks its hostname is 192.168.1.23 and the scheme is http://. It doesn't know that the end client used https://example.org for its web request.

So the application needs to know to read the X-Forwarded headers to get the correct request details (scheme https://, host example.org).

Laravel/Symfony automatically reads those headers, but only if the trusted proxy configuration is set to "trust" the load balancer/reverse proxy.

Note: Many of us use hosted load balancers/proxies such as AWS ELB/ALB, etc. We don't know the IP addresses of those reverse proxies, and so you need to trust all proxies in that case.

The trade-off there is running the security risk of allowing people to potentially spoof the X-Forwarded headers.

IP Addresses by Service

This Wiki page has a list of popular services and their IP addresses of their servers, if available. Any updates or suggestions are welcome!

Author: Fideloper
Source Code: https://github.com/fideloper/TrustedProxy 
License: MIT license

#laravel #proxy #load 


James: Web Debugging Proxy Application

James

James is an HTTP Proxy and Monitor that enables developers to view and intercept requests made from the browser. It is an open-source alternative to the popular developer tool Charles.

James is built with hoxy, electron and react.

Installing

Download the correct version for your OS and run it.


Features

Wildcard URL Mappings

To use wildcards in the "url to map" field, put a "*" between two adjacent slashes. For example:

http://foo.com/version/*/app.js -> http://localhost:8000/app.js

Requests which will be redirected:

  • http://foo.com/version/1/app.js
  • http://foo.com/version/26.8/app.js
  • http://foo.com/version/spaghetti/app.js

Requests which will not be redirected:

  • http://foo.com/version/app.js
  • http://bar.com/version/1/app.js

You can also use multiple wildcards in the same URL.

HTTPS Proxying

To enable HTTPS support follow the instructions in our wiki

Contributing

Feel free to open pull requests and issues! If you need inspiration, take a look in the issue section.

Setting up a development environment

The electron instance will automatically reload whenever a change is made

  1. Clone the repository
  2. npm install
  3. npm start

Other useful npm commands

  • npm test: Runs all tests
  • npm run build: Completely builds the app (no watch)
  • npm run lint: Checks all JS code against defined code styling rules
  • npm run release: Creates a standalone app bundle for all operating systems

Guidelines

  • Make sure that no tests are failing
  • Always add tests for new features
  • Make sure that there are no linting errors in your code (use npm run lint)

Communication

We're using Matrix for communication, and you can use the Vector.im client to join the room. (If it doesn't load when you click "join", refresh the page).

Contributors

  • @davidneat
  • @klipstein
  • @mitchhentges
  • @nerdbeere
  • @tomitm

This project is in maintenance mode

Maintainers or forks welcome: the original James team aren't able to spend the same amount of time on James anymore.

We suggest looking at HTTP Toolkit as an actively maintained open-source alternative.


Author: james-proxy
Source Code: https://github.com/james-proxy/james 
License: MIT License

#electron #http #proxy #javascript 


Toxiproxy: A TCP Proxy To Simulate Network and System Conditions

Toxiproxy

Toxiproxy is a framework for simulating network conditions. It's made specifically to work in testing, CI and development environments, supporting deterministic tampering with connections, but with support for randomized chaos and customization. Toxiproxy is the tool you need to prove with tests that your application doesn't have single points of failure. We've been successfully using it in all development and test environments at Shopify since October, 2014. See our blog post on resiliency for more information.

Toxiproxy usage consists of two parts. A TCP proxy written in Go (what this repository contains) and a client communicating with the proxy over HTTP. You configure your application to make all test connections go through Toxiproxy and can then manipulate their health via HTTP. See Usage below on how to set up your project.

For example, to add 1000ms of latency to the response of MySQL from the Ruby client:

Toxiproxy[:mysql_master].downstream(:latency, latency: 1000).apply do
  Shop.first # this takes at least 1s
end

To take down all Redis instances:

Toxiproxy[/redis/].down do
  Shop.first # this will throw an exception
end

While the examples in this README are currently in Ruby, there's nothing stopping you from creating a client in any other language (see Clients).

Why yet another chaotic TCP proxy?

The existing ones we found didn't provide the kind of dynamic API we needed for integration and unit testing. Linux tools like nc and so on are not cross-platform and require root, which makes them problematic in test, development and CI environments.

Clients

Example

Let's walk through an example with a Rails application. Note that Toxiproxy is in no way tied to Ruby, it's just been our first use case. You can see the full example at sirupsen/toxiproxy-rails-example. To get started right away, jump down to Usage.

For our popular blog, for some reason we're storing the tags for our posts in Redis and the posts themselves in MySQL. We might have a Post class that includes some methods to manipulate tags in a Redis set:

class Post < ActiveRecord::Base
  # Return an Array of all the tags.
  def tags
    TagRedis.smembers(tag_key)
  end

  # Add a tag to the post.
  def add_tag(tag)
    TagRedis.sadd(tag_key, tag)
  end

  # Remove a tag from the post.
  def remove_tag(tag)
    TagRedis.srem(tag_key, tag)
  end

  # Return the key in Redis for the set of tags for the post.
  def tag_key
    "post:tags:#{self.id}"
  end
end

We've decided that erroring while writing to the tag data store (adding/removing) is OK. However, if the tag data store is down, we should be able to see the post with no tags. We could simply rescue the Redis::CannotConnectError around the SMEMBERS Redis call in the tags method. Let's use Toxiproxy to test that.

Since we've already installed Toxiproxy and it's running on our machine, we can skip to step 2. This is where we need to make sure Toxiproxy has a mapping for Redis tags. To config/boot.rb (before any connection is made) we add:

require 'toxiproxy'

Toxiproxy.populate([
  {
    name: "toxiproxy_test_redis_tags",
    listen: "127.0.0.1:22222",
    upstream: "127.0.0.1:6379"
  }
])

Then in config/environments/test.rb we set the TagRedis to be a Redis client that connects to Redis through Toxiproxy by adding this line:

TagRedis = Redis.new(port: 22222)

All calls in the test environment now go through Toxiproxy. That means we can add a unit test where we simulate a failure:

test "should return empty array when tag redis is down when listing tags" do
  @post.add_tag "mammals"

  # Take down all Redises in Toxiproxy
  Toxiproxy[/redis/].down do
    assert_equal [], @post.tags
  end
end

The test fails with Redis::CannotConnectError. Perfect! Toxiproxy took down the Redis successfully for the duration of the closure. Let's fix the tags method to be resilient:

def tags
  TagRedis.smembers(tag_key)
rescue Redis::CannotConnectError
  []
end

The tests pass! We now have a unit test that proves fetching the tags when Redis is down returns an empty array, instead of throwing an exception. For full coverage you should also write an integration test that wraps fetching the entire blog post page when Redis is down.

Full example application is at sirupsen/toxiproxy-rails-example.

Usage

Configuring a project to use Toxiproxy consists of three steps:

  1. Installing Toxiproxy
  2. Populating Toxiproxy
  3. Using Toxiproxy

1. Installing Toxiproxy

Linux

See Releases for the latest binaries and system packages for your architecture.

Ubuntu

$ wget -O toxiproxy-2.1.4.deb https://github.com/Shopify/toxiproxy/releases/download/v2.1.4/toxiproxy_2.1.4_amd64.deb
$ sudo dpkg -i toxiproxy-2.1.4.deb
$ sudo service toxiproxy start

OS X

With Homebrew:

$ brew tap shopify/shopify
$ brew install toxiproxy

Or with MacPorts:

$ port install toxiproxy

Windows

Toxiproxy for Windows is available for download at https://github.com/Shopify/toxiproxy/releases/download/v2.1.4/toxiproxy-server-windows-amd64.exe

Docker

Toxiproxy is available on the GitHub container registry. Old versions <= 2.1.4 are available on Docker Hub.

$ docker pull ghcr.io/shopify/toxiproxy
$ docker run --rm -it ghcr.io/shopify/toxiproxy

If using Toxiproxy from the host rather than other containers, enable host networking with --net=host.

$ docker run --rm --entrypoint="/toxiproxy-cli" -it ghcr.io/shopify/toxiproxy list

Source

If you have Go installed, you can build Toxiproxy from source using the make file:

$ make build
$ ./toxiproxy-server

Upgrading from Toxiproxy 1.x

In Toxiproxy 2.0 several changes were made to the API that make it incompatible with version 1.x. In order to use version 2.x of the Toxiproxy server, you will need to make sure your client library supports the same version. You can check which version of Toxiproxy you are running by looking at the /version endpoint.

See the documentation for your client library for specific library changes. Detailed changes for the Toxiproxy server can be found in CHANGELOG.md.

2. Populating Toxiproxy

When your application boots, it needs to make sure that Toxiproxy knows which endpoints to proxy where. The main parameters are: name, address for Toxiproxy to listen on and the address of the upstream.

Some client libraries have helpers for this task, which is essentially just making sure each proxy in a list is created. Example from the Ruby client:

# Make sure `shopify_test_redis_master` and `shopify_test_mysql_master` are
# present in Toxiproxy
Toxiproxy.populate([
  {
    name: "shopify_test_redis_master",
    listen: "127.0.0.1:22220",
    upstream: "127.0.0.1:6379"
  },
  {
    name: "shopify_test_mysql_master",
    listen: "127.0.0.1:24220",
    upstream: "127.0.0.1:3306"
  }
])

This code needs to run as early in boot as possible, before any code establishes a connection through Toxiproxy. Please check your client library for documentation on the population helpers.

Alternatively use the CLI to create proxies, e.g.:

toxiproxy-cli create -l localhost:26379 -u localhost:6379 shopify_test_redis_master

We recommend a naming scheme such as the above: <app>_<env>_<data store>_<shard>. This makes sure there are no clashes between applications using the same Toxiproxy.

For large applications we recommend storing the Toxiproxy configurations in a separate configuration file. We use config/toxiproxy.json. This file can be passed to the server using the -config option, or loaded by the application to use with the populate function.

An example config/toxiproxy.json:

[
  {
    "name": "web_dev_frontend_1",
    "listen": "[::]:18080",
    "upstream": "webapp.domain:8080",
    "enabled": true
  },
  {
    "name": "web_dev_mysql_1",
    "listen": "[::]:13306",
    "upstream": "database.domain:3306",
    "enabled": true
  }
]

Use ports outside the ephemeral port range to avoid random port conflicts. It's 32,768 to 61,000 on Linux by default, see /proc/sys/net/ipv4/ip_local_port_range.
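For example, to check the range on a Linux host (the output shown is the default):

$ cat /proc/sys/net/ipv4/ip_local_port_range
32768   61000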

3. Using Toxiproxy

To use Toxiproxy, you now need to configure your application to connect through Toxiproxy. Continuing with our example from step two, we can configure our Redis client to connect through Toxiproxy:

# old straight to redis
redis = Redis.new(port: 6380)

# new through toxiproxy
redis = Redis.new(port: 22220)

Now you can tamper with it through the Toxiproxy API. In Ruby:

redis = Redis.new(port: 22220)

Toxiproxy[:shopify_test_redis_master].downstream(:latency, latency: 1000).apply do
  redis.get("test") # will take 1s
end

Or via the CLI:

toxiproxy-cli toxic add -t latency -a latency=1000 shopify_test_redis_master

Please consult your respective client library on usage.

4. Logging

There are the following log levels: panic, fatal, error, warn or warning, info, debug and trace. The level can be set via the LOG_LEVEL environment variable.
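For example, assuming the server binary built or installed above:

$ LOG_LEVEL=debug ./toxiproxy-server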

Toxics

Toxics manipulate the pipe between the client and upstream. They can be added and removed from proxies using the HTTP api. Each toxic has its own parameters to change how it affects the proxy links.

For documentation on implementing custom toxics, see CREATING_TOXICS.md

latency

Add a delay to all data going through the proxy. The delay is equal to latency +/- jitter.

Attributes:

  • latency: time in milliseconds
  • jitter: time in milliseconds

down

Bringing a service down is not technically a toxic in the implementation of Toxiproxy. This is done by POSTing to /proxies/{proxy} and setting the enabled field to false.
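For example, through the HTTP API described below (the proxy name is the one from the earlier populate example):

$ curl -X POST -d '{"enabled": false}' \
       http://localhost:8474/proxies/shopify_test_redis_master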

bandwidth

Limit a connection to a maximum number of kilobytes per second.

Attributes:

  • rate: rate in KB/s

slow_close

Delay the TCP socket from closing until delay has elapsed.

Attributes:

  • delay: time in milliseconds

timeout

Stops all data from getting through, and closes the connection after timeout. If timeout is 0, the connection won't close, and data will be delayed until the toxic is removed.

Attributes:

  • timeout: time in milliseconds

reset_peer

Simulate TCP RESET (Connection reset by peer) on the connections by closing the stub Input immediately or after a timeout.

Attributes:

  • timeout: time in milliseconds

slicer

Slices TCP data up into small bits, optionally adding a delay between each sliced "packet".

Attributes:

  • average_size: size in bytes of an average packet
  • size_variation: variation in bytes of an average packet (should be smaller than average_size)
  • delay: time in microseconds to delay each packet by

limit_data

Closes connection when transmitted data exceeded limit.

  • bytes: number of bytes it should transmit before connection is closed

HTTP API

All communication with the Toxiproxy daemon from the client happens through the HTTP interface, which is described here.

Toxiproxy listens for HTTP on port 8474.

Proxy fields:

  • name: proxy name (string)
  • listen: listen address (string)
  • upstream: proxy upstream address (string)
  • enabled: true/false (defaults to true on creation)

To change a proxy's name, it must be deleted and recreated.

Changing the listen or upstream fields will restart the proxy and drop any active connections.

If listen is specified with a port of 0, toxiproxy will pick an ephemeral port. The listen field in the response will be updated with the actual port.

If you change enabled to false, it will take down the proxy. You can switch it back to true to reenable it.

Toxic fields:

  • name: toxic name (string, defaults to <type>_<stream>)
  • type: toxic type (string)
  • stream: link direction to affect (defaults to downstream)
  • toxicity: probability of the toxic being applied to a link (defaults to 1.0, 100%)
  • attributes: a map of toxic-specific attributes

See Toxics for toxic-specific attributes.

The stream direction must be either upstream or downstream. upstream applies the toxic on the client -> server connection, while downstream applies the toxic on the server -> client connection. This can be used to modify requests and responses separately.
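For example, a sketch of creating an upstream latency toxic via the HTTP API (proxy name assumed from the earlier examples):

$ curl -X POST \
       -d '{"type": "latency", "stream": "upstream", "attributes": {"latency": 1000}}' \
       http://localhost:8474/proxies/shopify_test_redis_master/toxics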

Endpoints

All endpoints are JSON.

  • GET /proxies - List existing proxies and their toxics
  • POST /proxies - Create a new proxy
  • POST /populate - Create or replace a list of proxies
  • GET /proxies/{proxy} - Show the proxy with all its active toxics
  • POST /proxies/{proxy} - Update a proxy's fields
  • DELETE /proxies/{proxy} - Delete an existing proxy
  • GET /proxies/{proxy}/toxics - List active toxics
  • POST /proxies/{proxy}/toxics - Create a new toxic
  • GET /proxies/{proxy}/toxics/{toxic} - Get an active toxic's fields
  • POST /proxies/{proxy}/toxics/{toxic} - Update an active toxic
  • DELETE /proxies/{proxy}/toxics/{toxic} - Remove an active toxic
  • POST /reset - Enable all proxies and remove all active toxics
  • GET /version - Returns the server version number
  • GET /metrics - Returns Prometheus-compatible metrics

Populating Proxies

Proxies can be added and configured in bulk using the /populate endpoint. This is done by passing a json array of proxies to toxiproxy. If a proxy with the same name already exists, it will be compared to the new proxy and replaced if the upstream and listen address don't match.

A /populate call can be included for example at application start to ensure all required proxies exist. It is safe to make this call several times, since proxies will be untouched as long as their fields are consistent with the new data.
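For example, using the config/toxiproxy.json file from above (a minimal sketch):

$ curl -X POST -d @config/toxiproxy.json http://localhost:8474/populate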

CLI Example

$ toxiproxy-cli create -l localhost:26379 -u localhost:6379 redis
Created new proxy redis
$ toxiproxy-cli list
Listen          Upstream        Name  Enabled Toxics
======================================================================
127.0.0.1:26379 localhost:6379  redis true    None

Hint: inspect toxics with `toxiproxy-client inspect <proxyName>`
$ redis-cli -p 26379
127.0.0.1:26379> SET omg pandas
OK
127.0.0.1:26379> GET omg
"pandas"
$ toxiproxy-cli toxic add -t latency -a latency=1000 redis
Added downstream latency toxic 'latency_downstream' on proxy 'redis'
$ redis-cli -p 26379
127.0.0.1:26379> GET omg
"pandas"
(1.00s)
127.0.0.1:26379> DEL omg
(integer) 1
(1.00s)
$ toxiproxy-cli toxic remove -n latency_downstream redis
Removed toxic 'latency_downstream' on proxy 'redis'
$ redis-cli -p 26379
127.0.0.1:26379> GET omg
(nil)
$ toxiproxy-cli delete redis
Deleted proxy redis
$ redis-cli -p 26379
Could not connect to Redis at 127.0.0.1:26379: Connection refused

Metrics

Toxiproxy exposes Prometheus-compatible metrics via its HTTP API at /metrics. See METRICS.md for full descriptions

Frequently Asked Questions

How fast is Toxiproxy? The speed of Toxiproxy depends largely on your hardware, but you can expect a latency of < 100µs when no toxics are enabled. When running with GOMAXPROCS=4 on a Macbook Pro we achieved ~1000MB/s throughput, and as high as 2400MB/s on a higher end desktop. Basically, you can expect Toxiproxy to move data around at least as fast as the app you're testing.

Can Toxiproxy do randomized testing? Many of the available toxics can be configured to have randomness, such as jitter in the latency toxic. There is also a global toxicity parameter that specifies the percentage of connections a toxic will affect. This is most useful for things like the timeout toxic, which would allow X% of connections to timeout.

I am not seeing my Toxiproxy actions reflected for MySQL. MySQL will prefer the local Unix domain socket for some clients, no matter which port you pass it if the host is set to localhost. Configure your MySQL server to not create a socket, and use 127.0.0.1 as the host. Remember to remove the old socket after you restart the server.

Toxiproxy causes intermittent connection failures. Use ports outside the ephemeral port range to avoid random port conflicts. It's 32,768 to 61,000 on Linux by default, see /proc/sys/net/ipv4/ip_local_port_range.

Should I run a Toxiproxy for each application? No, we recommend using the same Toxiproxy for all applications. To distinguish between services we recommend naming your proxies with the scheme: <app>_<env>_<data store>_<shard>. For example, shopify_test_redis_master or shopify_development_mysql_1.

Development

  • make. Build a toxiproxy development binary for the current platform.
  • make all. Build Toxiproxy binaries and packages for all platforms. Requires Go with cross compilation enabled for Linux and Darwin (amd64), as well as goreleaser in your $PATH, to build the binaries and the Linux package.
  • make test. Run the Toxiproxy tests.

Release

See RELEASE.md

Author: Shopify
Source Code: https://github.com/shopify/toxiproxy 
License: MIT license

#go #golang #testing #proxy #chaos 


A Simple Implementation Of Some SSH Protocol Features in Go

easyssh-proxy    

easyssh-proxy provides a simple implementation of some SSH protocol features in Go.

Feature

This project is forked from easyssh but adds the following features.

  •  Support plain text of user private key.
  •  Support key path of user private key.
  •  Support Timeout for the TCP connection to establish.
  •  Support SSH ProxyCommand.
     +--------+       +----------+      +-----------+
     | Laptop | <-->  | Jumphost | <--> | FooServer |
     +--------+       +----------+      +-----------+

                         OR

     +--------+       +----------+      +-----------+
     | Laptop | <-->  | Firewall | <--> | FooServer |
     +--------+       +----------+      +-----------+
     192.168.1.5       121.1.2.3         10.10.29.68

Usage

You can see ssh, scp, ProxyCommand on examples folder.

ssh

See example/ssh/ssh.go

package main

import (
    "fmt"
    "time"

    "github.com/appleboy/easyssh-proxy"
)

func main() {
    // Create MakeConfig instance with remote username, server address and path to private key.
    ssh := &easyssh.MakeConfig{
        User:   "appleboy",
        Server: "example.com",
        // Optional key or Password; without either we try to contact your ssh-agent SOCKET
        // Password: "password",
        // Paste your source content of private key
        // Key: `-----BEGIN RSA PRIVATE KEY-----
        // MIIEpAIBAAKCAQEA4e2D/qPN08pzTac+a8ZmlP1ziJOXk45CynMPtva0rtK/RB26
        // 7XC9wlRna4b3Ln8ew3q1ZcBjXwD4ppbTlmwAfQIaZTGJUgQbdsO9YA==
        // -----END RSA PRIVATE KEY-----
        // `,
        KeyPath: "/Users/username/.ssh/id_rsa",
        Port:    "22",
        Timeout: 60 * time.Second,

        // Parse PrivateKey With Passphrase
        Passphrase: "1234",

        // Optional fingerprint SHA256 verification
        // Get Fingerprint: ssh.FingerprintSHA256(key)
        // Fingerprint: "SHA256:mVPwvezndPv/ARoIadVY98vAC0g+P/5633yTC4d/wXE"

        // Enable the use of insecure ciphers and key exchange methods.
        // This enables the use of the following insecure ciphers and key exchange methods:
        // - aes128-cbc
        // - aes192-cbc
        // - aes256-cbc
        // - 3des-cbc
        // - diffie-hellman-group-exchange-sha256
        // - diffie-hellman-group-exchange-sha1
        // Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
        // UseInsecureCipher: true,
    }

    // Call Run method with command you want to run on remote server.
    stdout, stderr, done, err := ssh.Run("ls -al", 60*time.Second)
    // Handle errors
    if err != nil {
        panic("Can't run remote command: " + err.Error())
    } else {
        fmt.Println("don is :", done, "stdout is :", stdout, ";   stderr is :", stderr)
    }
}

scp

See example/scp/scp.go

package main

import (
    "fmt"

    "github.com/appleboy/easyssh-proxy"
)

func main() {
    // Create MakeConfig instance with remote username, server address and path to private key.
    ssh := &easyssh.MakeConfig{
        User:     "appleboy",
        Server:   "example.com",
        Password: "123qwe",
        Port:     "22",
    }

    // Call Scp method with file you want to upload to remote server.
    // Please make sure the `tmp` folder exists.
    err := ssh.Scp("/root/source.csv", "/tmp/target.csv")

    // Handle errors
    if err != nil {
        panic("Can't run remote command: " + err.Error())
    } else {
        fmt.Println("success")
    }
}

SSH ProxyCommand

See example/proxy/proxy.go

    ssh := &easyssh.MakeConfig{
        User:    "drone-scp",
        Server:  "localhost",
        Port:    "22",
        KeyPath: "./tests/.ssh/id_rsa",
        Proxy: easyssh.DefaultConfig{
            User:    "drone-scp",
            Server:  "localhost",
            Port:    "22",
            KeyPath: "./tests/.ssh/id_rsa",
        },
    }

SSH Stream Log

See example/stream/stream.go

func main() {
    // Create MakeConfig instance with remote username, server address and path to private key.
    ssh := &easyssh.MakeConfig{
        Server:  "localhost",
        User:    "drone-scp",
        KeyPath: "./tests/.ssh/id_rsa",
        Port:    "22",
        Timeout: 60 * time.Second,
    }

    // Call Run method with command you want to run on remote server.
    stdoutChan, stderrChan, doneChan, errChan, err := ssh.Stream("for i in {1..5}; do echo ${i}; sleep 1; done; exit 2;", 60*time.Second)
    // Handle errors
    if err != nil {
        panic("Can't run remote command: " + err.Error())
    } else {
        // read from the output channel until the done signal is passed
        isTimeout := true
    loop:
        for {
            select {
            case isTimeout = <-doneChan:
                break loop
            case outline := <-stdoutChan:
                fmt.Println("out:", outline)
            case errline := <-stderrChan:
                fmt.Println("err:", errline)
            case err = <-errChan:
            }
        }

        // get exit code or command error.
        if err != nil {
            fmt.Println("err: " + err.Error())
        }

        // command time out
        if !isTimeout {
            fmt.Println("Error: command timeout")
        }
    }
}

Author: Appleboy
Source Code: https://github.com/appleboy/easyssh-proxy 
License: MIT license

#go #golang #ssh #proxy 


How To Secure A Linux Server

An evolving how-to guide for securing a Linux server that, hopefully, also teaches you a little about security and why it matters.

Introduction

Guide Objective

This guide's purpose is to teach you how to secure a Linux server.

There are a lot of things you can do to secure a Linux server and this guide will attempt to cover as many of them as possible. More topics/material will be added as I learn, or as folks contribute.

(Table of Contents)

Why Secure Your Server

I assume you're using this guide because you, hopefully, already understand why good security is important. That is a heavy topic unto itself and breaking it down is out-of-scope for this guide. If you don't know the answer to that question, I advise you to research it first.

At a high level, the second a device, like a server, is in the public domain -- i.e. visible to the outside world -- it becomes a target for bad-actors. An unsecured device is a playground for bad-actors who want access to your data, or to use your server as another node for their large-scale DDOS attacks.

What's worse is, without good security, you may never know if your server has been compromised. A bad-actor may have gained unauthorized access to your server and copied your data without changing anything so you'd never know. Or your server may have been part of a DDOS attack and you wouldn't know. Look at many of the large scale data breaches in the news -- the companies often did not discover the data leak or intrusion until long after the bad-actors were gone.

Contrary to popular belief, bad-actors don't always want to change something or lock you out of your data for money. Sometimes they just want the data on your server for their data warehouses (there is big money in big data) or to covertly use your server for their nefarious purposes.

(Table of Contents)

Why Yet Another Guide

This guide may appear duplicative/unnecessary because there are countless articles online that tell you how to secure Linux, but the information is spread across different articles that cover different things in different ways. Who has time to scour through hundreds of articles?

As I was going through research for my Debian build, I kept notes. At the end I realized that, along with what I already knew, and what I was learning, I had the makings of a how-to guide. I figured I'd put it online to hopefully help others learn, and save time.

I've never found one guide that covers everything -- this guide is my attempt.

Many of the things covered in this guide may be rather basic/trivial, but most of us do not install Linux every day and it is easy to forget those basic things.

IT automation tools like Ansible, Chef, Jenkins, Puppet, etc. help with the tedious task of installing/configuring a server but IMHO they are better suited for multiple or large scale deployments. IMHO, the overhead required to use those kinds of automation tools is wholly unnecessary for a one-time single server install for home use.

Other Guides

There are many guides provided by experts, industry leaders, and the distributions themselves. It is not practical, and sometimes against copyright, to include everything from those guides. I recommend you check them out before starting with this guide.

(Table of Contents)

To Do / To Add

Guide Overview

About This Guide

This guide...

  • ...is a work in progress.
  • ...is focused on at-home Linux servers. All of the concepts/recommendations here apply to larger/professional environments but those use-cases call for more advanced and specialized configurations that are out-of-scope for this guide.
  • ...does not teach you about Linux, how to install Linux, or how to use it. Check https://linuxjourney.com/ if you're new to Linux.
  • ...is meant to be Linux distribution agnostic.
  • ...does not teach you everything you need to know about security nor does it get into all aspects of system/server security. For example, physical security is out of scope for this guide.
  • ...does not talk about how programs/tools work, nor does it delve into their nooks and crannies. Most of the programs/tools this guide references are very powerful and highly configurable. The goal is to cover the bare necessities -- enough to whet your appetite and make you hungry enough to want to go and learn more.
  • ...aims to make it easy by providing code you can copy-and-paste. You might need to modify the commands before you paste so keep your favorite text editor handy.
  • ...is organized in an order that makes logical sense to me -- i.e. securing SSH before installing a firewall. As such, this guide is intended to be followed in the order it is presented but it is not necessary to do so. Just be careful if you do things in a different order -- some sections require previous sections to be completed.

My Use-Case

There are many types of servers and different use-cases. While I want this guide to be as generic as possible, there will be some things that may not apply to all/other use-cases. Use your best judgement when going through this guide.

To help put context to many of the topics covered in this guide, my use-case/configuration is:

  • A desktop class computer...
  • With a single NIC...
  • Connected to a consumer grade router...
  • Getting a dynamic WAN IP provided by the ISP...
  • With WAN+LAN on IPV4...
  • And LAN using NAT...
  • That I want to be able to SSH to remotely from unknown computers and unknown locations (i.e. a friend's house).

Editing Configuration Files - For The Lazy

I am very lazy and do not like to edit files by hand if I don't need to. I also assume everyone else is just like me. :)

So, when and where possible, I have provided code snippets to quickly do what is needed, like add or change a line in a configuration file.

The code snippets use basic commands like echo, cat, sed, awk, and grep. How the code snippets work, like what each command/part does, is out of scope for this guide -- the man pages are your friend.

Note: The code snippets do not validate/verify the change went through -- i.e. the line was actually added or changed. I'll leave the verifying part in your capable hands. The steps in this guide do include taking backups of all files that will be changed.

Not all changes can be automated with code snippets. Those changes need good, old fashioned, manual editing. For example, you can't just append a line to an INI type file. Use your favorite Linux text editor.
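As a flavor of what such snippets look like, here is a hypothetical example (the file and setting are illustrative only):

# back up the file first, then change the setting in place
sudo cp --archive /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sudo sed -i -r 's/^#?PermitRootLogin .*/PermitRootLogin no/' /etc/ssh/sshd_config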

Contributing

I wanted to put this guide on GitHub to make it easy to collaborate. The more folks that contribute, the better and more complete this guide will become.

To contribute you can fork and submit a pull request or submit a new issue.

Before You Start

Identify Your Principles

Before you start you will want to identify what your Principles are. What is your threat model? Some things to think about:

  • Why do you want to secure your server?
  • How much security do you want or not want?
  • How much convenience are you willing to compromise for security and vice-versa?
  • What are the threats you want to protect against? What are the specifics to your situation? For example:
    • Is physical access to your server/network a possible attack vector?
    • Will you be opening ports on your router so you can access your server from outside your home?
    • Will you be hosting a file share on your server that will be mounted on a desktop class machine? What is the possibility of the desktop machine getting infected and, in turn, infecting the server?
  • Do you have a means of recovering if your security implementation locks you out of your own server? For example, you disabled root login or password protected GRUB.

These are just a few things to think about. Before you start securing your server you will want to understand what you're trying to protect against and why so you know what you need to do.

Picking A Linux Distribution

This guide is intended to be distribution agnostic so users can use any distribution they want. With that said, there are a few things to keep in mind:

You want a distribution that...

  • ...is stable. Unless you like debugging issues at 2 AM, you don't want an unattended upgrade, or a manual package/system update, to render your server inoperable. But this also means you're okay with not running the latest, greatest, bleeding edge software.
  • ...stays up-to-date with security patches. You can secure everything on your server, but if the core OS or applications you're running have known vulnerabilities, you'll never be safe.
  • ...you're familiar with. If you don't know Linux, I would advise you play around with one before you try to secure it. You should be comfortable with it and know your way around, like how to install software, where configuration files are, etc...
  • ...is well supported. Even the most seasoned admin needs help every now and then. Having a place to go for help will save your sanity.

Installing Linux

Installing Linux is out-of-scope for this guide because each distribution does it differently and the installation instructions are usually well documented. If you need help, start with your distribution's documentation. Regardless of the distribution, the high-level process usually goes like so:

  1. download the ISO
  2. burn/copy/transfer it to your install medium (e.g. a CD or USB stick)
  3. boot your server from your install medium
  4. follow the prompts to install

Where applicable, use the expert install option so you have tighter control of what is running on your server. Only install what you absolutely need. I, personally, do not install anything other than SSH. Also, tick the Disk Encryption option.

Pre/Post Installation Requirements

  • If you're opening ports on your router so you can access your server from the outside, disable the port forwarding until your system is up and secured.
  • Unless you're doing everything physically connected to your server, you'll need remote access so be sure SSH works.
  • Keep your system up-to-date (i.e. sudo apt update && sudo apt upgrade on Debian based systems).
  • Make sure you perform any tasks specific to your setup like:
    • Configuring network
    • Configuring mount points in /etc/fstab
    • Creating the initial user accounts
    • Installing core software you'll want like man
    • Etc...
  • Your server will need to be able to send e-mails so you can get important security alerts. If you're not setting up a mail server check Gmail and Exim4 As MTA With Implicit TLS.
  • I would also recommend you go through the CIS Benchmarks before you start with this guide.

Other Important Notes

  • This guide is being written and tested on Debian. Most things below should work on other distributions. If you find something that does not, please contact me. The main thing that separates each distribution will be its package management system. Since I use Debian, I will provide the appropriate apt commands that should work on all Debian based distributions. If someone is willing to provide the respective commands for other distributions, I will add them.
  • File paths and settings also may differ slightly -- check with your distribution's documentation if you have issues.
  • Read the whole guide before you start. Your use-case and/or principles may call for not doing something or for changing the order.
  • Do not blindly copy-and-paste without understanding what you're pasting. Some commands will need to be modified for your needs before they'll work -- usernames for example.

The SSH Server

Important Note Before You Make SSH Changes

It is highly advised you keep a 2nd terminal open to your server before you make and apply SSH configuration changes. This way if you lock yourself out of your 1st terminal session, you still have one session connected so you can fix it.

Thank you to Sonnenbrand for this idea.

SSH Public/Private Keys

Why

Using SSH public/private keys is more secure than using a password. It also makes it easier and faster to connect to your server because you don't have to enter a password.

How It Works

Check the references below for more details but, at a high level, public/private keys work by using a pair of keys to verify identity.

  1. One key, the public key, can only encrypt data, not decrypt it
  2. The other key, the private key, can decrypt the data

For SSH, a public and private key is created on the client. You want to keep both keys secure, especially the private key. Even though the public key is meant to be public, it is wise to make sure neither key falls into the wrong hands.

When you connect to an SSH server, SSH will look for a public key that matches the client you're connecting from in the file ~/.ssh/authorized_keys on the server you're connecting to. Notice the file is in the home folder of the ID you're trying to connect to. So, after creating the public key, you need to append it to ~/.ssh/authorized_keys. One approach is to copy it to a USB stick and physically transfer it to the server. Another approach is to use ssh-copy-id to transfer and append the public key.
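
If you'd rather append the key manually over SSH instead of using ssh-copy-id, a minimal sketch (user and server are placeholders for your actual account and host) looks like this:

# create ~/.ssh on the server if needed and lock down its permissions
ssh user@server 'mkdir -p ~/.ssh && chmod 700 ~/.ssh'

# append the public key and make sure the file isn't world-readable
cat ~/.ssh/id_ed25519.pub | ssh user@server 'cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'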

After the keys have been created and the public key has been appended to ~/.ssh/authorized_keys on the host, SSH uses the public and private keys to verify identity and then establish a secure connection. How identity is verified is a complicated process but Digital Ocean has a very nice write-up of how it works. At a high level, identity is verified by the server encrypting a challenge message with the public key, then sending it to the client. If the client cannot decrypt the challenge message with the private key, the identity can't be verified and a connection will not be established.

Public/private keys are considered more secure because you need the private key to establish an SSH connection. If you set PasswordAuthentication no in /etc/ssh/sshd_config, then SSH won't let you connect without the private key.

You can also set a pass-phrase for the keys which would require you to enter the key pass-phrase when connecting using public/private keys. Keep in mind doing this means you can't use the key for automation because you'll have no way to send the passphrase in your scripts. ssh-agent is a program that is shipped in many Linux distros (and usually already running) that will allow you to hold your unencrypted private key in memory for a configurable duration. Simply run ssh-add and it will prompt you for your passphrase. You will not be prompted for your passphrase again until the configurable duration has passed.
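
For example, a typical ssh-agent session looks like this (the 1 hour lifetime is just an illustration; pick what suits you):

# start an agent if your distribution isn't already running one
eval "$(ssh-agent -s)"

# hold the decrypted private key in memory for 1 hour
ssh-add -t 1h ~/.ssh/id_ed25519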

We will be using Ed25519 keys which, according to https://linux-audit.com/:

It is using an elliptic curve signature scheme, which offers better security than ECDSA and DSA. At the same time, it also has good performance.

Goals

  • Ed25519 public/private SSH keys:
    • private key on your client
    • public key on your server

Notes

  • You'll need to do this step for every computer and account you'll be connecting to your server from/as.

References

Steps

From the computer you're going to use to connect to your server, the client, not the server itself, create an Ed25519 key with ssh-keygen:

ssh-keygen -t ed25519
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/user/.ssh/id_ed25519):
Created directory '/home/user/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_ed25519.
Your public key has been saved in /home/user/.ssh/id_ed25519.pub.
The key fingerprint is:
SHA256:F44D4dr2zoHqgj0i2iVIHQ32uk/Lx4P+raayEAQjlcs user@client
The key's randomart image is:
+--[ED25519 256]--+
|xxxx  x          |
|o.o +. .         |
| o o oo   .      |
|. E oo . o .     |
| o o. o S o      |
|... .. o o       |
|.+....+ o        |
|+.=++o.B..       |
|+..=**=o=.       |
+----[SHA256]-----+

Note: If you set a passphrase, you'll need to enter it every time you connect to your server using this key, unless you're using ssh-agent.

Now you need to append the public key ~/.ssh/id_ed25519.pub from your client to the ~/.ssh/authorized_keys file on your server. Since we're presumably still at home on the LAN, we're probably safe from man-in-the-middle (MITM) attacks, so we will use ssh-copy-id to transfer and append the public key:

ssh-copy-id user@server
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/user/.ssh/id_ed25519.pub"
The authenticity of host 'host (192.168.1.96)' can't be established.
ECDSA key fingerprint is SHA256:QaDQb/X0XyVlogh87sDXE7MR8YIK7ko4wS5hXjRySJE.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
user@host's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'user@host'"
and check to make sure that only the key(s) you wanted were added.

Now would be a good time to perform any tasks specific to your setup.

Create SSH Group For AllowGroups

Why

To make it easy to control who can SSH to the server. By using a group, we can quickly add/remove accounts in order to allow or disallow SSH access to the server.

How It Works

We will use the AllowGroups option in SSH's configuration file /etc/ssh/sshd_config to tell the SSH server to only allow users to SSH in if they are a member of a certain UNIX group. Anyone not in the group will not be able to SSH in.

Goals

Notes

References

  • man groupadd
  • man usermod

Steps

Create a group:

sudo groupadd sshusers

Add account(s) to the group:

sudo usermod -a -G sshusers user1
sudo usermod -a -G sshusers user2
sudo usermod -a -G sshusers ...

You'll need to do this for every account on your server that needs SSH access.
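
You can verify the group and its members with getent (the GID shown is just an example):

getent group sshusers
sshusers:x:1001:user1,user2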

Secure /etc/ssh/sshd_config

Why

SSH is a door into your server. This is especially true if you are opening ports on your router so you can SSH to your server from outside your home network. If it is not secured properly, a bad-actor could use it to gain unauthorized access to your system.

How It Works

/etc/ssh/sshd_config is the default configuration file that the SSH server uses. We will use this file to tell the SSH server what options to use.

Goals

  • a secure SSH configuration

Notes

References

Steps

Make a backup of OpenSSH server's configuration file /etc/ssh/sshd_config and remove comments to make it easier to read:

sudo cp --archive /etc/ssh/sshd_config /etc/ssh/sshd_config-COPY-$(date +"%Y%m%d%H%M%S")
sudo sed -i -r -e '/^#|^$/ d' /etc/ssh/sshd_config

Edit /etc/ssh/sshd_config then find and edit or add these settings that should be applied regardless of your configuration/setup:

Note: SSH does not like duplicate contradicting settings. For example, if you have ChallengeResponseAuthentication no and then ChallengeResponseAuthentication yes, SSH will respect the first one and ignore the second. Your /etc/ssh/sshd_config file may already have some of the settings/lines below. To avoid issues you will need to manually go through your /etc/ssh/sshd_config file and address any duplicate contradicting settings.

########################################################################################################
# start settings from https://infosec.mozilla.org/guidelines/openssh#modern-openssh-67 as of 2019-01-01
########################################################################################################

# Supported HostKey algorithms by order of preference.
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key

KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256

Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr

MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com

# LogLevel VERBOSE logs user's key fingerprint on login. Needed to have a clear audit track of which key was used to log in.
LogLevel VERBOSE

# Use kernel sandbox mechanisms where possible in unprivileged processes
# Systrace on OpenBSD, Seccomp on Linux, seatbelt on MacOSX/Darwin, rlimit elsewhere.
# Note: This setting is deprecated in OpenSSH 7.5 (https://www.openssh.com/txt/release-7.5)
# UsePrivilegeSeparation sandbox

########################################################################################################
# end settings from https://infosec.mozilla.org/guidelines/openssh#modern-openssh-67 as of 2019-01-01
########################################################################################################

# don't let users set environment variables
PermitUserEnvironment no

# Log sftp level file access (read/write/etc.) that would not be easily logged otherwise.
Subsystem sftp  internal-sftp -f AUTHPRIV -l INFO

# only use the newer, more secure protocol
Protocol 2

# disable X11 forwarding as X11 is very insecure
# you really shouldn't be running X on a server anyway
X11Forwarding no

# disable port forwarding
AllowTcpForwarding no
AllowStreamLocalForwarding no
GatewayPorts no
PermitTunnel no

# don't allow login if the account has an empty password
PermitEmptyPasswords no

# ignore .rhosts and .shosts
IgnoreRhosts yes

# verify hostname matches IP
UseDNS yes

Compression no
TCPKeepAlive no
AllowAgentForwarding no
PermitRootLogin no

# don't allow .rhosts or /etc/hosts.equiv
HostbasedAuthentication no

Then find and edit or add these settings, and set values as per your requirements:

  • AllowGroups -- valid values: local UNIX group name -- example: AllowGroups sshusers -- group to allow SSH access to
  • ClientAliveCountMax -- valid values: number -- example: ClientAliveCountMax 0 -- maximum number of client alive messages sent without response
  • ClientAliveInterval -- valid values: number of seconds -- example: ClientAliveInterval 300 -- timeout in seconds before a response request
  • ListenAddress -- valid values: space separated list of local addresses -- examples: ListenAddress 0.0.0.0, ListenAddress 192.168.1.100 -- local addresses sshd should listen on -- see Issue #1 for important details
  • LoginGraceTime -- valid values: number of seconds -- example: LoginGraceTime 30 -- time in seconds before login times out
  • MaxAuthTries -- valid values: number -- example: MaxAuthTries 2 -- maximum allowed attempts to log in
  • MaxSessions -- valid values: number -- example: MaxSessions 2 -- maximum number of open sessions
  • MaxStartups -- valid values: number -- example: MaxStartups 2 -- maximum number of concurrent unauthenticated connections
  • PasswordAuthentication -- valid values: yes or no -- example: PasswordAuthentication no -- whether login with a password is allowed
  • Port -- valid values: any open/available port number -- example: Port 22 -- port that sshd should listen on

Check man sshd_config for more details on what these settings mean.

Make sure there are no duplicate settings that contradict each other. The below command should not have any output.

awk 'NF && $1!~/^(#|HostKey)/{print $1}' /etc/ssh/sshd_config | sort | uniq -c | grep -v ' 1 '

Restart ssh:

sudo service sshd restart

You can verify the configuration worked with sshd -T and check the output:

sudo sshd -T
port 22
addressfamily any
listenaddress [::]:22
listenaddress 0.0.0.0:22
usepam yes
logingracetime 30
x11displayoffset 10
maxauthtries 2
maxsessions 2
clientaliveinterval 300
clientalivecountmax 0
streamlocalbindmask 0177
permitrootlogin no
ignorerhosts yes
ignoreuserknownhosts no
hostbasedauthentication no
...
subsystem sftp internal-sftp -f AUTHPRIV -l INFO
maxstartups 2:30:2
permittunnel no
ipqos lowdelay throughput
rekeylimit 0 0
permitopen any

Remove Short Diffie-Hellman Keys

Why

Per Mozilla's OpenSSH guidelines for OpenSSH 6.7+, "all Diffie-Hellman moduli in use should be at least 3072-bit-long".

The Diffie-Hellman algorithm is used by SSH to establish a secure connection. The larger the moduli (key size), the stronger the encryption.

Goals

  • remove all Diffie-Hellman keys that are less than 3072 bits long

References

Steps

Make a backup of SSH's moduli file /etc/ssh/moduli:

sudo cp --archive /etc/ssh/moduli /etc/ssh/moduli-COPY-$(date +"%Y%m%d%H%M%S")

Remove short moduli:

sudo awk '$5 >= 3071' /etc/ssh/moduli | sudo tee /etc/ssh/moduli.tmp
sudo mv /etc/ssh/moduli.tmp /etc/ssh/moduli
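
To verify no short moduli remain, the below command should output 0:

awk '$5 < 3071' /etc/ssh/moduli | wc -l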

2FA/MFA for SSH

Why

Even though SSH is a pretty good security guard for your doors and windows, it is still a visible door that bad-actors can see and try to brute-force their way in. Fail2ban will monitor for these brute-force attempts, but there is no such thing as being too secure. Requiring two factors adds an extra layer of security.

Using Two Factor Authentication (2FA) / Multi Factor Authentication (MFA) requires anyone entering to present two keys, which makes it harder for bad-actors. The two keys are:

  1. Their password
  2. A 6 digit token that changes every 30 seconds

Without both keys, they won't be able to get in.

Why Not

Many folks might find the experience cumbersome or annoying. And, access to your system is dependent on the accompanying authenticator app that generates the code.

How It Works

On Linux, PAM is responsible for authentication. There are four tasks to PAM that you can read about at https://en.wikipedia.org/wiki/Linux_PAM. This section talks about the authentication task.

When you log into a server, be it directly from the console or via SSH, the door you came through will send the request to the authentication task of PAM and PAM will ask for and verify your password. You can customize the rules each door uses. For example, you could have one set of rules when logging in directly from the console and another set of rules for when logging in via SSH.

This section will alter the authentication rules for when logging in via SSH to require both a password and a 6 digit code.

We will use Google's libpam-google-authenticator PAM module to create and verify a TOTP key. https://fastmail.blog/2016/07/22/how-totp-authenticator-apps-work/ and https://jemurai.com/2018/10/11/how-it-works-totp-based-mfa/ have very good writeups of how TOTP works.

What we will do is tell the server's SSH PAM configuration to ask the user for their password and then their numeric token. PAM will then verify the user's password and, if it is correct, then it will route the authentication request to libpam-google-authenticator which will ask for and verify your 6 digit token. If, and only if, everything is good will the authentication succeed and user be allowed to log in.

Goals

  • 2FA/MFA enabled for all SSH connections

Notes

  • Before you do this, you should have an idea of how 2FA/MFA works and you'll need an authenticator app on your phone to continue.
  • We'll use google-authenticator-libpam.
  • With the below configuration, a user will only need to enter their 2FA/MFA code if they are logging on with their password but not if they are using SSH public/private keys. Check the documentation on how to change this behavior to suit your requirements.

References

Steps

Install libpam-google-authenticator.

On Debian based systems:

sudo apt install libpam-google-authenticator

Make sure you're logged in as the ID you want to enable 2FA/MFA for and execute google-authenticator to create the necessary token data:

google-authenticator
Do you want authentication tokens to be time-based (y/n) y
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/user@host%3Fsecret%3DR4ZWX34FQKZROVX7AGLJ64684Y%26issuer%3Dhost

...

Your new secret key is: R3NVX3FFQKZROVX7AGLJUGGESY
Your verification code is 751419
Your emergency scratch codes are:
  12345678
  90123456
  78901234
  56789012
  34567890

Do you want me to update your "/home/user/.google_authenticator" file (y/n) y

Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

By default, tokens are good for 30 seconds. In order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with
poor time synchronization, you can increase the window from its default
size of +-1min (window size of 3) to about +-4min (window size of
17 acceptable tokens).
Do you want to do so? (y/n) y

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y

Notice this is not run as root.

Select the default option (y in most cases) for all the questions it asks, and remember to save the emergency scratch codes.

Make a backup of PAM's SSH configuration file /etc/pam.d/sshd:

sudo cp --archive /etc/pam.d/sshd /etc/pam.d/sshd-COPY-$(date +"%Y%m%d%H%M%S")

Now we need to enable it as an authentication method for SSH by adding this line to /etc/pam.d/sshd:

auth       required     pam_google_authenticator.so nullok

Note: Check the google-authenticator-libpam documentation for what nullok means.

For the lazy:

echo -e "\nauth       required     pam_google_authenticator.so nullok         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/pam.d/sshd

Tell SSH to leverage it by adding or editing this line in /etc/ssh/sshd_config:

ChallengeResponseAuthentication yes

For the lazy:

sudo sed -i -r -e "s/^(challengeresponseauthentication .*)$/# \1         # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")/I" /etc/ssh/sshd_config
echo -e "\nChallengeResponseAuthentication yes         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/ssh/sshd_config

Restart ssh:

sudo service sshd restart
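
Note: With the above, public/private key logins skip the 2FA prompt. If you want to require the token even when using keys, one commonly used approach (an assumption -- check your OpenSSH version's documentation and test carefully before logging out) is to require both methods in /etc/ssh/sshd_config:

AuthenticationMethods publickey,keyboard-interactive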

The Basics

Limit Who Can Use sudo

Why

sudo lets accounts run commands as other accounts, including root. We want to make sure that only the accounts we want can use sudo.

Goals

  • sudo privileges limited to those who are in a group we specify

Notes

Steps

Create a group:

sudo groupadd sudousers

Add account(s) to the group:

sudo usermod -a -G sudousers user1
sudo usermod -a -G sudousers user2
sudo usermod -a -G sudousers  ...

You'll need to do this for every account on your server that needs sudo privileges.

Make a backup of the sudo's configuration file /etc/sudoers:

sudo cp --archive /etc/sudoers /etc/sudoers-COPY-$(date +"%Y%m%d%H%M%S")

Edit sudo's configuration file /etc/sudoers:

sudo visudo

Tell sudo to only allow users in the sudousers group to use sudo by adding this line if it is not already there:

%sudousers   ALL=(ALL:ALL) ALL
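
You can verify what an account is allowed to run with sudo -l (replace user1 with one of the accounts you added):

sudo -l -U user1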

Limit Who Can Use su

Why

su also lets accounts run commands as other accounts, including root. We want to make sure that only the accounts we want can use su.

Goals

  • su privileges limited to those who are in a group we specify

References

Steps

Create a group:

sudo groupadd suusers

Add account(s) to the group:

sudo usermod -a -G suusers user1
sudo usermod -a -G suusers user2
sudo usermod -a -G suusers  ...

You'll need to do this for every account on your server that needs su privileges.

Make it so only users in this group can execute /bin/su:

sudo dpkg-statoverride --update --add root suusers 4750 /bin/su
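
Verify the override took effect -- /bin/su should now be mode 4750 and owned by root:suusers:

dpkg-statoverride --list /bin/su
root suusers 4750 /bin/su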

Run applications in a sandbox with FireJail

Why

For many applications, it is better to run them in a sandbox.

Browsers (especially closed-source ones) and e-mail clients are prime candidates.

Goals

  • confine applications in a jail (few safe directories) and block access to the rest of the system

References

Steps

Install the software:

sudo apt install firejail firejail-profiles

Note: for Debian 10 Stable, the official backport is suggested:

sudo apt install -t buster-backports firejail firejail-profiles

Allow an application (installed in /usr/bin or /bin) to run only in a sandbox (a few examples below):

sudo ln -s /usr/bin/firejail /usr/local/bin/google-chrome-stable
sudo ln -s /usr/bin/firejail /usr/local/bin/firefox
sudo ln -s /usr/bin/firejail /usr/local/bin/chromium
sudo ln -s /usr/bin/firejail /usr/local/bin/evolution
sudo ln -s /usr/bin/firejail /usr/local/bin/thunderbird

Run the application as usual (via terminal or launcher) and check that it is running in a jail:

firejail --list

Allow a sandboxed app to run again as it was before (example: firefox):

sudo rm /usr/local/bin/firefox
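
You can also sandbox an app for a single run without creating a symlink, or tighten things further with firejail's options:

# run firefox in its default firejail profile just this once
firejail firefox

# extra-strict: use a throw-away home directory that is discarded on exit
firejail --private firefox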

NTP Client

Why

Many security protocols leverage the time. If your system time is incorrect, it could have negative impacts on your server. An NTP client solves that problem by keeping your system time in-sync with global NTP servers.

How It Works

NTP stands for Network Time Protocol. In the context of this guide, an NTP client on the server is used to update the server time with the official time pulled from official servers. Check https://www.pool.ntp.org/en/ for all of the public NTP servers.

Goals

  • NTP client installed and keeping server time in-sync

References

Steps

Install ntp.

On Debian based systems:

sudo apt install ntp

Make a backup of the NTP client's configuration file /etc/ntp.conf:

sudo cp --archive /etc/ntp.conf /etc/ntp.conf-COPY-$(date +"%Y%m%d%H%M%S")

The default configuration, at least on Debian, is already pretty secure. The only thing we'll want to make sure of is that we're using the pool directive and not any server directives. The pool directive allows the NTP client to stop using a server if it is unresponsive or serving bad time. Do this by commenting out all server directives and adding the below to /etc/ntp.conf.

pool pool.ntp.org iburst

For the lazy:

sudo sed -i -r -e "s/^((server|pool).*)/# \1         # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")/" /etc/ntp.conf
echo -e "\npool pool.ntp.org iburst         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/ntp.conf

Example /etc/ntp.conf:

driftfile /var/lib/ntp/ntp.drift
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
restrict -4 default kod notrap nomodify nopeer noquery limited
restrict -6 default kod notrap nomodify nopeer noquery limited
restrict 127.0.0.1
restrict ::1
restrict source notrap nomodify noquery
pool pool.ntp.org iburst         # added by user on 2019-03-09 @ 10:23:35

Restart ntp:

sudo service ntp restart

Check the status of the ntp service:

sudo systemctl status ntp
● ntp.service - LSB: Start NTP daemon
   Loaded: loaded (/etc/init.d/ntp; generated; vendor preset: enabled)
   Active: active (running) since Sat 2019-03-09 15:19:46 EST; 4s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 1016 ExecStop=/etc/init.d/ntp stop (code=exited, status=0/SUCCESS)
  Process: 1028 ExecStart=/etc/init.d/ntp start (code=exited, status=0/SUCCESS)
    Tasks: 2 (limit: 4915)
   CGroup: /system.slice/ntp.service
           └─1038 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 108:113

Mar 09 15:19:46 host ntpd[1038]: Listen and drop on 0 v6wildcard [::]:123
Mar 09 15:19:46 host ntpd[1038]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 09 15:19:46 host ntpd[1038]: Listen normally on 2 lo 127.0.0.1:123
Mar 09 15:19:46 host ntpd[1038]: Listen normally on 3 enp0s3 10.10.20.96:123
Mar 09 15:19:46 host ntpd[1038]: Listen normally on 4 lo [::1]:123
Mar 09 15:19:46 host ntpd[1038]: Listen normally on 5 enp0s3 [fe80::a00:27ff:feb6:ed8e%2]:123
Mar 09 15:19:46 host ntpd[1038]: Listening on routing socket on fd #22 for interface updates
Mar 09 15:19:47 host ntpd[1038]: Soliciting pool server 108.61.56.35
Mar 09 15:19:48 host ntpd[1038]: Soliciting pool server 69.89.207.199
Mar 09 15:19:49 host ntpd[1038]: Soliciting pool server 45.79.111.114

Check ntp's status:

sudo ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 pool.ntp.org    .POOL.          16 p    -   64    0    0.000    0.000   0.000
*lithium.constan 198.30.92.2      2 u    -   64    1   19.900    4.894   3.951
 ntp2.wiktel.com 212.215.1.157    2 u    2   64    1   48.061   -0.431   0.104

Securing /proc

Why

To quote https://linux-audit.com/linux-system-hardening-adding-hidepid-to-proc/:

When looking in /proc you will discover a lot of files and directories. Many of them are just numbers, which represent the information about a particular process ID (PID). By default, Linux systems are deployed to allow all local users to see all this information. This includes process information from other users. This could include sensitive details that you may not want to share with other users. By applying some filesystem configuration tweaks, we can change this behavior and improve the security of the system.

Note: This may break on some systemd systems. Please see https://github.com/imthenachoman/How-To-Secure-A-Linux-Server/issues/37 for more information. Thanks to nlgranger for sharing.

Goals

  • /proc mounted with hidepid=2 so users can only see information about their processes

References

Steps

Make a backup of /etc/fstab:

sudo cp --archive /etc/fstab /etc/fstab-COPY-$(date +"%Y%m%d%H%M%S")

Add this line to /etc/fstab to have /proc mounted with hidepid=2:

proc     /proc     proc     defaults,hidepid=2     0     0

For the lazy:

echo -e "\nproc     /proc     proc     defaults,hidepid=2     0     0         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/fstab

Reboot the system:

sudo reboot now

Note: Alternatively, you can remount /proc without rebooting with sudo mount -o remount,hidepid=2 /proc
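
To verify the mount options after the remount/reboot (output varies; newer kernels may show hidepid=invisible instead of hidepid=2):

mount | grep ' /proc '
proc on /proc type proc (rw,relatime,hidepid=2)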

Force Accounts To Use Secure Passwords

Why

By default, accounts can use any password they want, including bad ones. pwquality/pam_pwquality addresses this security gap by providing "a way to configure the default password quality requirements for the system passwords" and checking "its strength against a system dictionary and a set of rules for identifying poor choices."

How It Works

On Linux, PAM is responsible for authentication. There are four tasks to PAM that you can read about at https://en.wikipedia.org/wiki/Linux_PAM. This section talks about the password task.

When there is a need to set or change an account password, the password task of PAM handles the request. In this section we will tell PAM's password task to pass the requested new password to libpam-pwquality to make sure it meets our requirements. If the requirements are met, the password is used/set; if not, the request errors out and the user is told why.

Goals

  • enforced strong passwords

Steps

Install libpam-pwquality.

On Debian based systems:

sudo apt install libpam-pwquality

Make a backup of PAM's password configuration file /etc/pam.d/common-password:

sudo cp --archive /etc/pam.d/common-password /etc/pam.d/common-password-COPY-$(date +"%Y%m%d%H%M%S")

Tell PAM to use libpam-pwquality to enforce strong passwords by editing the file /etc/pam.d/common-password and changing the line that starts like this:

password        requisite                       pam_pwquality.so

to this:

password        requisite                       pam_pwquality.so retry=3 minlen=10 difok=3 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1 maxrepeat=3 gecoscheck

The above options are:

  • retry=3 = prompt user 3 times before returning with error.
  • minlen=10 = the minimum length of the password, factoring in any credits (or debits) from these:
    • dcredit=-1 = must have at least one digit
    • ucredit=-1 = must have at least one upper case letter
    • lcredit=-1 = must have at least one lower case letter
    • ocredit=-1 = must have at least one non-alphanumeric character
  • difok=3 = at least 3 characters from the new password cannot have been in the old password
  • maxrepeat=3 = allow a maximum of 3 repeated characters
  • gecoscheck = do not allow passwords containing words from the account's GECOS field (e.g. the user's full name)

For the lazy:

sudo sed -i -r -e "s/^(password\s+requisite\s+pam_pwquality.so)(.*)$/# \1\2         # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")\n\1 retry=3 minlen=10 difok=3 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1 maxrepeat=3 gecoscheck         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")/" /etc/pam.d/common-password
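
To sanity-check password strength without changing a real password, you can use pwscore (an assumption -- on Debian it is in the libpwquality-tools package). Note that pwscore reads /etc/security/pwquality.conf rather than the PAM line above, so it is only a rough check of the dictionary/length rules:

sudo apt install libpwquality-tools
echo 'password123' | pwscore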

Automatic Security Updates and Alerts

Why

It is important to keep a server updated with the latest critical security patches and updates. Otherwise you're at risk of known security vulnerabilities that bad-actors could use to gain unauthorized access to your server.

Unless you plan on checking your server every day, you'll want a way to automatically update the system and/or get emails about available updates.

You don't want to do all updates because with every update there is a risk of something breaking. It is important to do the critical updates but everything else can wait until you have time to do it manually.

Why Not

Automatic and unattended updates may break your system and you may not be near your server to fix it. This would be especially problematic if it broke your SSH access.

Notes

  • Each distribution manages packages and updates differently. So far I only have steps for Debian based systems.
  • Your server will need a way to send e-mails for this to work

Goals

  • Automatic, unattended, updates of critical security patches
  • Automatic emails of remaining pending updates

Debian Based Systems

How It Works

On Debian based systems you can use:

  • unattended-upgrades to automatically do system updates you want (i.e. critical security updates)
  • apt-listchanges to get details about package changes before they are installed/upgraded
  • apticron to get emails for pending package updates

We will use unattended-upgrades to apply critical security patches. We can also apply stable updates since they've already been thoroughly tested by the Debian community.

References

Steps

Install unattended-upgrades, apt-listchanges, and apticron:

sudo apt install unattended-upgrades apt-listchanges apticron

Now we need to configure unattended-upgrades to automatically apply the updates. This is typically done by editing the files /etc/apt/apt.conf.d/20auto-upgrades and /etc/apt/apt.conf.d/50unattended-upgrades that were created by the packages. However, because these files may get overwritten by a future update, we'll create a new file instead. Create the file /etc/apt/apt.conf.d/51myunattended-upgrades and add this:

// Enable the update/upgrade script (0=disable)
APT::Periodic::Enable "1";

// Do "apt-get update" automatically every n-days (0=disable)
APT::Periodic::Update-Package-Lists "1";

// Do "apt-get upgrade --download-only" every n-days (0=disable)
APT::Periodic::Download-Upgradeable-Packages "1";

// Do "apt-get autoclean" every n-days (0=disable)
APT::Periodic::AutocleanInterval "7";

// Send report mail to root
//     0:  no report             (or null string)
//     1:  progress report       (actually any string)
//     2:  + command outputs     (remove -qq, remove 2>/dev/null, add -d)
//     3:  + trace on
APT::Periodic::Verbose "2";
APT::Periodic::Unattended-Upgrade "1";

// Automatically upgrade packages from these
Unattended-Upgrade::Origins-Pattern {
      "o=Debian,a=stable";
      "o=Debian,a=stable-updates";
      "origin=Debian,codename=${distro_codename},label=Debian-Security";
};

// You can specify your own packages to NOT automatically upgrade here
Unattended-Upgrade::Package-Blacklist {
};

// Run dpkg --force-confold --configure -a if an unclean dpkg state is detected, to ensure that updates get installed even when the system was interrupted during a previous run
Unattended-Upgrade::AutoFixInterruptedDpkg "true";

// Perform the upgrade while the machine is running because we won't be shutting our server down often
Unattended-Upgrade::InstallOnShutdown "false";

// Send an email to this address with information about the packages upgraded.
Unattended-Upgrade::Mail "root";

// Always send an e-mail
Unattended-Upgrade::MailOnlyOnError "false";

// Remove all unused dependencies after the upgrade has finished
Unattended-Upgrade::Remove-Unused-Dependencies "true";

// Remove any new unused dependencies after the upgrade has finished
Unattended-Upgrade::Remove-New-Unused-Dependencies "true";

// Automatically reboot WITHOUT CONFIRMATION if the file /var/run/reboot-required is found after the upgrade.
Unattended-Upgrade::Automatic-Reboot "true";

// Automatically reboot even if users are logged in.
Unattended-Upgrade::Automatic-Reboot-WithUsers "true";

Notes:

Run a dry-run of unattended-upgrades to make sure your configuration file is okay:

sudo unattended-upgrade -d --dry-run

If everything is okay, you can let it run whenever it's scheduled to or force a run with unattended-upgrade -d.

Configure apt-listchanges to your liking:

sudo dpkg-reconfigure apt-listchanges

For apticron, the default settings are good enough but you can check them in /etc/apticron/apticron.conf if you want to change them. For example, my configuration looks like this:

EMAIL="root"
NOTIFY_NO_UPDATES="1"

More Secure Random Entropy Pool (WIP)

Why

WIP

How It Works

WIP

Goals

WIP

References

Steps

Install rng-tools.

On Debian based systems:

sudo apt-get install rng-tools

Now we need to set the hardware device used to generate random numbers by adding this to /etc/default/rng-tools:

HRNGDEVICE=/dev/urandom

For the lazy:

echo "HRNGDEVICE=/dev/urandom" | sudo tee -a /etc/default/rng-tools

Restart the service:

sudo systemctl stop rng-tools.service
sudo systemctl start rng-tools.service

Test randomness:
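
One common check (an assumption -- the rngtest utility ships with rng-tools) is to run FIPS 140-2 tests over a sample from /dev/random:

cat /dev/random | rngtest -c 1000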

The Network

Firewall With UFW (Uncomplicated Firewall)

Why

Call me paranoid, and you don't have to agree, but I want to deny all traffic in and out of my server except what I explicitly allow. Why would my server be sending traffic out that I don't know about? And why would external traffic be trying to access my server if I don't know who or what it is? When it comes to good security, my opinion is to reject/deny by default, and allow by exception.

Of course, if you disagree, that is totally fine and you can configure UFW to suit your needs.

Either way, ensuring that only traffic we explicitly allow passes through is the job of a firewall.

How It Works

The Linux kernel provides capabilities to monitor and control network traffic. These capabilities are exposed to the end-user through firewall utilities. On Linux, the most common firewall is iptables. However, iptables is rather complicated and confusing (IMHO). This is where UFW comes in. Think of UFW as a front-end to iptables. It simplifies the process of managing the iptables rules that tell the Linux kernel what to do with network traffic.

UFW works by letting you configure rules that:

  • allow or deny
  • input or output traffic
  • to or from ports

You can create rules by explicitly specifying the ports or with application configurations that specify the ports.

Goals

  • all network traffic, input and output, blocked except those we explicitly allow

Notes

  • As you install other programs, you'll need to enable the necessary ports/applications.

References

Steps

Install ufw.

On Debian based systems:

sudo apt install ufw

Deny all outgoing traffic:

sudo ufw default deny outgoing comment 'deny all outgoing traffic'
Default outgoing policy changed to 'deny'
(be sure to update your rules accordingly)

If you are not as paranoid as me, and don't want to deny all outgoing traffic, you can allow it instead:

sudo ufw default allow outgoing comment 'allow all outgoing traffic'

Deny all incoming traffic:

sudo ufw default deny incoming comment 'deny all incoming traffic'

Obviously we want SSH connections in:

sudo ufw limit in ssh comment 'allow SSH connections in'
Rules updated
Rules updated (v6)

Allow additional traffic as per your needs. Some common use-cases:

# allow traffic out on port 53 -- DNS
sudo ufw allow out 53 comment 'allow DNS calls out'

# allow traffic out on port 123 -- NTP
sudo ufw allow out 123 comment 'allow NTP out'

# allow traffic out for HTTP, HTTPS, or FTP
# apt might need these depending on which sources you're using
sudo ufw allow out http comment 'allow HTTP traffic out'
sudo ufw allow out https comment 'allow HTTPS traffic out'
sudo ufw allow out ftp comment 'allow FTP traffic out'

# allow whois
sudo ufw allow out whois comment 'allow whois'

# allow traffic out on port 68 -- the DHCP client
# you only need this if you're using DHCP
sudo ufw allow out 67 comment 'allow the DHCP client to update'
sudo ufw allow out 68 comment 'allow the DHCP client to update'
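
If you need to remove a rule later, list the rules with their index numbers and delete by index:

sudo ufw status numbered
sudo ufw delete 2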

Start ufw:

sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup

If you want to see a status:

sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     LIMIT       Anywhere                   # allow SSH connections in
22/tcp (v6)                LIMIT       Anywhere (v6)              # allow SSH connections in

53                         ALLOW OUT   Anywhere                   # allow DNS calls out
123                        ALLOW OUT   Anywhere                   # allow NTP out
80/tcp                     ALLOW OUT   Anywhere                   # allow HTTP traffic out
443/tcp                    ALLOW OUT   Anywhere                   # allow HTTPS traffic out
21/tcp                     ALLOW OUT   Anywhere                   # allow FTP traffic out
Mail submission            ALLOW OUT   Anywhere                   # allow mail out
43/tcp                     ALLOW OUT   Anywhere                   # allow whois
53 (v6)                    ALLOW OUT   Anywhere (v6)              # allow DNS calls out
123 (v6)                   ALLOW OUT   Anywhere (v6)              # allow NTP out
80/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow HTTP traffic out
443/tcp (v6)               ALLOW OUT   Anywhere (v6)              # allow HTTPS traffic out
21/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow FTP traffic out
Mail submission (v6)       ALLOW OUT   Anywhere (v6)              # allow mail out
43/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow whois

or

sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), deny (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     LIMIT IN    Anywhere                   # allow SSH connections in
22/tcp (v6)                LIMIT IN    Anywhere (v6)              # allow SSH connections in

53                         ALLOW OUT   Anywhere                   # allow DNS calls out
123                        ALLOW OUT   Anywhere                   # allow NTP out
80/tcp                     ALLOW OUT   Anywhere                   # allow HTTP traffic out
443/tcp                    ALLOW OUT   Anywhere                   # allow HTTPS traffic out
21/tcp                     ALLOW OUT   Anywhere                   # allow FTP traffic out
587/tcp (Mail submission)  ALLOW OUT   Anywhere                   # allow mail out
43/tcp                     ALLOW OUT   Anywhere                   # allow whois
53 (v6)                    ALLOW OUT   Anywhere (v6)              # allow DNS calls out
123 (v6)                   ALLOW OUT   Anywhere (v6)              # allow NTP out
80/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow HTTP traffic out
443/tcp (v6)               ALLOW OUT   Anywhere (v6)              # allow HTTPS traffic out
21/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow FTP traffic out
587/tcp (Mail submission (v6)) ALLOW OUT   Anywhere (v6)              # allow mail out
43/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow whois

Default Applications

ufw ships with some default applications. You can see them with:

sudo ufw app list
Available applications:
  AIM
  Bonjour
  CIFS
  DNS
  Deluge
  IMAP
  IMAPS
  IPP
  KTorrent
  Kerberos Admin
  Kerberos Full
  Kerberos KDC
  Kerberos Password
  LDAP
  LDAPS
  LPD
  MSN
  MSN SSL
  Mail submission
  NFS
  OpenSSH
  POP3
  POP3S
  PeopleNearby
  SMTP
  SSH
  Socks
  Telnet
  Transmission
  Transparent Proxy
  VNC
  WWW
  WWW Cache
  WWW Full
  WWW Secure
  XMPP
  Yahoo
  qBittorrent
  svnserve

To get details about the app, like which ports it includes, type:

sudo ufw app info [app name]
sudo ufw app info DNS
Profile: DNS
Title: Internet Domain Name Server
Description: Internet Domain Name Server

Port:
  53

Custom Application

If you don't want to create rules by explicitly providing the port number(s), you can create your own application configurations. To do this, create a file in /etc/ufw/applications.d.

For example, here is what you would use for Plex:

cat /etc/ufw/applications.d/plexmediaserver
[PlexMediaServer]
title=Plex Media Server
description=This opens up PlexMediaServer for http (32400), upnp, and autodiscovery.
ports=32469/tcp|32413/udp|1900/udp|32400/tcp|32412/udp|32410/udp|32414/udp|32400/udp
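
For the lazy:

cat << EOF | sudo tee /etc/ufw/applications.d/plexmediaserver
[PlexMediaServer]
title=Plex Media Server
description=This opens up PlexMediaServer for http (32400), upnp, and autodiscovery.
ports=32469/tcp|32413/udp|1900/udp|32400/tcp|32412/udp|32410/udp|32414/udp|32400/udp
EOF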

Then you can enable it like any other app:

sudo ufw allow plexmediaserver

iptables Intrusion Detection And Prevention with PSAD

Why

Even if you have a firewall to guard your doors, it is possible to try brute-forcing your way in any of the guarded doors. We want to monitor all network activity to detect potential intrusion attempts, such as repeated attempts to get in, and block them.

How It Works

I can't explain it any better than user FINESEC from https://serverfault.com/ did at: https://serverfault.com/a/447604/289829.

Fail2BAN scans log files of various applications such as apache, ssh or ftp and automatically bans IPs that show the malicious signs such as automated login attempts. PSAD on the other hand scans iptables and ip6tables log messages (typically /var/log/messages) to detect and optionally block scans and other types of suspect traffic such as DDoS or OS fingerprinting attempts. It's ok to use both programs at the same time because they operate on different level.

Since we're already using UFW, we'll follow the awesome instructions by netson at https://gist.github.com/netson/c45b2dc4e835761fbccc to make PSAD work with UFW.

References

Steps

Install psad.

On Debian based systems:

sudo apt install psad

Make a backup of psad's configuration file /etc/psad/psad.conf:

sudo cp --archive /etc/psad/psad.conf /etc/psad/psad.conf-COPY-$(date +"%Y%m%d%H%M%S")

Review and update configuration options in /etc/psad/psad.conf. Pay special attention to these:

SettingSet To
EMAIL_ADDRESSESyour email address(s)
HOSTNAMEyour server's hostname
ENABLE_PSADWATCHDENABLE_PSADWATCHD Y;
ENABLE_AUTO_IDSENABLE_AUTO_IDS Y;
ENABLE_AUTO_IDS_EMAILSENABLE_AUTO_IDS_EMAILS Y;
EXPECT_TCP_OPTIONSEXPECT_TCP_OPTIONS Y;

Check psad's documentation at http://www.cipherdyne.org/psad/docs/config.html for more details on the configuration file.

Now we need to make some changes to ufw so it works with psad by telling ufw to log all traffic so psad can analyze it. Do this by editing the two files below and adding the lines shown after the list to the end of each file, just before the COMMIT line.

Make backups:

sudo cp --archive /etc/ufw/before.rules /etc/ufw/before.rules-COPY-$(date +"%Y%m%d%H%M%S")
sudo cp --archive /etc/ufw/before6.rules /etc/ufw/before6.rules-COPY-$(date +"%Y%m%d%H%M%S")

Edit the files:

  • /etc/ufw/before.rules
  • /etc/ufw/before6.rules
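
Per the netson gist referenced above, the lines to add are iptables LOG rules so that all traffic gets logged for psad to analyze:

# log all traffic so psad can analyze it
-A INPUT -j LOG
-A FORWARD -j LOG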

Now we need to reload/restart ufw and psad for the changes to take effect:

sudo ufw reload

sudo psad -R
sudo psad --sig-update
sudo psad -H

Analyze iptables rules for errors:

sudo psad --fw-analyze
[+] Parsing INPUT chain rules.
[+] Parsing INPUT chain rules.
[+] Firewall config looks good.
[+] Completed check of firewall ruleset.
[+] Results in /var/log/psad/fw_check
[+] Exiting.

Note: If there were any issues you will get an e-mail with the error.

Check the status of psad:

sudo psad --Status
[-] psad: pid file /var/run/psad/psadwatchd.pid does not exist for psadwatchd on vm
[+] psad_fw_read (pid: 3444)  %CPU: 0.0  %MEM: 2.2
    Running since: Sat Feb 16 01:03:09 2019

[+] psad (pid: 3435)  %CPU: 0.2  %MEM: 2.7
    Running since: Sat Feb 16 01:03:09 2019
    Command line arguments: [none specified]
    Alert email address(es): root@localhost

[+] Version: psad v2.4.3

[+] Top 50 signature matches:
        [NONE]

[+] Top 25 attackers:
        [NONE]

[+] Top 20 scanned ports:
        [NONE]

[+] iptables log prefix counters:
        [NONE]

    Total protocol packet counters:

[+] IP Status Detail:
        [NONE]

    Total scan sources: 0
    Total scan destinations: 0

[+] These results are available in: /var/log/psad/status.out

Application Intrusion Detection And Prevention With Fail2Ban

Why

UFW tells your server what doors to board up so nobody can see them, and what doors to allow authorized users through. PSAD monitors network activity to detect and prevent potential intrusions -- repeated attempts to get in.

But what about the applications/services your server is running, like SSH and Apache, where your firewall is configured to allow access in? Even though access may be allowed, that doesn't mean all access attempts are valid and harmless. What if someone tries to brute-force their way in to a web-app you're running on your server? This is where Fail2ban comes in.

How It Works

Fail2ban monitors the logs of your applications (like SSH and Apache) to detect and prevent potential intrusions. It will monitor network traffic/logs and prevent intrusions by blocking suspicious activity (e.g. multiple successive failed connections in a short time-span).

Goals

  • network monitoring for suspicious activity with automatic banning of offending IPs

Notes

  • As of right now, the only thing running on this server is SSH so we'll want Fail2ban to monitor SSH and ban as necessary.
  • As you install other programs, you'll need to create/configure the appropriate jails and enable them.

References

Steps

Install fail2ban.

On Debian based systems:

sudo apt install fail2ban

We don't want to edit /etc/fail2ban/fail2ban.conf or /etc/fail2ban/jail.conf because a future update may overwrite those so we'll create a local copy instead. Create the file /etc/fail2ban/jail.local and add this to it after replacing [LAN SEGMENT] and [your email] with the appropriate values:

[DEFAULT]
# the IP address range we want to ignore
ignoreip = 127.0.0.1/8 [LAN SEGMENT]

# who to send e-mail to
destemail = [your e-mail]

# who is the email from
sender = [your e-mail]

# since we're using exim4 to send emails
mta = mail

# get email alerts
action = %(action_mwl)s

Note: Your server will need to be able to send e-mails so Fail2ban can let you know of suspicious activity and when it banned an IP.

We need to create a jail for SSH that tells fail2ban to look at SSH logs and use ufw to ban/unban IPs as needed. Create a jail for SSH by creating the file /etc/fail2ban/jail.d/ssh.local and adding this to it:

[sshd]
enabled = true
banaction = ufw
port = ssh
filter = sshd
logpath = %(sshd_log)s
maxretry = 5

For the lazy:

cat << EOF | sudo tee /etc/fail2ban/jail.d/ssh.local
[sshd]
enabled = true
banaction = ufw
port = ssh
filter = sshd
logpath = %(sshd_log)s
maxretry = 5
EOF

In the above we tell fail2ban to use ufw as the banaction. Fail2ban ships with an action configuration file for ufw. You can see it in /etc/fail2ban/action.d/ufw.conf.

Enable fail2ban:

sudo fail2ban-client start
sudo fail2ban-client reload
sudo fail2ban-client add sshd # This may fail on some systems if the sshd jail was added by default

To check the status:

sudo fail2ban-client status
Status
|- Number of jail:      1
`- Jail list:   sshd
sudo fail2ban-client status sshd
Status for the jail: sshd
|- Filter
|  |- Currently failed: 0
|  |- Total failed:     0
|  `- File list:        /var/log/auth.log
`- Actions
   |- Currently banned: 0
   |- Total banned:     0
   `- Banned IP list:

Custom Jails

I have not needed to create a custom jail yet. Once I do, and I figure out how, I will update this guide. Or, if you know how please help contribute.

Unban an IP

To unban an IP use this command:

fail2ban-client set [jail] unbanip [IP]

[jail] is the name of the jail that has the banned IP and [IP] is the IP address you want to unban. For example, to unban 192.168.1.100 from SSH you would do:

fail2ban-client set sshd unbanip 192.168.1.100

The Auditing

File/Folder Integrity Monitoring With AIDE (WIP)

Why

WIP

How It Works

WIP

Goals

WIP

References

Steps

Install AIDE.

On Debian based systems:

sudo apt install aide

Make a backup of AIDE's defaults file:

sudo cp -p /etc/default/aide /etc/default/aide-COPY-$(date +"%Y%m%d%H%M%S")

Go through /etc/default/aide and set AIDE's defaults per your requirements. If you want AIDE to run daily and e-mail you, be sure to set CRON_DAILY_RUN to yes.

Make a backup of AIDE's configuration files:

sudo cp -pr /etc/aide /etc/aide-COPY-$(date +"%Y%m%d%H%M%S")

On Debian based systems:

  • AIDE's configuration files are in /etc/aide/aide.conf.d/.
  • You'll want to go through AIDE's documentation and the configuration files to set them per your requirements.
  • If you want new settings, to monitor a new folder for example, you'll want to add them to /etc/aide/aide.conf or /etc/aide/aide.conf.d/.

Create a new database, and install it.

On Debian based systems:

sudo aideinit
Running aide --init...
Start timestamp: 2019-04-01 21:23:37 -0400 (AIDE 0.16)
AIDE initialized database at /var/lib/aide/aide.db.new
Verbose level: 6

Number of entries:      25973

---------------------------------------------------
The attributes of the (uncompressed) database(s):
---------------------------------------------------

/var/lib/aide/aide.db.new
  RMD160   : moyQ1YskQQbidX+Lusv3g2wf1gQ=
  TIGER    : 7WoOgCrXzSpDrlO6I3PyXPj1gRiaMSeo
  SHA256   : gVx8Fp7r3800WF2aeXl+/KHCzfGsNi7O
             g16VTPpIfYQ=
  SHA512   : GYfa0DJwWgMLl4Goo5VFVOhu4BphXCo3
             rZnk49PYztwu50XjaAvsVuTjJY5uIYrG
             tV+jt3ELvwFzGefq4ZBNMg==
  CRC32    : /cusZw==
  HAVAL    : E/i5ceF3YTjwenBfyxHEsy9Kzu35VTf7
             CPGQSW4tl14=
  GOST     : n5Ityzxey9/1jIs7LMc08SULF1sLBFUc
             aMv7Oby604A=


End timestamp: 2019-04-01 21:24:45 -0400 (run time: 1m 8s)

Test everything works with no changes.

On Debian based systems:

sudo aide.wrapper --check
Start timestamp: 2019-04-01 21:24:45 -0400 (AIDE 0.16)
AIDE found NO differences between database and filesystem. Looks okay!!
Verbose level: 6

Number of entries:      25973

---------------------------------------------------
The attributes of the (uncompressed) database(s):
---------------------------------------------------

/var/lib/aide/aide.db
  RMD160   : moyQ1YskQQbidX+Lusv3g2wf1gQ=
  TIGER    : 7WoOgCrXzSpDrlO6I3PyXPj1gRiaMSeo
  SHA256   : gVx8Fp7r3800WF2aeXl+/KHCzfGsNi7O
             g16VTPpIfYQ=
  SHA512   : GYfa0DJwWgMLl4Goo5VFVOhu4BphXCo3
             rZnk49PYztwu50XjaAvsVuTjJY5uIYrG
             tV+jt3ELvwFzGefq4ZBNMg==
  CRC32    : /cusZw==
  HAVAL    : E/i5ceF3YTjwenBfyxHEsy9Kzu35VTf7
             CPGQSW4tl14=
  GOST     : n5Ityzxey9/1jIs7LMc08SULF1sLBFUc
             aMv7Oby604A=


End timestamp: 2019-04-01 21:26:03 -0400 (run time: 1m 18s)

Test everything works after making some changes.

On Debian based systems:

sudo touch /etc/test.sh
sudo touch /root/test.sh

sudo aide.wrapper --check
Start timestamp: 2019-04-01 21:37:37 -0400 (AIDE 0.16)
AIDE found differences between database and filesystem!!
Verbose level: 6

Summary:
  Total number of entries:      25972
  Added entries:                2
  Removed entries:              0
  Changed entries:              1

---------------------------------------------------
Added entries:
---------------------------------------------------

f++++++++++++++++: /etc/test.sh
f++++++++++++++++: /root/test.sh

---------------------------------------------------
Changed entries:
---------------------------------------------------

d =.... mc.. .. .: /root

---------------------------------------------------
Detailed information about changes:
---------------------------------------------------

Directory: /root
  Mtime    : 2019-04-01 21:35:07 -0400        | 2019-04-01 21:37:36 -0400
  Ctime    : 2019-04-01 21:35:07 -0400        | 2019-04-01 21:37:36 -0400


---------------------------------------------------
The attributes of the (uncompressed) database(s):
---------------------------------------------------

/var/lib/aide/aide.db
  RMD160   : qF9WmKaf2PptjKnhcr9z4ueCPTY=
  TIGER    : zMo7MvvYJcq1hzvTQLPMW7ALeFiyEqv+
  SHA256   : LSLLVjjV6r8vlSxlbAbbEsPcQUB48SgP
             pdVqEn6ZNbQ=
  SHA512   : Qc4U7+ZAWCcitapGhJ1IrXCLGCf1IKZl
             02KYL1gaZ0Fm4dc7xLqjiquWDMSEbwzW
             oz49NCquqGz5jpMIUy7UxA==
  CRC32    : z8ChEA==
  HAVAL    : YapzS+/cdDwLj3kHJEq8fufLp3DPKZDg
             U12KCSkrO7Y=
  GOST     : 74sLV4HkTig+GJhokvxZQm7CJD/NR0mG
             6jV7zdt5AXQ=


End timestamp: 2019-04-01 21:38:50 -0400 (run time: 1m 13s)

Then remove the test files and re-initialize the database:

sudo rm /etc/test.sh
sudo rm /root/test.sh

sudo aideinit -y -f

That's it. If you set CRON_DAILY_RUN to yes in /etc/default/aide then cron will execute /etc/cron.daily/aide every day and e-mail you the output.

Updating The Database

Every time you make changes to files/folders that AIDE monitors, you will need to update the database to capture those changes. To do that on Debian based systems:

sudo aideinit -y -f

Anti-Virus Scanning With ClamAV (WIP)

Why

WIP

How It Works

  • ClamAV is a virus scanner
  • ClamAV-Freshclam is a service that keeps the virus definitions updated
  • ClamAV-Daemon keeps the clamd process running to make scanning faster

Goals

WIP

Notes

  • These instructions do not tell you how to enable the ClamAV daemon service to ensure clamd is running all the time. clamd is mainly useful if you're running a mail server; it does not provide real-time monitoring of files. Instead, you'd want to scan files manually or on a schedule.

References

Steps

Install ClamAV.

On Debian based systems:

sudo apt install clamav clamav-freshclam clamav-daemon

Make a backup of clamav-freshclam's configuration file /etc/clamav/freshclam.conf:

sudo cp --archive /etc/clamav/freshclam.conf /etc/clamav/freshclam.conf-COPY-$(date +"%Y%m%d%H%M%S")

clamav-freshclam's default settings are probably good enough but if you want to change them, you can either edit the file /etc/clamav/freshclam.conf or use dpkg-reconfigure:

sudo dpkg-reconfigure clamav-freshclam

Note: The default settings will update the definitions 24 times a day. To change the interval, check the Checks setting in /etc/clamav/freshclam.conf or use dpkg-reconfigure.
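For example, a minimal sketch that halves the frequency to 12 checks a day, assuming a Checks line already exists in the file:

sudo sed -i -r -e 's/^Checks .*/Checks 12/' /etc/clamav/freshclam.conf
sudo service clamav-freshclam restart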

Start the clamav-freshclam service:

sudo service clamav-freshclam start

You can make sure clamav-freshclam is running:

sudo service clamav-freshclam status
● clamav-freshclam.service - ClamAV virus database updater
   Loaded: loaded (/lib/systemd/system/clamav-freshclam.service; enabled; vendor preset: enabled)   Active: active (running) since Sat 2019-03-16 22:57:07 EDT; 2min 13s ago
     Docs: man:freshclam(1)
           man:freshclam.conf(5)
           https://www.clamav.net/documents
 Main PID: 1288 (freshclam)
   CGroup: /system.slice/clamav-freshclam.service
           └─1288 /usr/bin/freshclam -d --foreground=true

Mar 16 22:57:08 host freshclam[1288]: Sat Mar 16 22:57:08 2019 -> ^Local version: 0.100.2 Recommended version: 0.101.1
Mar 16 22:57:08 host freshclam[1288]: Sat Mar 16 22:57:08 2019 -> DON'T PANIC! Read https://www.clamav.net/documents/upgrading-clamav
Mar 16 22:57:15 host freshclam[1288]: Sat Mar 16 22:57:15 2019 -> Downloading main.cvd [100%]
Mar 16 22:57:38 host freshclam[1288]: Sat Mar 16 22:57:38 2019 -> main.cvd updated (version: 58, sigs: 4566249, f-level: 60, builder: sigmgr)
Mar 16 22:57:40 host freshclam[1288]: Sat Mar 16 22:57:40 2019 -> Downloading daily.cvd [100%]
Mar 16 22:58:13 host freshclam[1288]: Sat Mar 16 22:58:13 2019 -> daily.cvd updated (version: 25390, sigs: 1520006, f-level: 63, builder: raynman)
Mar 16 22:58:14 host freshclam[1288]: Sat Mar 16 22:58:14 2019 -> Downloading bytecode.cvd [100%]
Mar 16 22:58:16 host freshclam[1288]: Sat Mar 16 22:58:16 2019 -> bytecode.cvd updated (version: 328, sigs: 94, f-level: 63, builder: neo)
Mar 16 22:58:24 host freshclam[1288]: Sat Mar 16 22:58:24 2019 -> Database updated (6086349 signatures) from db.local.clamav.net (IP: 104.16.219.84)
Mar 16 22:58:24 host freshclam[1288]: Sat Mar 16 22:58:24 2019 -> ^Clamd was NOT notified: Can't connect to clamd through /var/run/clamav/clamd.ctl: No such file or directory

Note: Don't worry about that Local version line. Check https://serverfault.com/questions/741299/is-there-a-way-to-keep-clamav-updated-on-debian-8 for more details.

Make a backup of clamav-daemon's configuration file /etc/clamav/clamd.conf:

sudo cp --archive /etc/clamav/clamd.conf /etc/clamav/clamd.conf-COPY-$(date +"%Y%m%d%H%M%S")

You can change clamav-daemon's settings by editing the file /etc/clamav/clamd.conf or using dpkg-reconfigure:

sudo dpkg-reconfigure clamav-daemon

Scanning Files/Folders

  • To scan files/folders use the clamscan program.
  • clamscan runs as the user it is executed as, so it needs read permissions on the files/folders it is scanning.
  • Using clamscan as root is dangerous because if a file is in fact a virus there is a risk that it could exploit root privileges.
  • To scan a file: clamscan /path/to/file.
  • To scan a directory: clamscan -r /path/to/folder.
  • You can use the -i switch to only print infected files.
  • Check clamscan's man pages for other switches/options. A sample scheduled scan follows below.
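Since clamd isn't providing real-time monitoring here, a scheduled scan is the usual approach. Below is a minimal sketch of a daily cron job; the script name, scanned folder, and recipient are examples you'd change for your setup:

cat << 'EOF' | sudo tee /etc/cron.daily/clamscan-report
#!/bin/sh
# recursively scan /home and e-mail a report of infected files to root
# note: this runs as root; per the warning above, consider running it as a
# less privileged user that still has read access to the scanned files
clamscan -r -i /home 2>&1 | mail -s "Daily ClamAV scan of /home" root
EOF
sudo chmod +x /etc/cron.daily/clamscan-report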

Rootkit Detection With Rkhunter (WIP)

Why

WIP

How It Works

WIP

Goals

WIP

References

Steps

Install Rkhunter.

On Debian based systems:

sudo apt install rkhunter

Make a backup of rkhunter's defaults file:

sudo cp -p /etc/default/rkhunter /etc/default/rkhunter-COPY-$(date +"%Y%m%d%H%M%S")

rkhunter's configuration file is /etc/rkhunter.conf. Instead of making changes to it directly, create and use the file /etc/rkhunter.conf.local:

sudo cp -p /etc/rkhunter.conf /etc/rkhunter.conf.local

Go through the configuration file /etc/rkhunter.conf.local and set the options to your requirements. My recommendations:

  • UPDATE_MIRRORS=1
  • MIRRORS_MODE=0
  • MAIL-ON-WARNING=root
  • COPY_LOG_ON_ERROR=1 (to save a copy of the log if there is an error)
  • PKGMGR=... (set to the appropriate value per the documentation)
  • PHALANX2_DIRTEST=1 (read the documentation for why)
  • WEB_CMD="" (this addresses an issue with the Debian package that disables rkhunter's ability to self-update)
  • USE_LOCKING=1 (to prevent issues with rkhunter running multiple times)
  • SHOW_SUMMARY_WARNINGS_NUMBER=1 (to see the actual number of warnings found)

You want rkhunter to run every day and e-mail you the results. You can write your own script or check https://www.tecmint.com/install-rootkit-hunter-scan-for-rootkits-backdoors-in-linux/ for a sample cron script.
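As a starting point, here is a minimal sketch of such a script (the script name is made up, and Debian also ships its own cron scripts, covered next):

cat << 'EOF' | sudo tee /etc/cron.daily/rkhunter-report
#!/bin/sh
# run rkhunter non-interactively and e-mail only the warnings to root
OUTPUT=$(/usr/bin/rkhunter --cronjob --report-warnings-only)
if [ -n "$OUTPUT" ]; then
    echo "$OUTPUT" | mail -s "rkhunter warnings on $(hostname)" root
fi
EOF
sudo chmod +x /etc/cron.daily/rkhunter-report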

On Debian based systems, rkhunter comes with cron scripts. To enable them, check /etc/default/rkhunter or use dpkg-reconfigure and answer Yes to all of the questions:

sudo dpkg-reconfigure rkhunter

After you've finished with all of the changes, make sure all the settings are valid:

sudo rkhunter -C

Update rkhunter and its database:

sudo rkhunter --versioncheck
sudo rkhunter --update
sudo rkhunter --propupd

If you want to do a manual scan and see the output:

sudo rkhunter --check

Rootkit Detection With chkrootkit (WIP)

Why

WIP

How It Works

WIP

Goals

WIP

References

Steps

Install chkrootkit.

On Debian based systems:

sudo apt install chkrootkit

Do a manual scan:

sudo chkrootkit
ROOTDIR is `/'
Checking `amd'...                                           not found
Checking `basename'...                                      not infected
Checking `biff'...                                          not found
Checking `chfn'...                                          not infected
Checking `chsh'...                                          not infected
...
Checking `scalper'...                                       not infected
Checking `slapper'...                                       not infected
Checking `z2'...                                            chklastlog: nothing deleted
Checking `chkutmp'...                                       chkutmp: nothing deleted
Checking `OSX_RSPLUG'...                                    not infected

Make a backup of chkrootkit's configuration file /etc/chkrootkit.conf:

sudo cp --archive /etc/chkrootkit.conf /etc/chkrootkit.conf-COPY-$(date +"%Y%m%d%H%M%S")

You want chkrootkit to run every day and e-mail you the result.

On Debian based systems, chkrootkit comes with cron scripts. To enable them, check /etc/chkrootkit.conf or use dpkg-reconfigure and answer Yes to the first question:

sudo dpkg-reconfigure chkrootkit

logwatch - system log analyzer and reporter

Why

Your server will be generating a lot of logs that may contain important information. Unless you plan on checking your server every day, you'll want a way to get an e-mail summary of your server's logs. To accomplish this we'll use logwatch.

How It Works

logwatch scans system log files and summarizes them. You can run it directly from the command line or schedule it to run on a recurring schedule. logwatch uses service files to know how to read/summarize a log file. You can see all of the stock service files in /usr/share/logwatch/scripts/services.

logwatch's configuration file /usr/share/logwatch/default.conf/logwatch.conf specifies default options. You can override them via command line arguments.

Goals

  • Logwatch configured to send a daily e-mail summary of all of the server's status and logs

Notes

References

Steps

Install logwatch.

On Debian based systems:

sudo apt install logwatch

To see a sample of what logwatch collects you can run it directly:

sudo /usr/sbin/logwatch --output stdout --format text --range yesterday --service all

 ################### Logwatch 7.4.3 (12/07/16) ####################
        Processing Initiated: Mon Mar  4 00:05:50 2019
        Date Range Processed: yesterday
                              ( 2019-Mar-03 )
                              Period is day.
        Detail Level of Output: 5
        Type of Output/Format: stdout / text
        Logfiles for Host: host
 ##################################################################

 --------------------- Cron Begin ------------------------
...
...
 ---------------------- Disk Space End -------------------------


 ###################### Logwatch End #########################

Go through logwatch's self-documented configuration file /usr/share/logwatch/default.conf/logwatch.conf before continuing. There is no need to change anything here, but pay special attention to the Output, Format, MailTo, Range, and Service settings as those are the ones we'll be using. For our purposes, instead of specifying our options in the configuration file, we will pass them as command line arguments in the daily cron job that executes logwatch. That way, if the configuration file is ever modified (e.g. during an update), our options will still be there.

Make a backup of logwatch's daily cron file /etc/cron.daily/00logwatch and unset the execute bit:

sudo cp --archive /etc/cron.daily/00logwatch /etc/cron.daily/00logwatch-COPY-$(date +"%Y%m%d%H%M%S")
sudo chmod -x /etc/cron.daily/00logwatch-COPY*

By default, logwatch outputs to stdout. Since the goal is to get a daily e-mail, we need to change the output type that logwatch uses to send e-mail instead. We could do this through the configuration file above, but that would apply to every time it is run -- even when we run it manually and want to see the output to the screen. Instead, we'll change the cron job that executes logwatch to send e-mail. This way, when run manually, we'll still get output to stdout and when run by cron, it'll send an e-mail. We'll also make sure it checks for all services, and change the output format to html so it's easier to read regardless of what the configuration file says. In the file /etc/cron.daily/00logwatch find the execute line and change it to:

/usr/sbin/logwatch --output mail --format html --mailto root --range yesterday --service all

The file should then look like this:
#!/bin/bash

#Check if removed-but-not-purged
test -x /usr/share/logwatch/scripts/logwatch.pl || exit 0

#execute
/usr/sbin/logwatch --output mail --format html --mailto root --range yesterday --service all

#Note: It's possible to force the recipient in above command
#Just pass --mailto address@a.com instead of --output mail

For the lazy:

sudo sed -i -r -e "s,^($(sudo which logwatch).*?),# \1         # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")\n$(sudo which logwatch) --output mail --format html --mailto root --range yesterday --service all         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")," /etc/cron.daily/00logwatch

You can test the cron job by executing it:

sudo /etc/cron.daily/00logwatch

Note: If logwatch fails to deliver mail because the e-mail has long lines, please check https://blog.dhampir.no/content/exim4-line-length-in-debian-stretch-mail-delivery-failed-returning-message-to-sender as documented in issue #29. If you followed Gmail and Exim4 As MTA With Implicit TLS then we already took care of this in step #7.

ss - Seeing Ports Your Server Is Listening On

Why

Ports are how applications, services, and processes communicate with each other -- either locally within your server or with other devices on the network. When you have an application or service (like SSH or Apache) running on your server, they listen for requests on specific ports.

Obviously we don't want your server listening on ports we don't know about. We'll use ss to see all the ports that services are listening on. This will help us track down and stop rogue, potentially dangerous, services.

Goals

  • find out what non-localhost ports are open and listening for connections

References

Steps

To see all the ports listening for traffic:

sudo ss -lntup
Netid  State      Recv-Q Send-Q     Local Address:Port     Peer Address:Port
udp    UNCONN     0      0                      *:68                  *:*        users:(("dhclient",pid=389,fd=6))
tcp    LISTEN     0      128                    *:22                  *:*        users:(("sshd",pid=4390,fd=3))
tcp    LISTEN     0      128                   :::22                 :::*        users:(("sshd",pid=4390,fd=4))

Switch Explanations:

  • l = display listening sockets
  • n = do not try to resolve service names
  • t = display TCP sockets
  • u = display UDP sockets
  • p = show process information

If you see anything suspicious, like a port you're not aware of or a process you don't know, investigate and remediate as necessary.

Lynis - Linux Security Auditing

Why

From https://cisofy.com/lynis/:

Lynis is a battle-tested security tool for systems running Linux, macOS, or Unix-based operating system. It performs an extensive health scan of your systems to support system hardening and compliance testing.

Goals

  • Lynis installed

Notes

References

Steps

Install lynis. https://cisofy.com/lynis/#installation has detailed instructions on how to install it for your distribution.

On Debian based systems, using CISOFY's community software repository:

sudo apt install apt-transport-https ca-certificates host
wget -O - https://packages.cisofy.com/keys/cisofy-software-public.key | sudo apt-key add -
echo "deb https://packages.cisofy.com/community/lynis/deb/ stable main" | sudo tee /etc/apt/sources.list.d/cisofy-lynis.list
sudo apt update
sudo apt install lynis host

Update it:

sudo lynis update info

Run a security audit:

sudo lynis audit system

This will scan your server, report its audit findings, and at the end it will give you suggestions. Spend some time going through the output and address gaps as necessary.
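If you want to review just the suggestions again later, you can pull them out of Lynis' report file (this assumes the default report location /var/log/lynis-report.dat):

sudo grep '^suggestion' /var/log/lynis-report.dat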

OSSEC - Host Intrusion Detection

Why

From https://github.com/ossec/ossec-hids

OSSEC is a full platform to monitor and control your systems. It mixes together all the aspects of HIDS (host-based intrusion detection), log monitoring and SIM/SIEM together in a simple, powerful and open source solution.

Goals

  • OSSEC-HIDS installed

References

Steps

Install OSSEC-HIDS from source:

sudo apt install libz-dev libssl-dev libpcre2-dev build-essential
wget https://github.com/ossec/ossec-hids/archive/3.6.0.tar.gz
tar xzf 3.6.0.tar.gz
cd ossec-hids-3.6.0/
sudo ./install.sh

Useful commands:

Agent information

 sudo /var/ossec/bin/agent_control -i <AGENT_ID>

The AGENT_ID is 000 by default; to confirm it, you can list the agents with sudo /var/ossec/bin/agent_control -l.

Run integrity/rootkit checking

By default, OSSEC runs a rootkit check every 2 hours.

 sudo /var/ossec/bin/agent_control -u <AGENT_ID> -r 

Alerts

  • All: tail -f /var/ossec/logs/alerts/alerts.log
  • Integrity check: sudo cat /var/ossec/logs/alerts/alerts.log | grep -A4 -i integrity
  • Rootkit check: sudo cat /var/ossec/logs/alerts/alerts.log | grep -A4 "rootcheck,"

The Danger Zone

Proceed At Your Own Risk

This section covers things that are high risk because there is a possibility they can make your system unusable, or that are considered unnecessary by many because the risks outweigh any rewards.

!! PROCEED AT YOUR OWN RISK !!

!! PROCEED AT YOUR OWN RISK !!

Linux Kernel sysctl Hardening

!! PROCEED AT YOUR OWN RISK !!

Why

The kernel is the brains of a Linux system. Securing it just makes sense.

Why Not

Changing kernel settings with sysctl is risky and could break your server. If you don't know what you are doing, don't have the time to debug issues, or just don't want to take the risks, I would advise against following these steps.

Disclaimer

I am not as knowledgeable about hardening/securing a Linux kernel as I'd like. As much as I hate to admit it, I do not know what all of these settings do. My understanding is that most of them are general kernel hardening and performance, and the others are to protect against spoofing and DOS attacks.

In fact, since I am not 100% sure exactly what each setting does, I took recommended settings from numerous sites (all linked in the references below) and combined them to figure out what should be set. I figure if multiple reputable sites mention the same setting, it's probably safe.

If you have a better understanding of what these settings do, or have any other feedback/advice on them, please let me know.

I won't provide For the lazy code in this section.

Notes

  • Documentation on all the sysctl settings/keys is severely lacking. The documentation I can find seems to reference the 2.2 version kernel. I could not find anything newer. If you know where I can, please let me know.
  • The reference sites listed below have more comments on what each setting does.

References

Steps

The sysctl settings can be found in the linux-kernel-sysctl-hardening.md file in this repo.

Before you make a kernel sysctl change permanent, you can test it with the sysctl command:

sudo sysctl -w [key=value]

Example:

sudo sysctl -w kernel.ctrl-alt-del=0

Note: There are no spaces in key=value, including before and after the equal sign.

Once you have tested a setting, and made sure it works without breaking your server, you can make it permanent by adding the values to /etc/sysctl.conf. For example:

$ sudo cat /etc/sysctl.conf
kernel.ctrl-alt-del = 0
fs.file-max = 65535
...
kernel.sysrq = 0

After updating the file you can reload the settings or reboot. To reload:

sudo sysctl -p

Note: If sysctl has trouble writing any settings then sysctl -w or sysctl -p will write an error to stderr. You can use this to quickly find invalid settings in your /etc/sysctl.conf file:

sudo sysctl -p >/dev/null

Password Protect GRUB

!! PROCEED AT YOUR OWN RISK !!

Why

If a bad actor has physical access to your server, they could use GRUB to gain unauthorized access to your system.

Why Not

If you forget the password, you'll have to go through some work to recover it.

Goals

  • auto boot the default Debian install and require a password for anything else

Notes

  • This will only protect GRUB and anything behind it like your operating systems. Check your motherboard's documentation for password protecting your BIOS to prevent a bad actor from circumventing GRUB.

References

Steps

Create a Password-Based Key Derivation Function 2 (PBKDF2) hash of your password:

grub-mkpasswd-pbkdf2 -c 100000

The below output is from using password as the password:

Enter password:
Reenter password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.100000.2812C233DFC899EFC3D5991D8CA74068C99D6D786A54F603E9A1EFE7BAEDDB6AA89672F92589FAF98DB9364143E7A1156C9936328971A02A483A84C3D028C4FF.C255442F9C98E1F3C500C373FE195DCF16C56EEBDC55ABDD332DD36A92865FA8FC4C90433757D743776AB186BD3AE5580F63EF445472CC1D151FA03906D08A6D

Copy everything after PBKDF2 hash of your password is , starting from and including grub.pbkdf2.sha512... to the end. You'll need this in the next step.

The update-grub program uses scripts to generate configuration files it will use for GRUB's settings. Create the file /etc/grub.d/01_password and add the below code after replacing [hash] with the hash you copied from the first step. This tells update-grub to use this username and password for GRUB.

#!/bin/sh
set -e

cat << EOF
set superusers="grub"
password_pbkdf2 grub [hash]
EOF

For example:

#!/bin/sh
set -e

cat << EOF
set superusers="grub"
password_pbkdf2 grub grub.pbkdf2.sha512.100000.2812C233DFC899EFC3D5991D8CA74068C99D6D786A54F603E9A1EFE7BAEDDB6AA89672F92589FAF98DB9364143E7A1156C9936328971A02A483A84C3D028C4FF.C255442F9C98E1F3C500C373FE195DCF16C56EEBDC55ABDD332DD36A92865FA8FC4C90433757D743776AB186BD3AE5580F63EF445472CC1D151FA03906D08A6D
EOF

Set the file's execute bit so update-grub includes it when it updates GRUB's configuration:

sudo chmod a+x /etc/grub.d/01_password

Make a backup of GRUB's configuration file /etc/grub.d/10_linux that we'll be modifying and unset the execute bit so update-grub doesn't try to run it:

sudo cp --archive /etc/grub.d/10_linux /etc/grub.d/10_linux-COPY-$(date +"%Y%m%d%H%M%S")
sudo chmod a-x /etc/grub.d/10_linux-COPY*

To make the default Debian install unrestricted (without the password) while keeping everything else restricted (with the password) modify /etc/grub.d/10_linux and add --unrestricted to the CLASS variable.

For the lazy:

sudo sed -i -r -e "/^CLASS=/ a CLASS=\"\${CLASS} --unrestricted\"         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" /etc/grub.d/10_linux

Update GRUB with update-grub:

sudo update-grub


 

Disable Root Login

!! PROCEED AT YOUR OWN RISK !!

Why

If you have sudo configured properly, the root account will rarely, if ever, need to log in directly -- either at the terminal or remotely.

Why Not

Be warned, this can cause issues with some configurations!

If your installation uses sulogin (like Debian) to drop to a root console during boot failures, then locking the root account will prevent sulogin from opening the root shell and you will get this error:

Cannot open access to console, the root account is locked.

See sulogin(8) man page for more details.

Press Enter to continue.

To work around this, you can use the --force option for sulogin. Some distributions already include this, or some other, workaround.

An alternative to locking the root account is to set a long/complicated root password and store it in a secured, non-digital format. That way you have it when/if you need it.

Goals

  • locked root account that nobody can use to log in as root

Notes

  • Some distributions disable root login by default (e.g. Ubuntu) so you may not need to do this step. Check with your distribution's documentation.

References

Steps

Lock the root account:

sudo passwd -l root

Change Default umask

!! PROCEED AT YOUR OWN RISK !!

Why

umask controls the default permissions of files/folders when they are created. Insecure file/folder permissions give other accounts potentially unauthorized access to your data. This may include the ability to make configuration changes.

  • For non-root accounts, there is no need for other accounts to get any access to the account's files/folders by default.
  • For the root account, there is no need for the file/folder primary group or other accounts to have any access to root's files/folders by default.

When and if other accounts need access to a file/folder, you want to explicitly grant it using a combination of file/folder permissions and primary group.

Why Not

Changing the default umask can create unexpected problems. For example, if you set umask to 0077 for root, then non-root accounts will not have access to application configuration files/folders in /etc/ which could break applications that do not run with root privileges.

How It Works

In order to explain how umask works I'd have to explain how Linux file/folder permissions work. As that is a rather complicated topic, I will defer you to the references below for further reading.
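That said, a quick demonstration may help: new files start from base mode 666 and new directories from 777, and the bits set in the umask are removed. With a umask of 0027, a new file ends up as 640 (rw-r-----) and a new directory as 750 (rwxr-x---):

(
    umask 0027
    touch /tmp/umask-demo-file
    mkdir /tmp/umask-demo-dir
    ls -ld /tmp/umask-demo-file /tmp/umask-demo-dir
)
# -rw-r----- ... /tmp/umask-demo-file
# drwxr-x--- ... /tmp/umask-demo-dir
rm -r /tmp/umask-demo-file /tmp/umask-demo-dir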

Goals

  • set default umask for non-root accounts to 0027
  • set default umask for the root account to 0077

Notes

  • umask is a Bash built-in which means a user can change their own umask setting.

References

Steps

Make a backup of files we'll be editing:

sudo cp --archive /etc/profile /etc/profile-COPY-$(date +"%Y%m%d%H%M%S")
sudo cp --archive /etc/bash.bashrc /etc/bash.bashrc-COPY-$(date +"%Y%m%d%H%M%S")
sudo cp --archive /etc/login.defs /etc/login.defs-COPY-$(date +"%Y%m%d%H%M%S")
sudo cp --archive /root/.bashrc /root/.bashrc-COPY-$(date +"%Y%m%d%H%M%S")

Set default umask for non-root accounts to 0027 by adding this line to /etc/profile and /etc/bash.bashrc:

umask 0027

For the lazy:

echo -e "\numask 0027         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/profile /etc/bash.bashrc

We also need to add this line to /etc/login.defs:

UMASK 0027

For the lazy:

echo -e "\nUMASK 0027         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/login.defs

Set default umask for the root account to 0077 by adding this line to /root/.bashrc:

umask 0077

For the lazy:

echo -e "\numask 0077         # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /root/.bashrc

 

Orphaned Software

!! PROCEED AT YOUR OWN RISK !!

Why

As you use your system, and you install and uninstall software, you'll eventually end up with orphaned, or unused software/packages/libraries. You don't need to remove them, but if you don't need them, why keep them? When security is a priority, anything not explicitly needed is a potential security threat. You want to keep your server as trimmed and lean as possible.

Notes

  • Each distribution manages software/packages/libraries differently so how you find and remove orphaned packages will be different. So far I only have steps for Debian based systems.

Debian Based Systems

On Debian based systems, you can use deborphan to find orphaned packages.

Why Not

Keep in mind, deborphan finds packages that have no package dependencies. That does not mean they are not used. You could very well have a package with no dependencies that you use every day and wouldn't want to remove. And, if deborphan gets anything wrong, removing critical packages may break your system.

Steps

Install deborphan.

sudo apt install deborphan

Run deborphan as root to see a list of orphaned packages:

sudo deborphan
libxapian30
libpipeline1

Assuming you want to remove all of the packages deborphan finds, you can pass its output to apt to remove them:

sudo apt --autoremove purge $(deborphan)


 

The Miscellaneous

The Simple way with MSMTP


Why

Well, I will simplify this method so it only handles outbound e-mail through a Gmail account (and it works with other providers too). Truly simple! :)

``` bash
#!/bin/bash
###### PLEASE EDIT THESE VALUES
USRMAIL="usernameemail"
DOMPROV="gmail.com"
PWDEMAIL="passwordStrong"  ## ATTENTION: avoid special characters such as spaces and # -- not all of them are handled. Feel free to test ;)
MAILPROV="smtp.gmail.com"
MAILPORT="587"
MYMAIL="$USRMAIL@$DOMPROV"
#######

# install msmtp and let it act as the system's sendmail
apt install -y msmtp
ln -s /usr/bin/msmtp /usr/sbin/sendmail

# write msmtp's configuration for the root user
cat <<EOF > /root/.msmtprc
defaults
account gmail
host $MAILPROV
port $MAILPORT
from $MYMAIL
timeout off
protocol smtp
auth on
user $MYMAIL
# the account password is stored GPG-encrypted and decrypted on demand
passwordeval "gpg -q --for-your-eyes-only --no-tty -d /root/msmtp-mail.gpg"
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
tls_certcheck on
logfile /var/log/mail.log
syslog on
account default : gmail
EOF
chmod 0400 /root/.msmtprc

# create a GPG key, a revocation certificate, and the encrypted password file
# (use $MYMAIL as the e-mail address when generating the key)
gpg --full-generate-key
gpg --output revoke.asc --gen-revoke "$MYMAIL"
echo -e "$PWDEMAIL\n" | gpg -e -o /root/msmtp-mail.gpg --recipient "$MYMAIL"
echo "export GPG_TTY=\$(tty)" >> /root/.bashrc
chmod 400 /root/msmtp-mail.gpg

# send a test e-mail
echo "Hello there" | msmtp --debug "$MYMAIL"
echo "######################
## MSMTP Configured ##
######################"
```

DONE!! ;)

Gmail and Exim4 As MTA With Implicit TLS

Why

Unless you're planning on setting up your own mail server, you'll need a way to send e-mails from your server. This will be important for system alerts/messages.

You can use any Gmail account. I recommend you create one specific for this server. That way if your server is compromised, the bad-actor won't have any passwords for your primary account. Granted, if you have 2FA/MFA enabled and you use an app password, there isn't much a bad-actor can do with just the app password, but why take the risk?

There are many guides on-line that cover how to configure Gmail as MTA using STARTTLS, including a previous version of this guide. With STARTTLS, an initial unencrypted connection is made and then upgraded to an encrypted TLS or SSL connection. Instead, with the approach outlined below, an encrypted TLS connection is made from the start.

Also, as discussed in issue #29 and here, exim4 will fail for messages with long lines. We'll fix this in this section too.

Goals

  • mail configured to send e-mails from your server using Gmail
  • long line support for exim4

References

Steps

Install exim4. You will also need openssl and ca-certificates.

On Debian based systems:

sudo apt install exim4 openssl ca-certificates

Configure exim4:

For Debian based systems:

sudo dpkg-reconfigure exim4-config

You'll be prompted with some questions:

  • General type of mail configuration: mail sent by smarthost; no local mail
  • System mail name: localhost
  • IP-addresses to listen on for incoming SMTP connections: 127.0.0.1; ::1
  • Other destinations for which mail is accepted: (default)
  • Visible domain name for local users: localhost
  • IP address or host name of the outgoing smarthost: smtp.gmail.com::465
  • Keep number of DNS-queries minimal (Dial-on-Demand)?: No
  • Split configuration into small files?: No

Make a backup of /etc/exim4/passwd.client:

sudo cp --archive /etc/exim4/passwd.client /etc/exim4/passwd.client-COPY-$(date +"%Y%m%d%H%M%S")

Add lines like these to /etc/exim4/passwd.client.

Notes:

  • Replace yourAccount@gmail.com and yourPassword with your details. If you have 2FA/MFA enabled on your Gmail then you'll need to create and use an app password here.
  • Always check host smtp.gmail.com for the most up-to-date domains to list.

smtp.gmail.com:yourAccount@gmail.com:yourPassword
*.google.com:yourAccount@gmail.com:yourPassword

This file has your Gmail password so we need to lock it down:

sudo chown root:Debian-exim /etc/exim4/passwd.client
sudo chmod 640 /etc/exim4/passwd.client

The next step is to create a TLS certificate that exim4 will use to make the encrypted connection to smtp.gmail.com. You can use your own certificate, like one from Let's Encrypt, or create one yourself using openssl. We will use a script that comes with exim4 that calls openssl to make our certificate:

sudo bash /usr/share/doc/exim4-base/examples/exim-gencert
[*] Creating a self signed SSL certificate for Exim!
    This may be sufficient to establish encrypted connections but for
    secure identification you need to buy a real certificate!

    Please enter the hostname of your MTA at the Common Name (CN) prompt!

Generating a RSA private key
..........................................+++++
................................................+++++
writing new private key to '/etc/exim4/exim.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Code (2 letters) [US]:[redacted]
State or Province Name (full name) []:[redacted]
Locality Name (eg, city) []:[redacted]
Organization Name (eg, company; recommended) []:[redacted]
Organizational Unit Name (eg, section) []:[redacted]
Server name (eg. ssl.domain.tld; required!!!) []:localhost
Email Address []:[redacted]
[*] Done generating self signed certificates for exim!
    Refer to the documentation and example configuration files
    over at /usr/share/doc/exim4-base/ for an idea on how to enable TLS
    support in your mail transfer agent.

Instruct exim4 to use TLS and port 465, and fix exim4's long lines issue, by creating the file /etc/exim4/exim4.conf.localmacros and adding:

MAIN_TLS_ENABLE = 1
REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS = *
TLS_ON_CONNECT_PORTS = 465
REQUIRE_PROTOCOL = smtps
IGNORE_SMTP_LINE_LENGTH_LIMIT = true

For the lazy:

cat << EOF | sudo tee /etc/exim4/exim4.conf.localmacros
MAIN_TLS_ENABLE = 1
REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS = *
TLS_ON_CONNECT_PORTS = 465
REQUIRE_PROTOCOL = smtps
IGNORE_SMTP_LINE_LENGTH_LIMIT = true
EOF

Make a backup of exim4's configuration file /etc/exim4/exim4.conf.template:

sudo cp --archive /etc/exim4/exim4.conf.template /etc/exim4/exim4.conf.template-COPY-$(date +"%Y%m%d%H%M%S")

Add the below to /etc/exim4/exim4.conf.template after the .ifdef REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS ... .endif block:

.ifdef REQUIRE_PROTOCOL
    protocol = REQUIRE_PROTOCOL
.endif

The result should look like this:

.ifdef REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS
  hosts_require_tls = REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS
.endif
.ifdef REQUIRE_PROTOCOL
    protocol = REQUIRE_PROTOCOL
.endif
.ifdef REMOTE_SMTP_HEADERS_REWRITE
  headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE
.endif

For the lazy:

sudo sed -i -r -e '/^.ifdef REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS$/I { :a; n; /^.endif$/!ba; a\# added by '"$(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")"'\n.ifdef REQUIRE_PROTOCOL\n    protocol = REQUIRE_PROTOCOL\n.endif\n# end add' -e '}' /etc/exim4/exim4.conf.template
.ifdef REQUIRE_PROTOCOL
  protocol = REQUIRE_PROTOCOL
.endif

Add the below to /etc/exim4/exim4.conf.template inside the .ifdef MAIN_TLS_ENABLE block:

.ifdef TLS_ON_CONNECT_PORTS
    tls_on_connect_ports = TLS_ON_CONNECT_PORTS
.endif

The top of the block should then look like this:

.ifdef MAIN_TLS_ENABLE
.ifdef TLS_ON_CONNECT_PORTS
    tls_on_connect_ports = TLS_ON_CONNECT_PORTS
.endif

For the lazy:

sudo sed -i -r -e "/\.ifdef MAIN_TLS_ENABLE/ a # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")\n.ifdef TLS_ON_CONNECT_PORTS\n    tls_on_connect_ports = TLS_ON_CONNECT_PORTS\n.endif\n# end add" /etc/exim4/exim4.conf.template
.ifdef TLS_ON_CONNECT_PORTS
  tls_on_connect_ports = TLS_ON_CONNECT_PORTS
.endif

Update exim4 configuration to use TLS and then restart the service:

sudo update-exim4.conf
sudo service exim4 restart

If you're using UFW, you'll need to allow outbound traffic on 465. To do this we'll create a custom UFW application profile and then enable it. Create the file /etc/ufw/applications.d/smtptls, add this, then run ufw allow out smtptls comment 'open TLS port 465 for use with SMTP to send e-mails':

[SMTPTLS]
title=SMTP through TLS
description=This opens up the TLS port 465 for use with SMTP to send e-mails.
ports=465/tcp

For the lazy:

cat << EOF | sudo tee /etc/ufw/applications.d/smtptls
[SMTPTLS]
title=SMTP through TLS
description=This opens up the TLS port 465 for use with SMTP to send e-mails.
ports=465/tcp
EOF

sudo ufw allow out smtptls comment 'open TLS port 465 for use with SMTP to send e-mails'

Add some mail aliases so we can send e-mails to local accounts by adding lines like this to /etc/aliases:

You'll need to add all the local accounts that exist on your server.

user1: user1@gmail.com
user2: user2@gmail.com
...

Test your setup:

echo "test" | mail -s "Test" email@gmail.com
sudo tail /var/log/exim4/mainlog

Separate iptables Log File

Why

There will come a time when you'll need to look through your iptables logs. Having all the iptables logs go to their own file will make it a lot easier to find what you're looking for.

References

Steps

The first step is to tell your firewall to prefix all log entries with some unique string. If you're using iptables directly, you would add something like --log-prefix "[IPTABLES] " to all the rules. We took care of this in step 4 of installing psad.
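If you manage iptables directly and want an example, here is a minimal sketch of a logging rule (the chain and rule placement depend on your ruleset):

# log packets hitting the INPUT chain, tagged so rsyslog can route them
sudo iptables -A INPUT -j LOG --log-prefix "[IPTABLES] " --log-level 4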

After you've added a prefix to the firewall logs, we need to tell rsyslog to send those lines to its own file. Do this by creating the file /etc/rsyslog.d/10-iptables.conf and adding this:

:msg, contains, "[IPTABLES] " /var/log/iptables.log
& stop

If you're expecting a lot of data being logged by your firewall, prefix the filename with a - "to omit syncing the file after every logging". For example:

:msg, contains, "[IPTABLES] " -/var/log/iptables.log
& stop

Note: Remember to change the prefix to whatever you use.

For the lazy:

cat << EOF | sudo tee /etc/rsyslog.d/10-iptables.conf
:msg, contains, "[IPTABLES] " /var/log/iptables.log
& stop
EOF

Since we're logging firewall messages to a different file, we need to tell psad where the new file is. Edit /etc/psad/psad.conf and set IPT_SYSLOG_FILE to the path of the log file. For example:

IPT_SYSLOG_FILE /var/log/iptables.log;

Note: Remember to change the path if you used a different log file.

For the lazy:

sudo sed -i -r -e "s/^(IPT_SYSLOG_FILE\s+)([^;]+)(;)$/# \1\2\3       # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")\n\1\/var\/log\/iptables.log\3       # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")/" /etc/psad/psad.conf

Restart psad and rsyslog to activate the changes (or reboot):

sudo psad -R
sudo psad --sig-update
sudo psad -H
sudo service rsyslog restart

The last thing we have to do is tell logrotate to rotate the new log file so it doesn't get too big and fill up our disk. Create the file /etc/logrotate.d/iptables and add this:

/var/log/iptables.log
{
    rotate 7
    daily
    missingok
    notifempty
    delaycompress
    compress
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}

For the lazy:

cat << EOF | sudo tee /etc/logrotate.d/iptables
/var/log/iptables.log
{
    rotate 7
    daily
    missingok
    notifempty
    delaycompress
    compress
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}
EOF
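To sanity-check the new logrotate configuration without actually rotating anything, you can run logrotate in debug mode:

sudo logrotate --debug /etc/logrotate.d/iptables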

(Table of Contents)

Left Over

Contacting Me

For any questions, comments, concerns, feedback, or issues, submit a new issue.

(Table of Contents)

Helpful Links

(Table of Contents)

Acknowledgments

 

Download Details: 
Author: imthenachoman
Source Code: https://github.com/imthenachoman/How-To-Secure-A-Linux-Server 
License: CC-BY-SA-4.0 License
#linux #security 

How To Secure A Linux Server