An evolving how-to guide for securing a Linux server that, hopefully, also teaches you a little about security and why it matters.
This guide's purpose is to teach you how to secure a Linux server.
There are a lot of things you can do to secure a Linux server and this guide will attempt to cover as many of them as possible. More topics/material will be added as I learn, or as folks contribute.
I assume you're using this guide because you, hopefully, already understand why good security is important. That is a heavy topic unto itself and breaking it down is out-of-scope for this guide. If you don't already know why good security matters, I advise you research it first.
At a high level, the second a device, like a server, is reachable from the outside world, it becomes a target for bad-actors. An unsecured device is a playground for bad-actors who want access to your data, or to use your server as another node for their large-scale DDoS attacks.
What's worse is, without good security, you may never know if your server has been compromised. A bad-actor may have gained unauthorized access to your server and copied your data without changing anything, so you'd never know. Or your server may have been part of a DDoS attack without your knowledge. Look at many of the large scale data breaches in the news -- the companies often did not discover the data leak or intrusion until long after the bad-actors were gone.
Contrary to popular belief, bad-actors don't always want to change something or lock you out of your data for money. Sometimes they just want the data on your server for their data warehouses (there is big money in big data) or to covertly use your server for their nefarious purposes.
This guide may appear duplicative/unnecessary because there are countless articles online that tell you how to secure Linux, but the information is spread across different articles that cover different things in different ways. Who has time to scour through hundreds of articles?
As I was going through research for my Debian build, I kept notes. At the end I realized that, along with what I already knew, and what I was learning, I had the makings of a how-to guide. I figured I'd put it online to hopefully help others learn, and save time.
I've never found one guide that covers everything -- this guide is my attempt.
Many of the things covered in this guide may be rather basic/trivial, but most of us do not install Linux every day and it is easy to forget those basic things.
IT automation tools like Ansible, Chef, Jenkins, Puppet, etc. help with the tedious task of installing/configuring a server, but IMHO they are better suited for multiple or large-scale deployments. The overhead required to use those kinds of automation tools is wholly unnecessary for a one-time, single-server install for home use.
There are many guides provided by experts, industry leaders, and the distributions themselves. It is not practical, and sometimes against copyright, to include everything from those guides. I recommend you check them out before starting with this guide.
This guide...
There are many types of servers and different use-cases. While I want this guide to be as generic as possible, there will be some things that may not apply to all/other use-cases. Use your best judgement when going through this guide.
To help put context to many of the topics covered in this guide, my use-case/configuration is:
I am very lazy and do not like to edit files by hand if I don't need to. I also assume everyone else is just like me. :)
So, when and where possible, I have provided code snippets to quickly do what is needed, like add or change a line in a configuration file.

The code snippets use basic commands like echo, cat, sed, awk, and grep. How the code snippets work, like what each command/part does, is out of scope for this guide -- the man pages are your friend.

Note: The code snippets do not validate/verify the change went through -- i.e. that the line was actually added or changed. I'll leave the verifying part in your capable hands. The steps in this guide do include taking backups of all files that will be changed.

Not all changes can be automated with code snippets. Those changes need good, old fashioned, manual editing. For example, you can't just append a line to an INI type file. Use your favorite Linux text editor.
I wanted to put this guide on GitHub to make it easy to collaborate. The more folks that contribute, the better and more complete this guide will become.
To contribute you can fork and submit a pull request or submit a new issue.
Before you start, you will want to identify what your principles are. What is your threat model? Some things to think about:
These are just a few things to think about. Before you start securing your server you will want to understand what you're trying to protect against and why so you know what you need to do.
This guide is intended to be distribution agnostic so users can use any distribution they want. With that said, there are a few things to keep in mind:
You want a distribution that...
Installing Linux is out-of-scope for this guide because each distribution does it differently and the installation instructions are usually well documented. If you need help, start with your distribution's documentation. Regardless of the distribution, the high-level process usually goes like so:
Where applicable, use the expert install option so you have tighter control of what is running on your server. Only install what you absolutely need. I, personally, do not install anything other than SSH. Also, tick the Disk Encryption option.
- Update the system (e.g. sudo apt update && sudo apt upgrade on Debian based systems).
- When in doubt, check the relevant man pages.
- This guide provides apt commands that should work on all Debian based distributions. If someone is willing to provide the respective commands for other distributions, I will add them.

It is highly advised you keep a 2nd terminal open to your server before you make and apply SSH configuration changes. This way, if you lock yourself out of your 1st terminal session, you still have one session connected so you can fix it.
Thank you to Sonnenbrand for this idea.
Using SSH public/private keys is more secure than using a password. It also makes it easier and faster to connect to our server because you don't have to enter a password.
Check the references below for more details but, at a high level, public/private keys work by using a pair of keys to verify identity.
For SSH, a public and private key is created on the client. You want to keep both keys secure, especially the private key. Even though the public key is meant to be public, it is wise to make sure neither key falls into the wrong hands.
When you connect to an SSH server, SSH will look for a public key that matches the client you're connecting from in the file ~/.ssh/authorized_keys
on the server you're connecting to. Notice the file is in the home folder of the ID you're trying to connect to. So, after creating the public key, you need to append it to ~/.ssh/authorized_keys
. One approach is to copy it to a USB stick and physically transfer it to the server. Another approach is to use ssh-copy-id
to transfer and append the public key.
After the keys have been created and the public key has been appended to ~/.ssh/authorized_keys
on the host, SSH uses the public and private keys to verify identity and then establish a secure connection. How identity is verified is a complicated process but Digital Ocean has a very nice write-up of how it works. At a high level, identity is verified by the server encrypting a challenge message with the public key, then sending it to the client. If the client cannot decrypt the challenge message with the private key, the identity can't be verified and a connection will not be established.
They are considered more secure because you need the private key to establish an SSH connection. If you set PasswordAuthentication no
in /etc/ssh/sshd_config
, then SSH won't let you connect without the private key.
You can also set a pass-phrase for the keys which would require you to enter the key pass-phrase when connecting using public/private keys. Keep in mind doing this means you can't use the key for automation because you'll have no way to send the passphrase in your scripts. ssh-agent
is a program that is shipped in many Linux distros (and usually already running) that will allow you to hold your unencrypted private key in memory for a configurable duration. Simply run ssh-add
and it will prompt you for your passphrase. You will not be prompted for your passphrase again until the configurable duration has passed.
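For example, a typical session looks something like this (the one-hour lifetime and the key path are illustrative choices, not requirements):

# start an agent for this shell session, if one isn't already running
eval "$(ssh-agent -s)"
# add the key and keep it in memory for one hour
ssh-add -t 1h ~/.ssh/id_ed25519
# list the keys the agent is currently holding
ssh-add -l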
We will be using Ed25519 keys which, according to https://linux-audit.com/:
It is using an elliptic curve signature scheme, which offers better security than ECDSA and DSA. At the same time, it also has good performance.
man ssh-keygen
man ssh-copy-id
man ssh-add
From the computer you're going to use to connect to your server, the client, not the server itself, create an Ed25519 key with ssh-keygen
:
ssh-keygen -t ed25519
Generating public/private ed25519 key pair. Enter file in which to save the key (/home/user/.ssh/id_ed25519): Created directory '/home/user/.ssh'. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/user/.ssh/id_ed25519. Your public key has been saved in /home/user/.ssh/id_ed25519.pub. The key fingerprint is: SHA256:F44D4dr2zoHqgj0i2iVIHQ32uk/Lx4P+raayEAQjlcs user@client The key's randomart image is: +--[ED25519 256]--+ |xxxx x | |o.o +. . | | o o oo . | |. E oo . o . | | o o. o S o | |... .. o o | |.+....+ o | |+.=++o.B.. | |+..=**=o=. | +----[SHA256]-----+
Note: If you set a passphrase, you'll need to enter it every time you connect to your server using this key, unless you're using ssh-agent
.
Now you need to append the public key ~/.ssh/id_ed25519.pub
from your client to the ~/.ssh/authorized_keys
file on your server. Since we're presumably still at home on the LAN, we're probably safe from man-in-the-middle (MITM) attacks, so we will use ssh-copy-id
to transfer and append the public key:
ssh-copy-id user@server
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/user/.ssh/id_ed25519.pub" The authenticity of host 'host (192.168.1.96)' can't be established. ECDSA key fingerprint is SHA256:QaDQb/X0XyVlogh87sDXE7MR8YIK7ko4wS5hXjRySJE. Are you sure you want to continue connecting (yes/no)? yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys user@host's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'user@host'" and check to make sure that only the key(s) you wanted were added.
Now would be a good time to perform any tasks specific to your setup.
This makes it easy to control who can SSH to the server. By using a group, we can quickly add/remove accounts from the group to allow or disallow SSH access to the server.
We will use the AllowGroups option in SSH's configuration file /etc/ssh/sshd_config
to tell the SSH server to only allow users to SSH in if they are a member of a certain UNIX group. Anyone not in the group will not be able to SSH in.
Goal: /etc/ssh/sshd_config configured to limit who can SSH to the server (the AllowGroups setting is set in Secure /etc/ssh/sshd_config).

man groupadd
man usermod
Create a group:
sudo groupadd sshusers
Add account(s) to the group:
sudo usermod -a -G sshusers user1
sudo usermod -a -G sshusers user2
sudo usermod -a -G sshusers ...
You'll need to do this for every account on your server that needs SSH access.
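If you want to sanity check the membership afterwards (user1 here is just a placeholder):

# list the members of the sshusers group
getent group sshusers
# confirm a specific account picked up the group
id user1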
/etc/ssh/sshd_config
SSH is a door into your server. This is especially true if you are opening ports on your router so you can SSH to your server from outside your home network. If it is not secured properly, a bad-actor could use it to gain unauthorized access to your system.
/etc/ssh/sshd_config
is the default configuration file that the SSH server uses. We will use this file to tell the SSH server what options it should use.
man sshd_config
Make a backup of OpenSSH server's configuration file /etc/ssh/sshd_config
and remove comments to make it easier to read:
sudo cp --archive /etc/ssh/sshd_config /etc/ssh/sshd_config-COPY-$(date +"%Y%m%d%H%M%S")
sudo sed -i -r -e '/^#|^$/ d' /etc/ssh/sshd_config
Edit /etc/ssh/sshd_config
then find and edit or add these settings that should be applied regardless of your configuration/setup:
Note: SSH does not like duplicate contradicting settings. For example, if you have ChallengeResponseAuthentication no
and then ChallengeResponseAuthentication yes
, SSH will respect the first one and ignore the second. Your /etc/ssh/sshd_config
file may already have some of the settings/lines below. To avoid issues you will need to manually go through your /etc/ssh/sshd_config
file and address any duplicate contradicting settings.
########################################################################################################
# start settings from https://infosec.mozilla.org/guidelines/openssh#modern-openssh-67 as of 2019-01-01
########################################################################################################
# Supported HostKey algorithms by order of preference.
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
# LogLevel VERBOSE logs user's key fingerprint on login. Needed to have a clear audit track of which key was used to log in.
LogLevel VERBOSE
# Use kernel sandbox mechanisms where possible in unprivileged processes
# Systrace on OpenBSD, Seccomp on Linux, seatbelt on MacOSX/Darwin, rlimit elsewhere.
# Note: This setting is deprecated in OpenSSH 7.5 (https://www.openssh.com/txt/release-7.5)
# UsePrivilegeSeparation sandbox
########################################################################################################
# end settings from https://infosec.mozilla.org/guidelines/openssh#modern-openssh-67 as of 2019-01-01
########################################################################################################
# don't let users set environment variables
PermitUserEnvironment no
# Log sftp level file access (read/write/etc.) that would not be easily logged otherwise.
Subsystem sftp internal-sftp -f AUTHPRIV -l INFO
# only use the newer, more secure protocol
Protocol 2
# disable X11 forwarding as X11 is very insecure
# you really shouldn't be running X on a server anyway
X11Forwarding no
# disable port forwarding
AllowTcpForwarding no
AllowStreamLocalForwarding no
GatewayPorts no
PermitTunnel no
# don't allow login if the account has an empty password
PermitEmptyPasswords no
# ignore .rhosts and .shosts
IgnoreRhosts yes
# verify hostname matches IP
UseDNS yes
Compression no
TCPKeepAlive no
AllowAgentForwarding no
PermitRootLogin no
# don't allow .rhosts or /etc/hosts.equiv
HostbasedAuthentication no
Then find and edit or add these settings, and set values as per your requirements:
| Setting | Valid Values | Example | Description | Notes |
|---|---|---|---|---|
| AllowGroups | local UNIX group name | AllowGroups sshusers | group to allow SSH access to | |
| ClientAliveCountMax | number | ClientAliveCountMax 0 | maximum number of client alive messages sent without response | |
| ClientAliveInterval | number of seconds | ClientAliveInterval 300 | timeout in seconds before a response request | |
| ListenAddress | space separated list of local addresses | | local addresses sshd should listen on | See Issue #1 for important details. |
| LoginGraceTime | number of seconds | LoginGraceTime 30 | time in seconds before login times-out | |
| MaxAuthTries | number | MaxAuthTries 2 | maximum allowed attempts to login | |
| MaxSessions | number | MaxSessions 2 | maximum number of open sessions | |
| MaxStartups | number | MaxStartups 2 | maximum number of login sessions | |
| PasswordAuthentication | yes or no | PasswordAuthentication no | if login with a password is allowed | |
| Port | any open/available port number | Port 22 | port that sshd should listen on | |
Check man sshd_config
for more details on what these settings mean.
Make sure there are no duplicate settings that contradict each other. The below command should not have any output.
awk 'NF && $1!~/^(#|HostKey)/{print $1}' /etc/ssh/sshd_config | sort | uniq -c | grep -v ' 1 '
Restart ssh:
sudo service sshd restart
You can verify the configuration changes worked with sshd -T
and verify the output:
sudo sshd -T
port 22 addressfamily any listenaddress [::]:22 listenaddress 0.0.0.0:22 usepam yes logingracetime 30 x11displayoffset 10 maxauthtries 2 maxsessions 2 clientaliveinterval 300 clientalivecountmax 0 streamlocalbindmask 0177 permitrootlogin no ignorerhosts yes ignoreuserknownhosts no hostbasedauthentication no ... subsystem sftp internal-sftp -f AUTHPRIV -l INFO maxstartups 2:30:2 permittunnel no ipqos lowdelay throughput rekeylimit 0 0 permitopen any
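You can also filter the sshd -T output for just the settings you changed, for example:

sudo sshd -T | grep -E -i 'port|passwordauthentication|permitrootlogin|allowgroups|maxauthtries'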
Per Mozilla's OpenSSH guidelines for OpenSSH 6.7+, "all Diffie-Hellman moduli in use should be at least 3072-bit-long".
The Diffie-Hellman algorithm is used by SSH to establish a secure connection. The larger the moduli (key size) the stronger the encryption.
man moduli
Make a backup of SSH's moduli file /etc/ssh/moduli
:
sudo cp --archive /etc/ssh/moduli /etc/ssh/moduli-COPY-$(date +"%Y%m%d%H%M%S")
Remove short moduli:
sudo awk '$5 >= 3071' /etc/ssh/moduli | sudo tee /etc/ssh/moduli.tmp
sudo mv /etc/ssh/moduli.tmp /etc/ssh/moduli
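To double-check that no short moduli remain, the following should output 0:

awk '$5 < 3071' /etc/ssh/moduli | wc -l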
Even though SSH is a pretty good security guard for your doors and windows, it is still a visible door that bad-actors can see and try to brute-force in. Fail2ban will monitor for these brute-force attempts but there is no such thing as being too secure. Requiring two factors adds an extra layer of security.
Using Two Factor Authentication (2FA) / Multi Factor Authentication (MFA) requires anyone entering to have two keys to enter which makes it harder for bad actors. The two keys are:
Without both keys, they won't be able to get in.
Many folks might find the experience cumbersome or annoying. And, access to your system is dependent on the accompanying authenticator app that generates the code.
On Linux, PAM is responsible for authentication. There are four tasks to PAM that you can read about at https://en.wikipedia.org/wiki/Linux_PAM. This section talks about the authentication task.
When you log into a server, be it directly from the console or via SSH, the door you came through will send the request to the authentication task of PAM and PAM will ask for and verify your password. You can customize the rules each door uses. For example, you could have one set of rules when logging in directly from the console and another set of rules for when logging in via SSH.
This section will alter the authentication rules for when logging in via SSH to require both a password and a 6 digit code.
We will use Google's libpam-google-authenticator PAM module to create and verify a TOTP key. https://fastmail.blog/2016/07/22/how-totp-authenticator-apps-work/ and https://jemurai.com/2018/10/11/how-it-works-totp-based-mfa/ have very good writeups of how TOTP works.
What we will do is tell the server's SSH PAM configuration to ask the user for their password and then their numeric token. PAM will then verify the user's password and, if it is correct, then it will route the authentication request to libpam-google-authenticator which will ask for and verify your 6 digit token. If, and only if, everything is good will the authentication succeed and user be allowed to log in.
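If you're curious to see TOTP in action outside of PAM, the oathtool utility (from the oath-toolkit package -- installing it is optional and not part of this setup) can generate the current code from a base32 secret. The secret below is a made-up example:

sudo apt install oathtool
# generate the current 6 digit TOTP code for a base32 secret
oathtool --totp -b "JBSWY3DPEHPK3PXP"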
Install libpam-google-authenticator.
On Debian based systems:
sudo apt install libpam-google-authenticator
Make sure you're logged in as the ID you want to enable 2FA/MFA for and execute google-authenticator
to create the necessary token data:
google-authenticator
Do you want authentication tokens to be time-based (y/n) y https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/user@host%3Fsecret%3DR4ZWX34FQKZROVX7AGLJ64684Y%26issuer%3Dhost ... Your new secret key is: R3NVX3FFQKZROVX7AGLJUGGESY Your verification code is 751419 Your emergency scratch codes are: 12345678 90123456 78901234 56789012 34567890 Do you want me to update your "/home/user/.google_authenticator" file (y/n) y Do you want to disallow multiple uses of the same authentication token? This restricts you to one login about every 30s, but it increases your chances to notice or even prevent man-in-the-middle attacks (y/n) y By default, tokens are good for 30 seconds. In order to compensate for possible time-skew between the client and the server, we allow an extra token before and after the current time. If you experience problems with poor time synchronization, you can increase the window from its default size of +-1min (window size of 3) to about +-4min (window size of 17 acceptable tokens). Do you want to do so? (y/n) y If the computer that you are logging into isn't hardened against brute-force login attempts, you can enable rate-limiting for the authentication module. By default, this limits attackers to no more than 3 login attempts every 30s. Do you want to enable rate-limiting (y/n) y
Notice this is not run as root.
Select the default option (y in most cases) for all the questions it asks, and remember to save the emergency scratch codes.
Make a backup of PAM's SSH configuration file /etc/pam.d/sshd
:
sudo cp --archive /etc/pam.d/sshd /etc/pam.d/sshd-COPY-$(date +"%Y%m%d%H%M%S")
Now we need to enable it as an authentication method for SSH by adding this line to /etc/pam.d/sshd
:
auth required pam_google_authenticator.so nullok
Note: Check the pam_google_authenticator documentation for what nullok means.
echo -e "\nauth required pam_google_authenticator.so nullok # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/pam.d/sshd
Tell SSH to leverage it by adding or editing this line in /etc/ssh/sshd_config
:
ChallengeResponseAuthentication yes
sudo sed -i -r -e "s/^(challengeresponseauthentication .*)$/# \1 # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")/I" /etc/ssh/sshd_config
echo -e "\nChallengeResponseAuthentication yes # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/ssh/sshd_config
Restart ssh:
sudo service sshd restart
sudo lets accounts run commands as other accounts, including root. We want to make sure that only the accounts we want can use sudo.
sudo does not require a password. Thanks to sbrl for sharing.

Create a group:
sudo groupadd sudousers
Add account(s) to the group:
sudo usermod -a -G sudousers user1
sudo usermod -a -G sudousers user2
sudo usermod -a -G sudousers ...
You'll need to do this for every account on your server that needs sudo privileges.
Make a backup of the sudo's configuration file /etc/sudoers
:
sudo cp --archive /etc/sudoers /etc/sudoers-COPY-$(date +"%Y%m%d%H%M%S")
Edit sudo's configuration file /etc/sudoers
:
sudo visudo
Tell sudo to only allow users in the sudousers
group to use sudo by adding this line if it is not already there:
%sudousers ALL=(ALL:ALL) ALL
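To verify an account's sudo privileges afterwards (user1 is a placeholder):

# list user1's sudo privileges
sudo -l -U user1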
su also lets accounts run commands as other accounts, including root. We want to make sure that only the accounts we want can use su.
Create a group:
sudo groupadd suusers
Add account(s) to the group:
sudo usermod -a -G suusers user1
sudo usermod -a -G suusers user2
sudo usermod -a -G suusers ...
You'll need to do this for every account on your server that needs su privileges.
Make it so only users in this group can execute /bin/su
:
sudo dpkg-statoverride --update --add root suusers 4750 /bin/su
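To verify the override took effect:

# show the override dpkg-statoverride recorded
sudo dpkg-statoverride --list /bin/su
# permissions should now show 4750 (-rwsr-x---) with group suusers
ls -l /bin/su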
For many applications, it is absolutely better to run them in a sandbox. Browsers (especially closed source ones) and e-mail clients are prime candidates.
Install the software:
sudo apt install firejail firejail-profiles
Note: for Debian 10 Stable, the official backport is suggested:
sudo apt install -t buster-backports firejail firejail-profiles
Allow an application (installed in /usr/bin or /bin) to run only in a sandbox (a few examples below):
sudo ln -s /usr/bin/firejail /usr/local/bin/google-chrome-stable
sudo ln -s /usr/bin/firejail /usr/local/bin/firefox
sudo ln -s /usr/bin/firejail /usr/local/bin/chromium
sudo ln -s /usr/bin/firejail /usr/local/bin/evolution
sudo ln -s /usr/bin/firejail /usr/local/bin/thunderbird
Run the application as usual (via terminal or launcher) and check if it is running in a jail:
firejail --list
Allow a sandboxed app to run again as it did before (example: firefox):
sudo rm /usr/local/bin/firefox
Many security protocols rely on accurate time. If your system time is incorrect, it could have negative impacts on your server. An NTP client solves that problem by keeping your system time in-sync with global NTP servers.
NTP stands for Network Time Protocol. In the context of this guide, an NTP client on the server is used to update the server time with the official time pulled from official servers. Check https://www.pool.ntp.org/en/ for all of the public NTP servers.
Install ntp.
On Debian based systems:
sudo apt install ntp
Make a backup of the NTP client's configuration file /etc/ntp.conf
:
sudo cp --archive /etc/ntp.conf /etc/ntp.conf-COPY-$(date +"%Y%m%d%H%M%S")
The default configuration, at least on Debian, is already pretty secure. The only thing we'll want to make sure of is that we're using the pool directive and not any server directives. The pool directive allows the NTP client to stop using a server if it is unresponsive or serving bad time. Do this by commenting out all server directives and adding the below to /etc/ntp.conf.
pool pool.ntp.org iburst
sudo sed -i -r -e "s/^((server|pool).*)/# \1 # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")/" /etc/ntp.conf
echo -e "\npool pool.ntp.org iburst # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/ntp.conf
Example /etc/ntp.conf
:
driftfile /var/lib/ntp/ntp.drift

statistics loopstats peerstats clockstats

filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable

restrict -4 default kod notrap nomodify nopeer noquery limited
restrict -6 default kod notrap nomodify nopeer noquery limited

restrict 127.0.0.1
restrict ::1

restrict source notrap nomodify noquery

pool pool.ntp.org iburst # added by user on 2019-03-09 @ 10:23:35
Restart ntp:
sudo service ntp restart
Check the status of the ntp service:
sudo systemctl status ntp
● ntp.service - LSB: Start NTP daemon Loaded: loaded (/etc/init.d/ntp; generated; vendor preset: enabled) Active: active (running) since Sat 2019-03-09 15:19:46 EST; 4s ago Docs: man:systemd-sysv-generator(8) Process: 1016 ExecStop=/etc/init.d/ntp stop (code=exited, status=0/SUCCESS) Process: 1028 ExecStart=/etc/init.d/ntp start (code=exited, status=0/SUCCESS) Tasks: 2 (limit: 4915) CGroup: /system.slice/ntp.service └─1038 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 108:113 Mar 09 15:19:46 host ntpd[1038]: Listen and drop on 0 v6wildcard [::]:123 Mar 09 15:19:46 host ntpd[1038]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 09 15:19:46 host ntpd[1038]: Listen normally on 2 lo 127.0.0.1:123 Mar 09 15:19:46 host ntpd[1038]: Listen normally on 3 enp0s3 10.10.20.96:123 Mar 09 15:19:46 host ntpd[1038]: Listen normally on 4 lo [::1]:123 Mar 09 15:19:46 host ntpd[1038]: Listen normally on 5 enp0s3 [fe80::a00:27ff:feb6:ed8e%2]:123 Mar 09 15:19:46 host ntpd[1038]: Listening on routing socket on fd #22 for interface updates Mar 09 15:19:47 host ntpd[1038]: Soliciting pool server 108.61.56.35 Mar 09 15:19:48 host ntpd[1038]: Soliciting pool server 69.89.207.199 Mar 09 15:19:49 host ntpd[1038]: Soliciting pool server 45.79.111.114
Check ntp's status:
sudo ntpq -p
remote refid st t when poll reach delay offset jitter ============================================================================== pool.ntp.org .POOL. 16 p - 64 0 0.000 0.000 0.000 *lithium.constan 198.30.92.2 2 u - 64 1 19.900 4.894 3.951 ntp2.wiktel.com 212.215.1.157 2 u 2 64 1 48.061 -0.431 0.104
To quote https://linux-audit.com/linux-system-hardening-adding-hidepid-to-proc/:
When looking in
/proc
you will discover a lot of files and directories. Many of them are just numbers, which represent the information about a particular process ID (PID). By default, Linux systems are deployed to allow all local users to see this all information. This includes process information from other users. This could include sensitive details that you may not want to share with other users. By applying some filesystem configuration tweaks, we can change this behavior and improve the security of the system.
Note: This may break on some systemd systems. Please see https://github.com/imthenachoman/How-To-Secure-A-Linux-Server/issues/37 for more information. Thanks to nlgranger for sharing.

Goal: /proc mounted with hidepid=2 so users can only see information about their own processes.
so users can only see information about their processesMake a backup of /etc/fstab
:
sudo cp --archive /etc/fstab /etc/fstab-COPY-$(date +"%Y%m%d%H%M%S")
Add this line to /etc/fstab
to have /proc
mounted with hidepid=2
:
proc /proc proc defaults,hidepid=2 0 0
echo -e "\nproc /proc proc defaults,hidepid=2 0 0 # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/fstab
Reboot the system:
sudo reboot now
Note: Alternatively, you can remount /proc without rebooting with sudo mount -o remount,hidepid=2 /proc.
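A quick way to confirm the new behavior:

# confirm /proc is mounted with hidepid=2
mount | grep hidepid
# as an unprivileged user, this should now only show your own processes
ps -ef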
By default, accounts can use any password they want, including bad ones. pwquality/pam_pwquality addresses this security gap by providing "a way to configure the default password quality requirements for the system passwords" and checking "its strength against a system dictionary and a set of rules for identifying poor choices."
On Linux, PAM is responsible for authentication. There are four tasks to PAM that you can read about at https://en.wikipedia.org/wiki/Linux_PAM. This section talks about the password task.
When there is a need to set or change an account password, the password task of PAM handles the request. In this section we will tell PAM's password task to pass the requested new password to libpam-pwquality to make sure it meets our requirements. If the requirements are met, the password is used/set; if they are not met, it errors and lets the user know.
Install libpam-pwquality.
On Debian based systems:
sudo apt install libpam-pwquality
Make a backup of PAM's password configuration file /etc/pam.d/common-password
:
sudo cp --archive /etc/pam.d/common-password /etc/pam.d/common-password-COPY-$(date +"%Y%m%d%H%M%S")
Tell PAM to use libpam-pwquality to enforce strong passwords by editing the file /etc/pam.d/common-password
and change the line that starts like this:
password requisite pam_pwquality.so
to this:
password requisite pam_pwquality.so retry=3 minlen=10 difok=3 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1 maxrepeat=3 gecoscheck

The above options are:

- retry=3 = prompt user 3 times before returning with error.
- minlen=10 = the minimum length of the password, factoring in any credits (or debits) from these:
  - dcredit=-1 = must have at least one digit
  - ucredit=-1 = must have at least one upper case letter
  - lcredit=-1 = must have at least one lower case letter
  - ocredit=-1 = must have at least one non-alphanumeric character
- difok=3 = at least 3 characters from the new password cannot have been in the old password
- maxrepeat=3 = allow a maximum of 3 repeated characters
- gecoscheck = do not allow passwords with the account's name

sudo sed -i -r -e "s/^(password\s+requisite\s+pam_pwquality.so)(.*)$/# \1\2 # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")\n\1 retry=3 minlen=10 difok=3 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1 maxrepeat=3 gecoscheck # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")/" /etc/pam.d/common-password
It is important to keep a server updated with the latest critical security patches and updates. Otherwise you're at risk of known security vulnerabilities that bad-actors could use to gain unauthorized access to your server.
Unless you plan on checking your server every day, you'll want a way to automatically update the system and/or get emails about available updates.
You don't want to do all updates because with every update there is a risk of something breaking. It is important to do the critical updates but everything else can wait until you have time to do it manually.
Automatic and unattended updates may break your system and you may not be near your server to fix it. This would be especially problematic if it broke your SSH access.
How It Works
On Debian based systems you can use:

- unattended-upgrades to automatically install updates
- apt-listchanges to get details about package changes before they are installed/upgraded
- apticron to get e-mails about pending package updates
We will use unattended-upgrades to apply critical security patches. We can also apply stable updates since they've already been thoroughly tested by the Debian community.
References
/etc/apt/apt.conf.d/50unattended-upgrades
Steps
Install unattended-upgrades, apt-listchanges, and apticron:
sudo apt install unattended-upgrades apt-listchanges apticron
Now we need to configure unattended-upgrades to automatically apply the updates. This is typically done by editing the files /etc/apt/apt.conf.d/20auto-upgrades
and /etc/apt/apt.conf.d/50unattended-upgrades
that were created by the packages. However, because these files may get overwritten by a future update, we'll create a new file instead. Create the file /etc/apt/apt.conf.d/51myunattended-upgrades
and add this:
// Enable the update/upgrade script (0=disable)
APT::Periodic::Enable "1";
// Do "apt-get update" automatically every n-days (0=disable)
APT::Periodic::Update-Package-Lists "1";
// Do "apt-get upgrade --download-only" every n-days (0=disable)
APT::Periodic::Download-Upgradeable-Packages "1";
// Do "apt-get autoclean" every n-days (0=disable)
APT::Periodic::AutocleanInterval "7";
// Send report mail to root
// 0: no report (or null string)
// 1: progress report (actually any string)
// 2: + command outputs (remove -qq, remove 2>/dev/null, add -d)
// 3: + trace on
APT::Periodic::Verbose "2";
APT::Periodic::Unattended-Upgrade "1";
// Automatically upgrade packages from these
Unattended-Upgrade::Origins-Pattern {
"o=Debian,a=stable";
"o=Debian,a=stable-updates";
"origin=Debian,codename=${distro_codename},label=Debian-Security";
};
// You can specify your own packages to NOT automatically upgrade here
Unattended-Upgrade::Package-Blacklist {
};
// Run dpkg --force-confold --configure -a if an unclean dpkg state is detected, to ensure that updates get installed even when the system got interrupted during a previous run
Unattended-Upgrade::AutoFixInterruptedDpkg "true";
// Perform the upgrade when the machine is running because we won't be shutting our server down often
Unattended-Upgrade::InstallOnShutdown "false";
// Send an email to this address with information about the packages upgraded.
Unattended-Upgrade::Mail "root";
// Always send an e-mail
Unattended-Upgrade::MailOnlyOnError "false";
// Remove all unused dependencies after the upgrade has finished
Unattended-Upgrade::Remove-Unused-Dependencies "true";
// Remove any new unused dependencies after the upgrade has finished
Unattended-Upgrade::Remove-New-Unused-Dependencies "true";
// Automatically reboot WITHOUT CONFIRMATION if the file /var/run/reboot-required is found after the upgrade.
Unattended-Upgrade::Automatic-Reboot "true";
// Automatically reboot even if users are logged in.
Unattended-Upgrade::Automatic-Reboot-WithUsers "true";
Notes:

- Check /usr/lib/apt/apt.systemd.daily for details on the APT::Periodic options.
- Check /etc/apt/apt.conf.d/50unattended-upgrades for details on the Unattended-Upgrade options.
optionsRun a dry-run of unattended-upgrades to make sure your configuration file is okay:
sudo unattended-upgrade -d --dry-run
If everything is okay, you can let it run whenever it's scheduled to or force a run with unattended-upgrade -d
.
Configure apt-listchanges to your liking:
sudo dpkg-reconfigure apt-listchanges
For apticron, the default settings are good enough but you can check them in /etc/apticron/apticron.conf
if you want to change them. For example, my configuration looks like this:
EMAIL="root"
NOTIFY_NO_UPDATES="1"
WIP
WIP
WIP
Install rng-tools.
On Debian based systems:
sudo apt-get install rng-tools
Now we need to set the hardware device used to generate random numbers by adding this to /etc/default/rng-tools
:
HRNGDEVICE=/dev/urandom
echo "HRNGDEVICE=/dev/urandom" | sudo tee -a /etc/default/rng-tools
Restart the service:
sudo systemctl stop rng-tools.service
sudo systemctl start rng-tools.service
Test randomness:
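rngtest ships with rng-tools; one quick check is to run a batch of FIPS 140-2 tests against the kernel's random output:

# run 1000 FIPS 140-2 test blocks; only a handful of failures is normal
cat /dev/random | rngtest -c 1000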
Call me paranoid, and you don't have to agree, but I want to deny all traffic in and out of my server except what I explicitly allow. Why would my server be sending traffic out that I don't know about? And why would external traffic be trying to access my server if I don't know who or what it is? When it comes to good security, my opinion is to reject/deny by default, and allow by exception.
Of course, if you disagree, that is totally fine and you can configure UFW to suit your needs.
Either way, ensuring that only traffic we explicitly allow is permitted is the job of a firewall.
The Linux kernel provides capabilities to monitor and control network traffic. These capabilities are exposed to the end-user through firewall utilities. On Linux, the most common firewall is iptables. However, iptables is rather complicated and confusing (IMHO). This is where UFW comes in. Think of UFW as a front-end to iptables. It simplifies the process of managing the iptables rules that tell the Linux kernel what to do with network traffic.
UFW works by letting you configure rules that allow or deny traffic, incoming or outgoing, based on things like ports and addresses.
You can create rules by explicitly specifying the ports or with application configurations that specify the ports.
Install ufw.
On Debian based systems:
sudo apt install ufw
Deny all outgoing traffic:
sudo ufw default deny outgoing comment 'deny all outgoing traffic'
Default outgoing policy changed to 'deny' (be sure to update your rules accordingly)
If you are not as paranoid as me, and don't want to deny all outgoing traffic, you can allow it instead:
sudo ufw default allow outgoing comment 'allow all outgoing traffic'
Deny all incoming traffic:
sudo ufw default deny incoming comment 'deny all incoming traffic'
Obviously we want SSH connections in:
sudo ufw limit in ssh comment 'allow SSH connections in'
Rules updated Rules updated (v6)
Allow additional traffic as per your needs. Some common use-cases:
# allow traffic out on port 53 -- DNS
sudo ufw allow out 53 comment 'allow DNS calls out'
# allow traffic out on port 123 -- NTP
sudo ufw allow out 123 comment 'allow NTP out'
# allow traffic out for HTTP, HTTPS, or FTP
# apt might need these depending on which sources you're using
sudo ufw allow out http comment 'allow HTTP traffic out'
sudo ufw allow out https comment 'allow HTTPS traffic out'
sudo ufw allow out ftp comment 'allow FTP traffic out'
# allow whois
sudo ufw allow out whois comment 'allow whois'
# allow traffic out on ports 67/68 -- the DHCP client
# you only need this if you're using DHCP
sudo ufw allow out 67 comment 'allow the DHCP client to update'
sudo ufw allow out 68 comment 'allow the DHCP client to update'
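If you later want to remove a rule, delete it by number:

# show rules with their numbers
sudo ufw status numbered
# delete rule number 2 (use whatever number the listing shows)
sudo ufw delete 2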
Start ufw:
sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y Firewall is active and enabled on system startup
If you want to see a status:
sudo ufw status
Status: active To Action From -- ------ ---- 22/tcp LIMIT Anywhere # allow SSH connections in 22/tcp (v6) LIMIT Anywhere (v6) # allow SSH connections in 53 ALLOW OUT Anywhere # allow DNS calls out 123 ALLOW OUT Anywhere # allow NTP out 80/tcp ALLOW OUT Anywhere # allow HTTP traffic out 443/tcp ALLOW OUT Anywhere # allow HTTPS traffic out 21/tcp ALLOW OUT Anywhere # allow FTP traffic out Mail submission ALLOW OUT Anywhere # allow mail out 43/tcp ALLOW OUT Anywhere # allow whois 53 (v6) ALLOW OUT Anywhere (v6) # allow DNS calls out 123 (v6) ALLOW OUT Anywhere (v6) # allow NTP out 80/tcp (v6) ALLOW OUT Anywhere (v6) # allow HTTP traffic out 443/tcp (v6) ALLOW OUT Anywhere (v6) # allow HTTPS traffic out 21/tcp (v6) ALLOW OUT Anywhere (v6) # allow FTP traffic out Mail submission (v6) ALLOW OUT Anywhere (v6) # allow mail out 43/tcp (v6) ALLOW OUT Anywhere (v6) # allow whois
or
sudo ufw status verbose
Status: active Logging: on (low) Default: deny (incoming), deny (outgoing), disabled (routed) New profiles: skip To Action From -- ------ ---- 22/tcp LIMIT IN Anywhere # allow SSH connections in 22/tcp (v6) LIMIT IN Anywhere (v6) # allow SSH connections in 53 ALLOW OUT Anywhere # allow DNS calls out 123 ALLOW OUT Anywhere # allow NTP out 80/tcp ALLOW OUT Anywhere # allow HTTP traffic out 443/tcp ALLOW OUT Anywhere # allow HTTPS traffic out 21/tcp ALLOW OUT Anywhere # allow FTP traffic out 587/tcp (Mail submission) ALLOW OUT Anywhere # allow mail out 43/tcp ALLOW OUT Anywhere # allow whois 53 (v6) ALLOW OUT Anywhere (v6) # allow DNS calls out 123 (v6) ALLOW OUT Anywhere (v6) # allow NTP out 80/tcp (v6) ALLOW OUT Anywhere (v6) # allow HTTP traffic out 443/tcp (v6) ALLOW OUT Anywhere (v6) # allow HTTPS traffic out 21/tcp (v6) ALLOW OUT Anywhere (v6) # allow FTP traffic out 587/tcp (Mail submission (v6)) ALLOW OUT Anywhere (v6) # allow mail out 43/tcp (v6) ALLOW OUT Anywhere (v6) # allow whois
ufw ships with some default applications. You can see them with:
sudo ufw app list
Available applications: AIM Bonjour CIFS DNS Deluge IMAP IMAPS IPP KTorrent Kerberos Admin Kerberos Full Kerberos KDC Kerberos Password LDAP LDAPS LPD MSN MSN SSL Mail submission NFS OpenSSH POP3 POP3S PeopleNearby SMTP SSH Socks Telnet Transmission Transparent Proxy VNC WWW WWW Cache WWW Full WWW Secure XMPP Yahoo qBittorrent svnserve
To get details about the app, like which ports it includes, type:
sudo ufw app info [app name]
sudo ufw app info DNS
Profile: DNS Title: Internet Domain Name Server Description: Internet Domain Name Server Port: 53
If you don't want to create rules by explicitly providing the port number(s), you can create your own application configurations. To do this, create a file in /etc/ufw/applications.d
.
For example, here is what you would use for Plex:
cat /etc/ufw/applications.d/plexmediaserver
[PlexMediaServer]
title=Plex Media Server
description=This opens up PlexMediaServer for http (32400), upnp, and autodiscovery.
ports=32469/tcp|32413/udp|1900/udp|32400/tcp|32412/udp|32410/udp|32414/udp|32400/udp
Then you can enable it like any other app:
sudo ufw allow plexmediaserver
Even if you have a firewall to guard your doors, it is possible to try brute-forcing your way in any of the guarded doors. We want to monitor all network activity to detect potential intrusion attempts, such as repeated attempts to get in, and block them.
I can't explain it any better than user FINESEC from https://serverfault.com/ did at: https://serverfault.com/a/447604/289829.
Fail2BAN scans log files of various applications such as apache, ssh or ftp and automatically bans IPs that show the malicious signs such as automated login attempts. PSAD on the other hand scans iptables and ip6tables log messages (typically /var/log/messages) to detect and optionally block scans and other types of suspect traffic such as DDoS or OS fingerprinting attempts. It's ok to use both programs at the same time because they operate on different level.
And, since we're already using UFW, we'll follow the awesome instructions by netson at https://gist.github.com/netson/c45b2dc4e835761fbccc to make PSAD work with UFW.
psadwatchd.

Install psad.
On Debian based systems:
sudo apt install psad
Make a backup of psad's configuration file /etc/psad/psad.conf
:
sudo cp --archive /etc/psad/psad.conf /etc/psad/psad.conf-COPY-$(date +"%Y%m%d%H%M%S")
Review and update configuration options in /etc/psad/psad.conf
. Pay special attention to these:
| Setting | Set To |
|---|---|
| EMAIL_ADDRESSES | your email address(es) |
| HOSTNAME | your server's hostname |
| ENABLE_PSADWATCHD | ENABLE_PSADWATCHD Y; |
| ENABLE_AUTO_IDS | ENABLE_AUTO_IDS Y; |
| ENABLE_AUTO_IDS_EMAILS | ENABLE_AUTO_IDS_EMAILS Y; |
| EXPECT_TCP_OPTIONS | EXPECT_TCP_OPTIONS Y; |
Check psad's configuration documentation at http://www.cipherdyne.org/psad/docs/config.html for more details.
Now we need to make some changes to ufw so it works with psad by telling ufw to log all traffic so psad can analyze it. Do this by editing the two files below and adding the logging lines (sketched after the file list) at the end, but before the COMMIT line.
Make backups:
sudo cp --archive /etc/ufw/before.rules /etc/ufw/before.rules-COPY-$(date +"%Y%m%d%H%M%S")
sudo cp --archive /etc/ufw/before6.rules /etc/ufw/before6.rules-COPY-$(date +"%Y%m%d%H%M%S")
Edit the files:
/etc/ufw/before.rules
/etc/ufw/before6.rules
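The exact lines to add are in netson's gist linked above; a minimal version that logs everything for psad to analyze looks like this (psad's default FW_SEARCH_ALL setting will pick these messages up):

# log all traffic so psad can analyze it -- add before the COMMIT line
-A INPUT -j LOG
-A FORWARD -j LOG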
Now we need to reload/restart ufw and psad for the changes to take effect:
sudo ufw reload
sudo psad -R
sudo psad --sig-update
sudo psad -H
Analyze iptables rules for errors:
sudo psad --fw-analyze
[+] Parsing INPUT chain rules. [+] Parsing INPUT chain rules. [+] Firewall config looks good. [+] Completed check of firewall ruleset. [+] Results in /var/log/psad/fw_check [+] Exiting.
Note: If there were any issues you will get an e-mail with the error.
Check the status of psad:
sudo psad --Status
[-] psad: pid file /var/run/psad/psadwatchd.pid does not exist for psadwatchd on vm [+] psad_fw_read (pid: 3444) %CPU: 0.0 %MEM: 2.2 Running since: Sat Feb 16 01:03:09 2019 [+] psad (pid: 3435) %CPU: 0.2 %MEM: 2.7 Running since: Sat Feb 16 01:03:09 2019 Command line arguments: [none specified] Alert email address(es): root@localhost [+] Version: psad v2.4.3 [+] Top 50 signature matches: [NONE] [+] Top 25 attackers: [NONE] [+] Top 20 scanned ports: [NONE] [+] iptables log prefix counters: [NONE] Total protocol packet counters: [+] IP Status Detail: [NONE] Total scan sources: 0 Total scan destinations: 0 [+] These results are available in: /var/log/psad/status.out
UFW tells your server what doors to board up so nobody can see them, and what doors to allow authorized users through. PSAD monitors network activity to detect and prevent potential intrusions -- repeated attempts to get in.
But what about the applications/services your server is running, like SSH and Apache, where your firewall is configured to allow access in? Even though access may be allowed, that doesn't mean all access attempts are valid and harmless. What if someone tries to brute-force their way in to a web-app you're running on your server? This is where Fail2ban comes in.
Fail2ban monitors the logs of your applications (like SSH and Apache) to detect and prevent potential intrusions. It will monitor network traffic/logs and prevent intrusions by blocking suspicious activity (e.g. multiple successive failed connections in a short time-span).
Install fail2ban.
On Debian based systems:
sudo apt install fail2ban
We don't want to edit /etc/fail2ban/fail2ban.conf
or /etc/fail2ban/jail.conf
because a future update may overwrite those so we'll create a local copy instead. Create the file /etc/fail2ban/jail.local
and add this to it after replacing [LAN SEGMENT]
and [your email]
with the appropriate values:
[DEFAULT]
# the IP address range we want to ignore
ignoreip = 127.0.0.1/8 [LAN SEGMENT]
# who to send e-mail to
destemail = [your e-mail]
# who is the email from
sender = [your e-mail]
# since we're using exim4 to send emails
mta = mail
# get email alerts
action = %(action_mwl)s
Note: Your server will need to be able to send e-mails so Fail2ban can let you know of suspicious activity and when it banned an IP.
We need to create a jail for SSH that tells fail2ban to look at SSH logs and use ufw to ban/unban IPs as needed. Create a jail for SSH by creating the file /etc/fail2ban/jail.d/ssh.local
and adding this to it:
[sshd]
enabled = true
banaction = ufw
port = ssh
filter = sshd
logpath = %(sshd_log)s
maxretry = 5
cat << EOF | sudo tee /etc/fail2ban/jail.d/ssh.local
[sshd]
enabled = true
banaction = ufw
port = ssh
filter = sshd
logpath = %(sshd_log)s
maxretry = 5
EOF
In the above we tell fail2ban to use ufw as the banaction. Fail2ban ships with an action configuration file for ufw. You can see it in /etc/fail2ban/action.d/ufw.conf.
Enable fail2ban:
sudo fail2ban-client start
sudo fail2ban-client reload
sudo fail2ban-client add sshd # This may fail on some systems if the sshd jail was added by default
To check the status:
sudo fail2ban-client status
Status |- Number of jail: 1 `- Jail list: sshd
sudo fail2ban-client status sshd
Status for the jail: sshd |- Filter | |- Currently failed: 0 | |- Total failed: 0 | `- File list: /var/log/auth.log `- Actions |- Currently banned: 0 |- Total banned: 0 `- Banned IP list:
I have not needed to create a custom jail yet. Once I do, and I figure out how, I will update this guide. Or, if you know how please help contribute.
To unban an IP use this command:
fail2ban-client set [jail] unbanip [IP]
where [jail] is the name of the jail that has the banned IP and [IP] is the IP address you want to unban. For example, to unban 192.168.1.100 from SSH you would do:
fail2ban-client set sshd unbanip 192.168.1.100
WIP
WIP
WIP
Install AIDE.
On Debian based systems:
sudo apt install aide
Make a backup of AIDE's defaults file:
sudo cp -p /etc/default/aide /etc/default/aide-COPY-$(date +"%Y%m%d%H%M%S")
Go through /etc/default/aide and set AIDE's defaults per your requirements. If you want AIDE to run daily and e-mail you, be sure to set CRON_DAILY_RUN to yes.
Make a backup of AIDE's configuration files:
sudo cp -pr /etc/aide /etc/aide-COPY-$(date +"%Y%m%d%H%M%S")
On Debian based systems:

- AIDE's configuration files are in /etc/aide/aide.conf.d/.
- To change how AIDE behaves, edit /etc/aide/aide.conf or the files in /etc/aide/aide.conf.d/.
- If you do make changes, take a backup first: sudo cp -pr /etc/aide /etc/aide-COPY-$(date +"%Y%m%d%H%M%S").

Create a new database, and install it.
On Debian based systems:
sudo aideinit
Running aide --init... Start timestamp: 2019-04-01 21:23:37 -0400 (AIDE 0.16) AIDE initialized database at /var/lib/aide/aide.db.new Verbose level: 6 Number of entries: 25973 --------------------------------------------------- The attributes of the (uncompressed) database(s): --------------------------------------------------- /var/lib/aide/aide.db.new RMD160 : moyQ1YskQQbidX+Lusv3g2wf1gQ= TIGER : 7WoOgCrXzSpDrlO6I3PyXPj1gRiaMSeo SHA256 : gVx8Fp7r3800WF2aeXl+/KHCzfGsNi7O g16VTPpIfYQ= SHA512 : GYfa0DJwWgMLl4Goo5VFVOhu4BphXCo3 rZnk49PYztwu50XjaAvsVuTjJY5uIYrG tV+jt3ELvwFzGefq4ZBNMg== CRC32 : /cusZw== HAVAL : E/i5ceF3YTjwenBfyxHEsy9Kzu35VTf7 CPGQSW4tl14= GOST : n5Ityzxey9/1jIs7LMc08SULF1sLBFUc aMv7Oby604A= End timestamp: 2019-04-01 21:24:45 -0400 (run time: 1m 8s)
Test everything works with no changes.
On Debian based systems:
sudo aide.wrapper --check
Start timestamp: 2019-04-01 21:24:45 -0400 (AIDE 0.16) AIDE found NO differences between database and filesystem. Looks okay!! Verbose level: 6 Number of entries: 25973 --------------------------------------------------- The attributes of the (uncompressed) database(s): --------------------------------------------------- /var/lib/aide/aide.db RMD160 : moyQ1YskQQbidX+Lusv3g2wf1gQ= TIGER : 7WoOgCrXzSpDrlO6I3PyXPj1gRiaMSeo SHA256 : gVx8Fp7r3800WF2aeXl+/KHCzfGsNi7O g16VTPpIfYQ= SHA512 : GYfa0DJwWgMLl4Goo5VFVOhu4BphXCo3 rZnk49PYztwu50XjaAvsVuTjJY5uIYrG tV+jt3ELvwFzGefq4ZBNMg== CRC32 : /cusZw== HAVAL : E/i5ceF3YTjwenBfyxHEsy9Kzu35VTf7 CPGQSW4tl14= GOST : n5Ityzxey9/1jIs7LMc08SULF1sLBFUc aMv7Oby604A= End timestamp: 2019-04-01 21:26:03 -0400 (run time: 1m 18s)
Test everything works after making some changes.
On Debian based systems:
sudo touch /etc/test.sh
sudo touch /root/test.sh
sudo aide.wrapper --check
sudo rm /etc/test.sh
sudo rm /root/test.sh
sudo aideinit -y -f
Start timestamp: 2019-04-01 21:37:37 -0400 (AIDE 0.16) AIDE found differences between database and filesystem!! Verbose level: 6 Summary: Total number of entries: 25972 Added entries: 2 Removed entries: 0 Changed entries: 1 --------------------------------------------------- Added entries: --------------------------------------------------- f++++++++++++++++: /etc/test.sh f++++++++++++++++: /root/test.sh --------------------------------------------------- Changed entries: --------------------------------------------------- d =.... mc.. .. .: /root --------------------------------------------------- Detailed information about changes: --------------------------------------------------- Directory: /root Mtime : 2019-04-01 21:35:07 -0400 | 2019-04-01 21:37:36 -0400 Ctime : 2019-04-01 21:35:07 -0400 | 2019-04-01 21:37:36 -0400 --------------------------------------------------- The attributes of the (uncompressed) database(s): --------------------------------------------------- /var/lib/aide/aide.db RMD160 : qF9WmKaf2PptjKnhcr9z4ueCPTY= TIGER : zMo7MvvYJcq1hzvTQLPMW7ALeFiyEqv+ SHA256 : LSLLVjjV6r8vlSxlbAbbEsPcQUB48SgP pdVqEn6ZNbQ= SHA512 : Qc4U7+ZAWCcitapGhJ1IrXCLGCf1IKZl 02KYL1gaZ0Fm4dc7xLqjiquWDMSEbwzW oz49NCquqGz5jpMIUy7UxA== CRC32 : z8ChEA== HAVAL : YapzS+/cdDwLj3kHJEq8fufLp3DPKZDg U12KCSkrO7Y= GOST : 74sLV4HkTig+GJhokvxZQm7CJD/NR0mG 6jV7zdt5AXQ= End timestamp: 2019-04-01 21:38:50 -0400 (run time: 1m 13s)
That's it. If you set CRON_DAILY_RUN to yes in /etc/default/aide, then cron will execute /etc/cron.daily/aide every day and e-mail you the output.
Every time you make changes to files/folders that AIDE monitors, you will need to update the database to capture those changes. To do that on Debian based systems:
sudo aideinit -y -f
WIP

- a clamd process running to make scanning faster

WIP

- clamd is running all the time. clamd is only needed if you're running a mail server and does not provide real-time monitoring of files. Instead, you'd want to scan files manually or on a schedule.

Install ClamAV.
On Debian based systems:
sudo apt install clamav clamav-freshclam clamav-daemon
Make a backup of clamav-freshclam's configuration file /etc/clamav/freshclam.conf:
sudo cp --archive /etc/clamav/freshclam.conf /etc/clamav/freshclam.conf-COPY-$(date +"%Y%m%d%H%M%S")
clamav-freshclam's default settings are probably good enough, but if you want to change them you can either edit the file /etc/clamav/freshclam.conf or use dpkg-reconfigure:
sudo dpkg-reconfigure clamav-freshclam
Note: The default settings will update the definitions 24 times in a day. To change the interval, check the Checks setting in /etc/clamav/freshclam.conf or use dpkg-reconfigure.
Start the clamav-freshclam service:
sudo service clamav-freshclam start
You can make sure clamav-freshclam is running:
sudo service clamav-freshclam status
● clamav-freshclam.service - ClamAV virus database updater Loaded: loaded (/lib/systemd/system/clamav-freshclam.service; enabled; vendor preset: enabled) Active: active (running) since Sat 2019-03-16 22:57:07 EDT; 2min 13s ago Docs: man:freshclam(1) man:freshclam.conf(5) https://www.clamav.net/documents Main PID: 1288 (freshclam) CGroup: /system.slice/clamav-freshclam.service └─1288 /usr/bin/freshclam -d --foreground=true Mar 16 22:57:08 host freshclam[1288]: Sat Mar 16 22:57:08 2019 -> ^Local version: 0.100.2 Recommended version: 0.101.1 Mar 16 22:57:08 host freshclam[1288]: Sat Mar 16 22:57:08 2019 -> DON'T PANIC! Read https://www.clamav.net/documents/upgrading-clamav Mar 16 22:57:15 host freshclam[1288]: Sat Mar 16 22:57:15 2019 -> Downloading main.cvd [100%] Mar 16 22:57:38 host freshclam[1288]: Sat Mar 16 22:57:38 2019 -> main.cvd updated (version: 58, sigs: 4566249, f-level: 60, builder: sigmgr) Mar 16 22:57:40 host freshclam[1288]: Sat Mar 16 22:57:40 2019 -> Downloading daily.cvd [100%] Mar 16 22:58:13 host freshclam[1288]: Sat Mar 16 22:58:13 2019 -> daily.cvd updated (version: 25390, sigs: 1520006, f-level: 63, builder: raynman) Mar 16 22:58:14 host freshclam[1288]: Sat Mar 16 22:58:14 2019 -> Downloading bytecode.cvd [100%] Mar 16 22:58:16 host freshclam[1288]: Sat Mar 16 22:58:16 2019 -> bytecode.cvd updated (version: 328, sigs: 94, f-level: 63, builder: neo) Mar 16 22:58:24 host freshclam[1288]: Sat Mar 16 22:58:24 2019 -> Database updated (6086349 signatures) from db.local.clamav.net (IP: 104.16.219.84) Mar 16 22:58:24 host freshclam[1288]: Sat Mar 16 22:58:24 2019 -> ^Clamd was NOT notified: Can't connect to clamd through /var/run/clamav/clamd.ctl: No such file or directory
Note: Don't worry about that Local version line. Check https://serverfault.com/questions/741299/is-there-a-way-to-keep-clamav-updated-on-debian-8 for more details.
Make a backup of clamav-daemon's configuration file /etc/clamav/clamd.conf:
sudo cp --archive /etc/clamav/clamd.conf /etc/clamav/clamd.conf-COPY-$(date +"%Y%m%d%H%M%S")
You can change clamav-daemon's settings by editing the file /etc/clamav/clamd.conf or using dpkg-reconfigure:
sudo dpkg-reconfigure clamav-daemon
Notes:
- To scan files/folders for viruses, use the clamscan program.
- clamscan runs as the user it is executed as, so it needs read permissions to the files/folders it is scanning.
- Running clamscan as root is dangerous because if a file is in fact a virus there is a risk it could use the root privileges.
- To scan a file: clamscan /path/to/file.
- To scan a folder recursively: clamscan -r /path/to/folder.
- Use the -i switch to only print infected files.
- Check clamscan's man pages for other switches/options.
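If you want a recurring scan, a cron entry is enough. Here is a minimal sketch that scans /home nightly as root and logs only infected files; the file name, schedule, target folder, and log path are assumptions you should adjust:

```bash
# /etc/cron.d/clamscan-home (hypothetical file name)
# nightly recursive scan of /home at 02:00, logging only infected files
0 2 * * * root /usr/bin/clamscan --recursive --infected --log=/var/log/clamav/manual-scan.log /home
```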
WIP
Install Rkhunter.
On Debian based systems:
sudo apt install rkhunter
Make a backup of rkhunter's defaults file:
sudo cp -p /etc/default/rkhunter /etc/default/rkhunter-COPY-$(date +"%Y%m%d%H%M%S")
rkhunter's configuration file is /etc/rkhunter.conf. Instead of making changes to it, create and use the file /etc/rkhunter.conf.local:
sudo cp -p /etc/rkhunter.conf /etc/rkhunter.conf.local
Go through the configuration file /etc/rkhunter.conf.local and set the options to your requirements. My recommendations:
Setting | Note |
---|---|
UPDATE_MIRRORS=1 | |
MIRRORS_MODE=0 | |
MAIL-ON-WARNING=root | |
COPY_LOG_ON_ERROR=1 | to save a copy of the log if there is an error |
PKGMGR=... | set to the appropriate value per the documentation |
PHALANX2_DIRTEST=1 | read the documentation for why |
WEB_CMD="" | this is to address an issue with the Debian package that disables the ability for rkhunter to self-update. |
USE_LOCKING=1 | to prevent issues with rkhunter running multiple times |
SHOW_SUMMARY_WARNINGS_NUMBER=1 | to see the actual number of warnings found |
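For example, to apply one of these (the Debian WEB_CMD fix) without opening an editor, a sed one-liner in the spirit of this guide works. This is a sketch for that single setting, assuming the stock Debian line WEB_CMD="/bin/false" is present; the same pattern works for the other settings:

```bash
# rewrite the WEB_CMD line in place so rkhunter can self-update again
sudo sed -i -r -e 's|^WEB_CMD=.*|WEB_CMD=""|' /etc/rkhunter.conf.local
```

Validate with sudo rkhunter -C afterwards (covered below).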
You want rkhunter to run every day and e-mail you the result. You can write your own script or check https://www.tecmint.com/install-rootkit-hunter-scan-for-rootkits-backdoors-in-linux/ for a sample cron script you can use.
On Debian based systems, rkhunter comes with cron scripts. To enable them, check /etc/default/rkhunter or use dpkg-reconfigure and say Yes to all of the questions:
sudo dpkg-reconfigure rkhunter
After you've finished with all of the changes, make sure all the settings are valid:
sudo rkhunter -C
Update rkhunter and its database:
sudo rkhunter --versioncheck
sudo rkhunter --update
sudo rkhunter --propupd
If you want to do a manual scan and see the output:
sudo rkhunter --check
WIP
Install chkrootkit.
On Debian based systems:
sudo apt install chkrootkit
Do a manual scan:
sudo chkrootkit
ROOTDIR is `/'
Checking `amd'... not found
Checking `basename'... not infected
Checking `biff'... not found
Checking `chfn'... not infected
Checking `chsh'... not infected
...
Checking `scalper'... not infected
Checking `slapper'... not infected
Checking `z2'... chklastlog: nothing deleted
Checking `chkutmp'... chkutmp: nothing deleted
Checking `OSX_RSPLUG'... not infected
Make a backup of chkrootkit's configuration file /etc/chkrootkit.conf:
sudo cp --archive /etc/chkrootkit.conf /etc/chkrootkit.conf-COPY-$(date +"%Y%m%d%H%M%S")
You want chkrootkit to run every day and e-mail you the result.
On Debian based systems, chkrootkit comes with cron scripts. To enable them, check /etc/chkrootkit.conf or use dpkg-reconfigure and say Yes to the first question:
sudo dpkg-reconfigure chkrootkit
Your server will be generating a lot of logs that may contain important information. Unless you plan on checking your server every day, you'll want a way to get an e-mail summary of your server's logs. To accomplish this we'll use logwatch.
logwatch scans system log files and summarizes them. You can run it directly from the command line or schedule it to run on a recurring schedule. logwatch uses service files to know how to read/summarize a log file. You can see all of the stock service files in /usr/share/logwatch/scripts/services.
logwatch's configuration file /usr/share/logwatch/default.conf/logwatch.conf specifies default options. You can override them via command line arguments.
Note: If you schedule logwatch on a different recurrence window, make sure to set the range option to cover your recurrence window. See https://www.badpenguin.org/configure-logwatch-for-weekly-email-and-html-output-format for an example.

Install logwatch.
On Debian based systems:
sudo apt install logwatch
To see a sample of what logwatch collects you can run it directly:
sudo /usr/sbin/logwatch --output stdout --format text --range yesterday --service all
################### Logwatch 7.4.3 (12/07/16) ####################
       Processing Initiated: Mon Mar  4 00:05:50 2019
       Date Range Processed: yesterday
                             ( 2019-Mar-03 )
                             Period is day.
       Detail Level of Output: 5
       Type of Output/Format: stdout / text
       Logfiles for Host: host
##################################################################

--------------------- Cron Begin ------------------------
...
...
---------------------- Disk Space End -------------------------

###################### Logwatch End #########################
Go through logwatch's self-documented configuration file /usr/share/logwatch/default.conf/logwatch.conf before continuing. There is no need to change anything here, but pay special attention to the Output, Format, MailTo, Range, and Service options as those are the ones we'll be using. For our purposes, instead of specifying our options in the configuration file, we will pass them as command line arguments in the daily cron job that executes logwatch. That way, if the configuration file is ever modified (e.g. during an update), our options will still be there.
Make a backup of logwatch's daily cron file /etc/cron.daily/00logwatch and unset the execute bit:
sudo cp --archive /etc/cron.daily/00logwatch /etc/cron.daily/00logwatch-COPY-$(date +"%Y%m%d%H%M%S")
sudo chmod -x /etc/cron.daily/00logwatch-COPY*
By default, logwatch outputs to stdout. Since the goal is to get a daily e-mail, we need to change the output type logwatch uses so it sends e-mail instead. We could do this through the configuration file above, but that would apply every time it is run -- even when we run it manually and want the output on the screen. Instead, we'll change the cron job that executes logwatch to send e-mail. This way, when run manually we'll still get output to stdout, and when run by cron it'll send an e-mail. We'll also make sure it checks all services and change the output format to html, regardless of what the configuration file says, so it's easier to read. In the file /etc/cron.daily/00logwatch find the execute line and change it to:
/usr/sbin/logwatch --output mail --format html --mailto root --range yesterday --service all
#!/bin/bash

#Check if removed-but-not-purged
test -x /usr/share/logwatch/scripts/logwatch.pl || exit 0

#execute
/usr/sbin/logwatch --output mail --format html --mailto root --range yesterday --service all

#Note: It's possible to force the recipient in above command
#Just pass --mailto address@a.com instead of --output mail
sudo sed -i -r -e "s,^($(sudo which logwatch).*?),# \1 # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")\n$(sudo which logwatch) --output mail --format html --mailto root --range yesterday --service all # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")," /etc/cron.daily/00logwatch
You can test the cron job by executing it:
sudo /etc/cron.daily/00logwatch
Note: If logwatch fails to deliver mail due to the e-mail having long lines, please check https://blog.dhampir.no/content/exim4-line-length-in-debian-stretch-mail-delivery-failed-returning-message-to-sender as documented in issue #29. If you followed Gmail and Exim4 As MTA With Implicit TLS then we already took care of this in step #7.
Ports are how applications, services, and processes communicate with each other -- either locally within your server or with other devices on the network. When you have an application or service (like SSH or Apache) running on your server, they listen for requests on specific ports.
Obviously we don't want your server listening on ports we don't know about. We'll use ss to see all the ports that services are listening on. This will help us track down and stop rogue, potentially dangerous, services.
man ss
To see all the ports listening for traffic:
sudo ss -lntup
Netid  State    Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
udp    UNCONN   0       0       *:68                *:*      users:(("dhclient",pid=389,fd=6))
tcp    LISTEN   0       128     *:22                *:*      users:(("sshd",pid=4390,fd=3))
tcp    LISTEN   0       128     :::22               :::*     users:(("sshd",pid=4390,fd=4))
Switch Explanations:
- l = display listening sockets
- n = do not try to resolve service names
- t = display TCP sockets
- u = display UDP sockets
- p = show process information

If you see anything suspicious, like a port you're not aware of or a process you don't know, investigate and remediate as necessary.
From https://cisofy.com/lynis/:
Lynis is a battle-tested security tool for systems running Linux, macOS, or Unix-based operating system. It performs an extensive health scan of your systems to support system hardening and compliance testing.
Install lynis. https://cisofy.com/lynis/#installation has detailed instructions on how to install it for your distribution.
On Debian based systems, using CISOFY's community software repository:
sudo apt install apt-transport-https ca-certificates host
sudo wget -O - https://packages.cisofy.com/keys/cisofy-software-public.key | sudo apt-key add -
sudo echo "deb https://packages.cisofy.com/community/lynis/deb/ stable main" | sudo tee /etc/apt/sources.list.d/cisofy-lynis.list
sudo apt update
sudo apt install lynis host
Update it:
sudo lynis update info
Run a security audit:
sudo lynis audit system
This will scan your server, report its audit findings, and at the end it will give you suggestions. Spend some time going through the output and address gaps as necessary.
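Lynis also writes its findings to a report file you can search afterwards. Assuming the default report location on Debian, a quick way to pull out only the warnings and suggestions:

```bash
sudo grep -E '^(warning|suggestion)\[\]' /var/log/lynis-report.dat
```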
From https://github.com/ossec/ossec-hids:
OSSEC is a full platform to monitor and control your systems. It mixes together all the aspects of HIDS (host-based intrusion detection), log monitoring and SIM/SIEM together in a simple, powerful and open source solution.
Install OSSEC-HIDS from source:
sudo apt install libz-dev libssl-dev libpcre2-dev build-essential
wget https://github.com/ossec/ossec-hids/archive/3.6.0.tar.gz
tar xzf 3.6.0.tar.gz
cd ossec-hids-3.6.0/
sudo ./install.sh
Useful commands:
Agent information
sudo /var/ossec/bin/agent_control -i <AGENT_ID>
AGENT_ID is 000 by default; to be sure, you can list the agents with sudo /var/ossec/bin/agent_control -l.
Run integrity/rootkit checking
By default, OSSEC runs a rootkit check every 2 hours.
sudo /var/ossec/bin/agent_control -u <AGENT_ID> -r
Alerts
sudo tail -f /var/ossec/logs/alerts/alerts.log
sudo cat /var/ossec/logs/alerts/alerts.log | grep -A4 -i integrity
sudo cat /var/ossec/logs/alerts/alerts.log | grep -A4 "rootcheck,"
This section covers things that are high risk because there is a possibility they can make your system unusable, or that are considered unnecessary by many because the risks outweigh any rewards.
!! PROCEED AT YOUR OWN RISK !!
The kernel is the brains of a Linux system. Securing it just makes sense.
Changing kernel settings with sysctl is risky and could break your server. If you don't know what you are doing, don't have the time to debug issues, or just don't want to take the risks, I would advise against following these steps.
I am not as knowledgeable about hardening/securing a Linux kernel as I'd like. As much as I hate to admit it, I do not know what all of these settings do. My understanding is that most of them are general kernel hardening and performance, and the others are to protect against spoofing and DOS attacks.
In fact, since I am not 100% sure exactly what each setting does, I took recommended settings from numerous sites (all linked in the references below) and combined them to figure out what should be set. I figure if multiple reputable sites mention the same setting, it's probably safe.
If you have a better understanding of what these settings do, or have any other feedback/advice on them, please let me know.
I won't provide "For the lazy" code in this section.
The sysctl settings can be found in the linux-kernel-sysctl-hardening.md file in this repo.
Before you make a kernel sysctl change permanent, you can test it with the sysctl command:
sudo sysctl -w [key=value]
Example:
sudo sysctl -w kernel.ctrl-alt-del=0
Note: There are no spaces in key=value, including before and after the equal sign.
Once you have tested a setting, and made sure it works without breaking your server, you can make it permanent by adding the values to /etc/sysctl.conf. For example:
$ sudo cat /etc/sysctl.conf
kernel.ctrl-alt-del = 0
fs.file-max = 65535
...
kernel.sysrq = 0
After updating the file you can reload the settings or reboot. To reload:
sudo sysctl -p
Note: If sysctl has trouble writing any settings then sysctl -w or sysctl -p will write an error to stderr. You can use this to quickly find invalid settings in your /etc/sysctl.conf file:
sudo sysctl -p >/dev/null
!! PROCEED AT YOUR OWN RISK !!
If a bad actor has physical access to your server, they could use GRUB to gain unauthorized access to your system.
If you forget the password, you'll have to go through some work to recover the password.
man grub
man grub-mkpasswd-pbkdf2
Create a Password-Based Key Derivation Function 2 (PBKDF2) hash of your password:
grub-mkpasswd-pbkdf2 -c 100000
The below output is from using password as the password:
Enter password:
Reenter password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.100000.2812C233DFC899EFC3D5991D8CA74068C99D6D786A54F603E9A1EFE7BAEDDB6AA89672F92589FAF98DB9364143E7A1156C9936328971A02A483A84C3D028C4FF.C255442F9C98E1F3C500C373FE195DCF16C56EEBDC55ABDD332DD36A92865FA8FC4C90433757D743776AB186BD3AE5580F63EF445472CC1D151FA03906D08A6D
Copy everything after PBKDF2 hash of your password is, starting from and including grub.pbkdf2.sha512..., to the end. You'll need this in the next step.
The update-grub program uses scripts to generate the configuration files it will use for GRUB's settings. Create the file /etc/grub.d/01_password and add the below code after replacing [hash] with the hash you copied from the first step. This tells update-grub to use this username and password for GRUB.
#!/bin/sh
set -e
cat << EOF
set superusers="grub"
password_pbkdf2 grub [hash]
EOF
For example:
#!/bin/sh
set -e
cat << EOF
set superusers="grub"
password_pbkdf2 grub grub.pbkdf2.sha512.100000.2812C233DFC899EFC3D5991D8CA74068C99D6D786A54F603E9A1EFE7BAEDDB6AA89672F92589FAF98DB9364143E7A1156C9936328971A02A483A84C3D028C4FF.C255442F9C98E1F3C500C373FE195DCF16C56EEBDC55ABDD332DD36A92865FA8FC4C90433757D743776AB186BD3AE5580F63EF445472CC1D151FA03906D08A6D
EOF
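For the lazy, you can write the whole file in one shot; [hash] is still a placeholder you must replace with your own PBKDF2 hash:

```bash
cat << 'OUTER' | sudo tee /etc/grub.d/01_password
#!/bin/sh
set -e
cat << EOF
set superusers="grub"
password_pbkdf2 grub [hash]
EOF
OUTER
```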
Set the file's execute bit so update-grub includes it when it updates GRUB's configuration:
sudo chmod a+x /etc/grub.d/01_password
Make a backup of GRUB's configuration file /etc/grub.d/10_linux that we'll be modifying, and unset the execute bit on the backup so update-grub doesn't try to run it:
sudo cp --archive /etc/grub.d/10_linux /etc/grub.d/10_linux-COPY-$(date +"%Y%m%d%H%M%S")
sudo chmod a-x /etc/grub.d/10_linux-COPY*
To make the default Debian install unrestricted (bootable without the password) while keeping everything else restricted (requiring the password), modify /etc/grub.d/10_linux and add --unrestricted to the CLASS variable.
sudo sed -i -r -e "/^CLASS=/ a CLASS=\"\${CLASS} --unrestricted\" # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" /etc/grub.d/10_linux
Update GRUB with update-grub
:
sudo update-grub
!! PROCEED AT YOUR OWN RISK !!
If you have sudo configured properly, then the root account will rarely, if ever, need to log in directly -- either at the terminal or remotely.
Be warned, this can cause issues with some configurations!
If your installation uses sulogin (like Debian does) to drop to a root console during boot failures, then locking the root account will prevent sulogin from opening the root shell and you will get this error:
Cannot open access to console, the root account is locked.
See sulogin(8) man page for more details.
Press Enter to continue.
To work around this, you can use the --force option for sulogin. Some distributions already include this, or some other, workaround.
An alternative to locking the root account is to set a long/complicated root password and store it in a secured, non-digital format. That way you have it when/if you need it.
man systemd
Lock the root account:
sudo passwd -l root
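If locking root causes problems, undoing it is just as simple (you may also need to set a new password with sudo passwd root afterwards):

```bash
sudo passwd -u root
```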
!! PROCEED AT YOUR OWN RISK !!
umask controls the default permissions of files/folders when they are created. Insecure file/folder permissions give other accounts potentially unauthorized access to your data. This may include the ability to make configuration changes.
When and if other accounts need access to a file/folder, you want to explicitly grant it using a combination of file/folder permissions and primary group.
Changing the default umask can create unexpected problems. For example, if you set umask to 0077 for root, then non-root accounts will not have access to application configuration files/folders in /etc/, which could break applications that do not run with root privileges.
In order to explain how umask works I'd have to explain how Linux file/folder permissions work. As that is a rather complicated topic, I will defer you to the references below for further reading.
man umask
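That said, a quick demonstration makes the effect concrete: umask bits are removed from the defaults (666 for files, 777 for folders), so 0027 yields 640 files and 750 folders. A harmless demo you can run in /tmp:

```bash
# run in a subshell so the umask change doesn't stick to your session
(
  umask 0027
  touch /tmp/umask-demo-file && mkdir /tmp/umask-demo-dir
  ls -ld /tmp/umask-demo-file /tmp/umask-demo-dir   # expect -rw-r----- and drwxr-x---
  rm -r /tmp/umask-demo-file /tmp/umask-demo-dir
)
```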
Make a backup of files we'll be editing:
sudo cp --archive /etc/profile /etc/profile-COPY-$(date +"%Y%m%d%H%M%S")
sudo cp --archive /etc/bash.bashrc /etc/bash.bashrc-COPY-$(date +"%Y%m%d%H%M%S")
sudo cp --archive /etc/login.defs /etc/login.defs-COPY-$(date +"%Y%m%d%H%M%S")
sudo cp --archive /root/.bashrc /root/.bashrc-COPY-$(date +"%Y%m%d%H%M%S")
Set the default umask for non-root accounts to 0027 by adding this line to /etc/profile and /etc/bash.bashrc:
umask 0027
echo -e "\numask 0027 # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/profile /etc/bash.bashrc
We also need to add this line to /etc/login.defs:
UMASK 0027
echo -e "\nUMASK 0027 # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /etc/login.defs
Set the default umask for the root account to 0077 by adding this line to /root/.bashrc:
umask 0077
echo -e "\numask 0077 # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")" | sudo tee -a /root/.bashrc
!! PROCEED AT YOUR OWN RISK !!
As you use your system, and you install and uninstall software, you'll eventually end up with orphaned, or unused software/packages/libraries. You don't need to remove them, but if you don't need them, why keep them? When security is a priority, anything not explicitly needed is a potential security threat. You want to keep your server as trimmed and lean as possible.
On Debian based systems, you can use deborphan to find orphaned packages.
Why Not

Keep in mind, deborphan finds packages that no other package depends on. That does not mean they are not used. You could very well have a package you use every day that nothing else depends on and that you wouldn't want to remove. And, if deborphan gets anything wrong, removing critical packages may break your system.
Steps
Install deborphan.
sudo apt install deborphan
Run deborphan as root to see a list of orphaned packages:
sudo deborphan
libxapian30 libpipeline1
Assuming you want to remove all of the packages deborphan finds, you can pass its output to apt to remove them:
sudo apt --autoremove purge $(deborphan)
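Given the warning above, you may want to review each orphan before purging anything. A sketch:

```bash
# show the name and description of every orphan so you can decide what stays
for pkg in $(deborphan); do
  apt-cache show "$pkg" | grep -E '^(Package|Description)'
done
```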
Here is a simplified method to send outgoing e-mail only, using a Gmail account (or another provider), via msmtp. True simple! :)
``` bash
#!/bin/bash
###### PLEASE EDIT THESE VALUES FIRST
USRMAIL="usernameemail"
DOMPROV="gmail.com"
PWDEMAIL="passwordStrong" ## ATTENTION: DON'T USE special chars like SPACE or # -- not all of them are safe. Feel free to test ;)
MAILPROV="smtp.gmail.com" # your provider's SMTP server
MAILPORT="587"            # submission port (STARTTLS)
MYMAIL="$USRMAIL@$DOMPROV"
#######

# install msmtp and let it act as the system's sendmail
apt install -y msmtp
ln -s /usr/bin/msmtp /usr/sbin/sendmail

# write msmtp's configuration for the root user
cat <<EOF > /root/.msmtprc
defaults
account gmail
host $MAILPROV
port $MAILPORT
from $MYMAIL
timeout off
protocol smtp
auth on
user $MYMAIL
# read the password from a GPG-encrypted file instead of storing it in plain text
passwordeval "gpg -q --for-your-eyes-only --no-tty -d /root/msmtp-mail.gpg"
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
tls_certcheck on
logfile /var/log/mail.log
syslog on
account default : gmail
EOF
chmod 0400 /root/.msmtprc

# generate a GPG key, a revocation certificate, and encrypt the mail password with it
gpg --full-generate-key
gpg --output revoke.asc --gen-revoke "$MYMAIL"
echo -e "$PWDEMAIL\n" | gpg -e -o /root/msmtp-mail.gpg --recipient "$MYMAIL"
echo "export GPG_TTY=\$(tty)" >> /root/.bashrc
chmod 400 /root/msmtp-mail.gpg

# send a test e-mail
echo "Hello there" | msmtp --debug "$MYMAIL"
echo "######################
## MSMTP Configured ##
######################"
```
DONE!! ;)
Unless you're planning on setting up your own mail server, you'll need a way to send e-mails from your server. This will be important for system alerts/messages.
You can use any Gmail account. I recommend you create one specific for this server. That way if your server is compromised, the bad-actor won't have any passwords for your primary account. Granted, if you have 2FA/MFA enabled and you use an app password, there isn't much a bad-actor can do with just the app password, but why take the risk?
There are many guides on-line that cover how to configure Gmail as MTA using STARTTLS including a previous version of this guide. With STARTTLS, an initial unencrypted connection is made and then upgraded to an encrypted TLS or SSL connection. Instead, with the approach outlined below, an encrypted TLS connection is made from the start.
Also, as discussed in issue #29 and here, exim4 will fail for messages with long lines. We'll fix this in this section too.
Goal: mail configured to send e-mails from your server using Gmail.

Install exim4. You will also need openssl and ca-certificates.
On Debian based systems:
sudo apt install exim4 openssl ca-certificates
Configure exim4:
For Debian based systems:
sudo dpkg-reconfigure exim4-config
You'll be prompted with some questions:
Prompt | Answer |
---|---|
General type of mail configuration | mail sent by smarthost; no local mail |
System mail name | localhost |
IP-addresses to listen on for incoming SMTP connections | 127.0.0.1; ::1 |
Other destinations for which mail is accepted | (default) |
Visible domain name for local users | localhost |
IP address or host name of the outgoing smarthost | smtp.gmail.com::465 |
Keep number of DNS-queries minimal (Dial-on-Demand)? | No |
Split configuration into small files? | No |
Make a backup of /etc/exim4/passwd.client:
sudo cp --archive /etc/exim4/passwd.client /etc/exim4/passwd.client-COPY-$(date +"%Y%m%d%H%M%S")
Add lines like this to /etc/exim4/passwd.client:

Notes:
- Replace yourAccount@gmail.com and yourPassword with your details. If you have 2FA/MFA enabled on your Gmail then you'll need to create and use an app password here.
- Check host smtp.gmail.com for the most up-to-date domains to list.

smtp.gmail.com:yourAccount@gmail.com:yourPassword
*.google.com:yourAccount@gmail.com:yourPassword
This file has your Gmail password so we need to lock it down:
sudo chown root:Debian-exim /etc/exim4/passwd.client
sudo chmod 640 /etc/exim4/passwd.client
The next step is to create a TLS certificate that exim4 will use to make the encrypted connection to smtp.gmail.com. You can use your own certificate, like one from Let's Encrypt, or create one yourself using openssl. We will use a script that comes with exim4 that calls openssl to make our certificate:
sudo bash /usr/share/doc/exim4-base/examples/exim-gencert
[*] Creating a self signed SSL certificate for Exim!
    This may be sufficient to establish encrypted connections but for
    secure identification you need to buy a real certificate!
    Please enter the hostname of your MTA at the Common Name (CN) prompt!
Generating a RSA private key
..........................................+++++
................................................+++++
writing new private key to '/etc/exim4/exim.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Code (2 letters) [US]:[redacted]
State or Province Name (full name) []:[redacted]
Locality Name (eg, city) []:[redacted]
Organization Name (eg, company; recommended) []:[redacted]
Organizational Unit Name (eg, section) []:[redacted]
Server name (eg. ssl.domain.tld; required!!!) []:localhost
Email Address []:[redacted]
[*] Done generating self signed certificates for exim!
    Refer to the documentation and example configuration files
    over at /usr/share/doc/exim4-base/ for an idea on how to enable TLS
    support in your mail transfer agent.
Instruct exim4 to use TLS and port 465, and fix exim4's long lines issue, by creating the file /etc/exim4/exim4.conf.localmacros and adding:
MAIN_TLS_ENABLE = 1
REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS = *
TLS_ON_CONNECT_PORTS = 465
REQUIRE_PROTOCOL = smtps
IGNORE_SMTP_LINE_LENGTH_LIMIT = true
cat << EOF | sudo tee /etc/exim4/exim4.conf.localmacros
MAIN_TLS_ENABLE = 1
REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS = *
TLS_ON_CONNECT_PORTS = 465
REQUIRE_PROTOCOL = smtps
IGNORE_SMTP_LINE_LENGTH_LIMIT = true
EOF
Make a backup of exim4's configuration file /etc/exim4/exim4.conf.template:
sudo cp --archive /etc/exim4/exim4.conf.template /etc/exim4/exim4.conf.template-COPY-$(date +"%Y%m%d%H%M%S")
Add the below to /etc/exim4/exim4.conf.template after the .ifdef REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS ... .endif block:
.ifdef REQUIRE_PROTOCOL
protocol = REQUIRE_PROTOCOL
.endif
.ifdef REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS
  hosts_require_tls = REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS
.endif
.ifdef REQUIRE_PROTOCOL
  protocol = REQUIRE_PROTOCOL
.endif
.ifdef REMOTE_SMTP_HEADERS_REWRITE
  headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE
.endif
sudo sed -i -r -e '/^.ifdef REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS$/I { :a; n; /^.endif$/!ba; a\# added by '"$(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")"'\n.ifdef REQUIRE_PROTOCOL\n protocol = REQUIRE_PROTOCOL\n.endif\n# end add' -e '}' /etc/exim4/exim4.conf.template
.ifdef REQUIRE_PROTOCOL
protocol = REQUIRE_PROTOCOL
.endif
Add the below to /etc/exim4/exim4.conf.template inside the .ifdef MAIN_TLS_ENABLE block:
.ifdef MAIN_TLS_ENABLE
.ifdef TLS_ON_CONNECT_PORTS
  tls_on_connect_ports = TLS_ON_CONNECT_PORTS
.endif
sudo sed -i -r -e "/\.ifdef MAIN_TLS_ENABLE/ a # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")\n.ifdef TLS_ON_CONNECT_PORTS\n tls_on_connect_ports = TLS_ON_CONNECT_PORTS\n.endif\n# end add" /etc/exim4/exim4.conf.template
.ifdef TLS_ON_CONNECT_PORTS
tls_on_connect_ports = TLS_ON_CONNECT_PORTS
.endif
Update exim4 configuration to use TLS and then restart the service:
sudo update-exim4.conf
sudo service exim4 restart
If you're using UFW, you'll need to allow outbound traffic on 465. To do this we'll create a custom UFW application profile and then enable it. Create the file /etc/ufw/applications.d/smtptls, add this, then run ufw allow out smtptls comment 'open TLS port 465 for use with SMTP to send e-mails':
cat << EOF | sudo tee /etc/ufw/applications.d/smtptls
[SMTPTLS]
title=SMTP through TLS
description=This opens up the TLS port 465 for use with SMTP to send e-mails.
ports=465/tcp
EOF
sudo ufw allow out smtptls comment 'open TLS port 465 for use with SMTP to send e-mails'
[SMTPTLS]
title=SMTP through TLS
description=This opens up the TLS port 465 for use with SMTP to send e-mails.
ports=465/tcp
Add some mail aliases so we can send e-mails to local accounts by adding lines like this to /etc/aliases:

You'll need to add all the local accounts that exist on your server.
user1: user1@gmail.com
user2: user2@gmail.com
...
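For the lazy (user1 and the address are placeholders for your own accounts):

```bash
echo "user1: user1@gmail.com" | sudo tee -a /etc/aliases
```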
Test your setup:
echo "test" | mail -s "Test" email@gmail.com
sudo tail /var/log/exim4/mainlog
There will come a time when you'll need to look through your iptables logs. Having all the iptables logs go to their own file will make it a lot easier to find what you're looking for.
The first step is telling your firewall to prefix all log entries with some unique string. If you're using iptables directly, you would do something like --log-prefix "[IPTABLES] " for all the rules. We took care of this in step 4 of installing psad.
After you've added a prefix to the firewall logs, we need to tell rsyslog to send those lines to their own file. Do this by creating the file /etc/rsyslog.d/10-iptables.conf and adding this:
:msg, contains, "[IPTABLES] " /var/log/iptables.log
& stop
If you're expecting a lot of data being logged by your firewall, prefix the filename with a - "to omit syncing the file after every logging". For example:
:msg, contains, "[IPTABLES] " -/var/log/iptables.log
& stop
Note: Remember to change the prefix to whatever you use.
cat << EOF | sudo tee /etc/rsyslog.d/10-iptables.conf
:msg, contains, "[IPTABLES] " /var/log/iptables.log
& stop
EOF
Since we're logging firewall messages to a different file, we need to tell psad where the new file is. Edit /etc/psad/psad.conf and set IPT_SYSLOG_FILE to the path of the log file. For example:
sudo sed -i -r -e "s/^(IPT_SYSLOG_FILE\s+)([^;]+)(;)$/# \1\2\3 # commented by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")\n\1\/var\/log\/iptables.log\3 # added by $(whoami) on $(date +"%Y-%m-%d @ %H:%M:%S")/" /etc/psad/psad.conf
IPT_SYSLOG_FILE /var/log/iptables.log;
Restart psad and rsyslog to activate the changes (or reboot):
sudo psad -R
sudo psad --sig-update
sudo psad -H
sudo service rsyslog restart
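You can verify the rsyslog rule without waiting for real firewall traffic by injecting a test message through syslog (remember to match your own prefix):

```bash
logger '[IPTABLES] rsyslog routing test'
sudo tail -n 1 /var/log/iptables.log
```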
The last thing we have to do is tell logrotate to rotate the new log file so it doesn't get too big and fill up our disk. Create the file /etc/logrotate.d/iptables and add this:
cat << EOF | sudo tee /etc/logrotate.d/iptables
/var/log/iptables.log
{
rotate 7
daily
missingok
notifempty
delaycompress
compress
postrotate
invoke-rc.d rsyslog rotate > /dev/null
endscript
}
EOF
/var/log/iptables.log
{
rotate 7
daily
missingok
notifempty
delaycompress
compress
postrotate
invoke-rc.d rsyslog rotate > /dev/null
endscript
}
For any questions, comments, concerns, feedback, or issues, submit a new issue.
Download Details:
Author: imthenachoman
Source Code: https://github.com/imthenachoman/How-To-Secure-A-Linux-Server
License: CC-BY-SA-4.0 License
_ _ _____
| | | |/ ____|
| | __ _ _ __ __ ___ _____| | (___
| | / _` | '__/ _` \ \ / / _ \ |\___ \
| |___| (_| | | | (_| |\ V / __/ |____) |
|______\__,_|_| \__,_| \_/ \___|_|_____/
🚀 LaravelS is an out-of-the-box adapter between Swoole and Laravel/Lumen.

Please Watch this repository to get the latest updates.
Built-in Http/WebSocket server
Memory resident
Gracefully reload
Automatically reload after modifying code
Support Laravel/Lumen both, good compatibility
Simple & Out of the box
Which is the fastest web framework?
TechEmpower Framework Benchmarks
Dependency | Requirement |
---|---|
PHP | >= 5.5.9 Recommend PHP7+ |
Swoole | >= 1.7.19 No longer support PHP5 since 2.0.12 Recommend 4.5.0+ |
Laravel/Lumen | >= 5.1 Recommend 8.0+ |
1.Require package via Composer(packagist).
composer require "hhxsv5/laravel-s:~3.7.0" -vvv
# Make sure that your composer.lock file is under the VCS
2.Register service provider(pick one of two).
Laravel
: in config/app.php
file, Laravel 5.5+ supports package discovery automatically, you should skip this step
'providers' => [
//...
Hhxsv5\LaravelS\Illuminate\LaravelSServiceProvider::class,
],
Lumen
: in bootstrap/app.php
file
$app->register(Hhxsv5\LaravelS\Illuminate\LaravelSServiceProvider::class);
3.Publish configuration and binaries.
After upgrading LaravelS, you need to republish; click here to see the change notes of each version.
php artisan laravels publish
# Configuration: config/laravels.php
# Binary: bin/laravels bin/fswatch bin/inotify
4.Change config/laravels.php: listen_ip, listen_port, refer Settings.
5.Performance tuning

Number of Workers: LaravelS uses Swoole's Synchronous IO mode; the larger the worker_num setting, the better the concurrency performance, but it will cause more memory usage and process-switching overhead. If one request takes 100ms, then in order to provide 1000 QPS of concurrency, at least 100 Worker processes need to be configured: worker_num = 1000 QPS × 0.1s = 100. Incremental pressure testing is needed to find the best worker_num.
Please read the notices carefully before running: Important notices(IMPORTANT).

php bin/laravels {start|stop|restart|reload|info|help}

Command | Description |
---|---|
start | Start LaravelS, list the processes by "ps -ef|grep laravels" |
stop | Stop LaravelS, and trigger the method onStop of Custom process |
restart | Restart LaravelS: Stop gracefully before starting; The service is unavailable until startup is complete |
reload | Reload all Task/Worker/Timer processes which contain your business codes, and trigger the method onReload of Custom process, CANNOT reload Master/Manger processes. After modifying config/laravels.php , you only have to call restart to restart |
info | Display component version information |
help | Display help information |
The following options are available for start and restart.

Option | Description |
---|---|
-d|--daemonize | Run as a daemon, this option will override the swoole.daemonize setting in laravels.php |
-e|--env | The environment the command should run under, such as --env=testing will use the configuration file .env.testing firstly, this feature requires Laravel 5.2+ |
-i|--ignore | Ignore checking PID file of Master process |
-x|--x-version | The version(branch) of the current project, stored in $_ENV/$_SERVER, access via $_ENV['X_VERSION'] $_SERVER['X_VERSION'] $request->server->get('X_VERSION') |
Runtime files: start will automatically execute php artisan laravels config and generate these files. Developers generally don't need to pay attention to them; it's recommended to add them to .gitignore.

File | Description |
---|---|
storage/laravels.conf | LaravelS's runtime configuration file |
storage/laravels.pid | PID file of Master process |
storage/laravels-timer-process.pid | PID file of the Timer process |
storage/laravels-custom-processes.pid | PID file of all custom processes |
It is recommended to supervise the main process through Supervisord; the premise is to start without the option -d and to set swoole.daemonize to false.
[program:laravel-s-test]
directory=/var/www/laravel-s-test
command=/usr/local/bin/php bin/laravels start -i
numprocs=1
autostart=true
autorestart=true
startretries=3
user=www-data
redirect_stderr=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log
Demo.
gzip on;
gzip_min_length 1024;
gzip_comp_level 2;
gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml application/x-httpd-php image/jpeg image/gif image/png font/ttf font/otf image/svg+xml;
gzip_vary on;
gzip_disable "msie6";
upstream swoole {
# Connect IP:Port
server 127.0.0.1:5200 weight=5 max_fails=3 fail_timeout=30s;
# Connect UnixSocket Stream file, tips: put the socket file in the /dev/shm directory to get better performance
#server unix:/yourpath/laravel-s-test/storage/laravels.sock weight=5 max_fails=3 fail_timeout=30s;
#server 192.168.1.1:5200 weight=3 max_fails=3 fail_timeout=30s;
#server 192.168.1.2:5200 backup;
keepalive 16;
}
server {
listen 80;
# Don't forget to bind the host
server_name laravels.com;
root /yourpath/laravel-s-test/public;
access_log /yourpath/log/nginx/$server_name.access.log main;
autoindex off;
index index.html index.htm;
# Nginx handles the static resources(recommend enabling gzip), LaravelS handles the dynamic resource.
location / {
try_files $uri @laravels;
}
# Response 404 directly when request the PHP file, to avoid exposing public/*.php
#location ~* \.php$ {
# return 404;
#}
location @laravels {
# proxy_connect_timeout 60s;
# proxy_send_timeout 60s;
# proxy_read_timeout 120s;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Real-PORT $remote_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header Scheme $scheme;
proxy_set_header Server-Protocol $server_protocol;
proxy_set_header Server-Name $server_name;
proxy_set_header Server-Addr $server_addr;
proxy_set_header Server-Port $server_port;
# "swoole" is the upstream
proxy_pass http://swoole;
}
}
LoadModule proxy_module /yourpath/modules/mod_proxy.so
LoadModule proxy_balancer_module /yourpath/modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module /yourpath/modules/mod_lbmethod_byrequests.so
LoadModule proxy_http_module /yourpath/modules/mod_proxy_http.so
LoadModule slotmem_shm_module /yourpath/modules/mod_slotmem_shm.so
LoadModule rewrite_module /yourpath/modules/mod_rewrite.so
LoadModule remoteip_module /yourpath/modules/mod_remoteip.so
LoadModule deflate_module /yourpath/modules/mod_deflate.so
<IfModule deflate_module>
SetOutputFilter DEFLATE
DeflateCompressionLevel 2
AddOutputFilterByType DEFLATE text/html text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml application/x-httpd-php image/jpeg image/gif image/png font/ttf font/otf image/svg+xml
</IfModule>
<VirtualHost *:80>
# Don't forget to bind the host
ServerName www.laravels.com
ServerAdmin hhxsv5@sina.com
DocumentRoot /yourpath/laravel-s-test/public
DirectoryIndex index.html index.htm
<Directory "/">
AllowOverride None
Require all granted
</Directory>
RemoteIPHeader X-Forwarded-For
ProxyRequests Off
ProxyPreserveHost On
<Proxy balancer://laravels>
BalancerMember http://192.168.1.1:5200 loadfactor=7
#BalancerMember http://192.168.1.2:5200 loadfactor=3
#BalancerMember http://192.168.1.3:5200 loadfactor=1 status=+H
ProxySet lbmethod=byrequests
</Proxy>
#ProxyPass / balancer://laravels/
#ProxyPassReverse / balancer://laravels/
# Apache handles the static resources, LaravelS handles the dynamic resource.
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
RewriteRule ^/(.*)$ balancer://laravels%{REQUEST_URI} [P,L]
ErrorLog ${APACHE_LOG_DIR}/www.laravels.com.error.log
CustomLog ${APACHE_LOG_DIR}/www.laravels.com.access.log combined
</VirtualHost>
The listening address of the WebSocket Server is the same as the Http Server's.

1.Create WebSocket Handler class, and implement interface WebSocketHandlerInterface. The instance is automatically created at startup; you do not need to create it manually.
namespace App\Services;
use Hhxsv5\LaravelS\Swoole\WebSocketHandlerInterface;
use Swoole\Http\Request;
use Swoole\Http\Response;
use Swoole\WebSocket\Frame;
use Swoole\WebSocket\Server;
/**
* @see https://www.swoole.co.uk/docs/modules/swoole-websocket-server
*/
class WebSocketService implements WebSocketHandlerInterface
{
// Declare constructor without parameters
public function __construct()
{
}
// public function onHandShake(Request $request, Response $response)
// {
// Custom handshake: https://www.swoole.co.uk/docs/modules/swoole-websocket-server-on-handshake
// The onOpen event will be triggered automatically after a successful handshake
// }
public function onOpen(Server $server, Request $request)
{
// Before the onOpen event is triggered, the HTTP request to establish the WebSocket has passed the Laravel route,
// so Laravel's Request, Auth information are readable, Session is readable and writable, but only in the onOpen event.
// \Log::info('New WebSocket connection', [$request->fd, request()->all(), session()->getId(), session('xxx'), session(['yyy' => time()])]);
// The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
$server->push($request->fd, 'Welcome to LaravelS');
}
public function onMessage(Server $server, Frame $frame)
{
// \Log::info('Received message', [$frame->fd, $frame->data, $frame->opcode, $frame->finish]);
// The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
$server->push($frame->fd, date('Y-m-d H:i:s'));
}
public function onClose(Server $server, $fd, $reactorId)
{
// The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
}
}
2.Modify config/laravels.php
.
// ...
'websocket' => [
'enable' => true, // Note: set enable to true
'handler' => \App\Services\WebSocketService::class,
],
'swoole' => [
//...
// Must set dispatch_mode in (2, 4, 5), see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
'dispatch_mode' => 2,
//...
],
// ...
3.Use SwooleTable to bind FD & UserId, optional, Swoole Table Demo. Also, you can use other global storage services, like Redis/Memcached/MySQL, but be careful that FDs may conflict between multiple Swoole Servers.
4.Cooperate with Nginx (Recommended)
Refer WebSocket Proxy
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream swoole {
# Connect IP:Port
server 127.0.0.1:5200 weight=5 max_fails=3 fail_timeout=30s;
# Connect UnixSocket Stream file, tips: put the socket file in the /dev/shm directory to get better performance
#server unix:/yourpath/laravel-s-test/storage/laravels.sock weight=5 max_fails=3 fail_timeout=30s;
#server 192.168.1.1:5200 weight=3 max_fails=3 fail_timeout=30s;
#server 192.168.1.2:5200 backup;
keepalive 16;
}
server {
listen 80;
# Don't forget to bind the host
server_name laravels.com;
root /yourpath/laravel-s-test/public;
access_log /yourpath/log/nginx/$server_name.access.log main;
autoindex off;
index index.html index.htm;
# Nginx handles the static resources(recommend enabling gzip), LaravelS handles the dynamic resource.
location / {
try_files $uri @laravels;
}
# Response 404 directly when request the PHP file, to avoid exposing public/*.php
#location ~* \.php$ {
# return 404;
#}
# Http and WebSocket are concomitant, Nginx identifies them by "location"
# !!! The location of WebSocket is "/ws"
# Javascript: var ws = new WebSocket("ws://laravels.com/ws");
location =/ws {
# proxy_connect_timeout 60s;
# proxy_send_timeout 60s;
# proxy_read_timeout: Nginx will close the connection if the proxied server does not send data to Nginx in 60 seconds; At the same time, this close behavior is also affected by heartbeat setting of Swoole.
# proxy_read_timeout 60s;
proxy_http_version 1.1;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Real-PORT $remote_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header Scheme $scheme;
proxy_set_header Server-Protocol $server_protocol;
proxy_set_header Server-Name $server_name;
proxy_set_header Server-Addr $server_addr;
proxy_set_header Server-Port $server_port;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_pass http://swoole;
}
location @laravels {
# proxy_connect_timeout 60s;
# proxy_send_timeout 60s;
# proxy_read_timeout 60s;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Real-PORT $remote_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header Scheme $scheme;
proxy_set_header Server-Protocol $server_protocol;
proxy_set_header Server-Name $server_name;
proxy_set_header Server-Addr $server_addr;
proxy_set_header Server-Port $server_port;
proxy_pass http://swoole;
}
}
5.Heartbeat setting
Heartbeat setting of Swoole
// config/laravels.php
'swoole' => [
//...
// All connections are traversed every 60 seconds. If a connection does not send any data to the server within 600 seconds, the connection will be forced to close.
'heartbeat_idle_time' => 600,
'heartbeat_check_interval' => 60,
//...
],
Proxy read timeout of Nginx
# Nginx will close the connection if the proxied server does not send data to Nginx in 60 seconds
proxy_read_timeout 60s;
6.Push data in controller
namespace App\Http\Controllers;
class TestController extends Controller
{
public function push()
{
$fd = 1; // Find fd by userId from a map [userId=>fd].
/**@var \Swoole\WebSocket\Server $swoole */
$swoole = app('swoole');
$success = $swoole->push($fd, 'Push data to fd#1 in Controller');
var_dump($success);
}
}
Usually, you can reset/destroy some global/static variables, or change the current Request/Response object.
laravels.received_request
After LaravelS parses Swoole\Http\Request into Illuminate\Http\Request, before Laravel's Kernel handles this request.
// Edit file `app/Providers/EventServiceProvider.php`, add the following code into method `boot`
// If no variable $events, you can also call Facade \Event::listen().
$events->listen('laravels.received_request', function (\Illuminate\Http\Request $req, $app) {
$req->query->set('get_key', 'hhxsv5');// Change query of request
$req->request->set('post_key', 'hhxsv5'); // Change post of request
});
laravels.generated_response
After Laravel's Kernel handles the request, before LaravelS parses Illuminate\Http\Response into Swoole\Http\Response.
// Edit file `app/Providers/EventServiceProvider.php`, add the following code into method `boot`
// If no variable $events, you can also call Facade \Event::listen().
$events->listen('laravels.generated_response', function (\Illuminate\Http\Request $req, \Symfony\Component\HttpFoundation\Response $rsp, $app) {
$rsp->headers->set('header-key', 'hhxsv5');// Change header of response
});
This feature depends on AsyncTask of Swoole; you need to set swoole.task_worker_num in config/laravels.php first. The performance of asynchronous event processing is influenced by the number of Swoole task processes, so you need to set task_worker_num appropriately.
1.Create event class.
use Hhxsv5\LaravelS\Swoole\Task\Event;
class TestEvent extends Event
{
protected $listeners = [
// Listener list
TestListener1::class,
// TestListener2::class,
];
private $data;
public function __construct($data)
{
$this->data = $data;
}
public function getData()
{
return $this->data;
}
}
2.Create listener class.
use Hhxsv5\LaravelS\Swoole\Task\Task;
use Hhxsv5\LaravelS\Swoole\Task\Listener;
class TestListener1 extends Listener
{
/**
* @var TestEvent
*/
protected $event;
public function handle()
{
\Log::info(__CLASS__ . ':handle start', [$this->event->getData()]);
sleep(2);// Simulate the slow codes
// Deliver task in CronJob, but NOT support callback finish() of task.
// Note: Modify task_ipc_mode to 1 or 2 in config/laravels.php, see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
$ret = Task::deliver(new TestTask('task data'));
var_dump($ret);
// The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
}
}
3.Fire event.
// Create instance of event and fire it, "fire" is asynchronous.
use Hhxsv5\LaravelS\Swoole\Task\Event;
$event = new TestEvent('event data');
// $event->delay(10); // Delay 10 seconds to fire event
// $event->setTries(3); // When an error occurs, try 3 times in total
$success = Event::fire($event);
var_dump($success);// Return true if success, otherwise false
This feature depends on AsyncTask of Swoole; you need to set swoole.task_worker_num in config/laravels.php first. The performance of task processing is influenced by the number of Swoole task processes, so you need to set task_worker_num appropriately.
1.Create task class.
use Hhxsv5\LaravelS\Swoole\Task\Task;
class TestTask extends Task
{
private $data;
private $result;
public function __construct($data)
{
$this->data = $data;
}
// The logic of task handling, run in task process, CAN NOT deliver task
public function handle()
{
\Log::info(__CLASS__ . ':handle start', [$this->data]);
sleep(2);// Simulate the slow codes
// The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
$this->result = 'the result of ' . $this->data;
}
// Optional, finish event, the logic of after task handling, run in worker process, CAN deliver task
public function finish()
{
\Log::info(__CLASS__ . ':finish start', [$this->result]);
Task::deliver(new TestTask2('task2 data')); // Deliver the other task
}
}
2.Deliver task.
// Create instance of TestTask and deliver it, "deliver" is asynchronous.
use Hhxsv5\LaravelS\Swoole\Task\Task;
$task = new TestTask('task data');
// $task->delay(3);// delay 3 seconds to deliver task
// $task->setTries(3); // When an error occurs, try 3 times in total
$ret = Task::deliver($task);
var_dump($ret);// Return true if success, otherwise false
A cron job wrapper based on Swoole's Millisecond Timer, replacing Linux Crontab.
1.Create cron job class.
namespace App\Jobs\Timer;
use App\Tasks\TestTask;
use Swoole\Coroutine;
use Hhxsv5\LaravelS\Swoole\Task\Task;
use Hhxsv5\LaravelS\Swoole\Timer\CronJob;
class TestCronJob extends CronJob
{
protected $i = 0;
// !!! The `interval` and `isImmediate` of cron job can be configured in two ways(pick one of two): one is to overload the corresponding method, and the other is to pass parameters when registering cron job.
// --- Override the corresponding method to return the configuration: begin
public function interval()
{
return 1000;// Run every 1000ms
}
public function isImmediate()
{
return false;// Whether to trigger `run` immediately after setting up
}
// --- Override the corresponding method to return the configuration: end
public function run()
{
\Log::info(__METHOD__, ['start', $this->i, microtime(true)]);
// do something
// sleep(1); // Swoole < 2.1
Coroutine::sleep(1); // Swoole>=2.1 Coroutine will be automatically created for run().
$this->i++;
\Log::info(__METHOD__, ['end', $this->i, microtime(true)]);
if ($this->i >= 10) { // Run 10 times only
\Log::info(__METHOD__, ['stop', $this->i, microtime(true)]);
$this->stop(); // Stop this cron job, but it will run again after restart/reload.
// Deliver task in CronJob, but NOT support callback finish() of task.
// Note: Modify task_ipc_mode to 1 or 2 in config/laravels.php, see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
$ret = Task::deliver(new TestTask('task data'));
var_dump($ret);
}
// The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
}
}
2.Register cron job.
// Register cron jobs in file "config/laravels.php"
[
// ...
'timer' => [
'enable' => true, // Enable Timer
'jobs' => [ // The list of cron job
// Enable LaravelScheduleJob to run `php artisan schedule:run` every 1 minute, replace Linux Crontab
// \Hhxsv5\LaravelS\Illuminate\LaravelScheduleJob::class,
// Two ways to configure parameters:
// [\App\Jobs\Timer\TestCronJob::class, [1000, true]], // Pass in parameters when registering
\App\Jobs\Timer\TestCronJob::class, // Override the corresponding method to return the configuration
],
'max_wait_time' => 5, // Max waiting time of reloading
// Enable the global lock to ensure that only one instance starts the timer when deploying multiple instances. This feature depends on Redis, please see https://laravel.com/docs/7.x/redis
'global_lock' => false,
'global_lock_key' => config('app.name', 'Laravel'),
],
// ...
];
3.Note: a server cluster will launch multiple timers, so you need to make sure only one timer is launched, to avoid running repetitive tasks.
4.LaravelS v3.4.0 starts to support the hot restart [Reload] of the Timer process. After LaravelS receives the SIGUSR1 signal, it waits max_wait_time (default 5) seconds for the process to end, then the Manager process will pull up the Timer process again.
5.If you only need minute-level scheduled tasks, it is recommended to enable Hhxsv5\LaravelS\Illuminate\LaravelScheduleJob instead of Linux Crontab, so that you can follow the coding habits of Laravel task scheduling and configure Kernel.
// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
// runInBackground() will start a new child process to execute the task. This is asynchronous and will not affect the execution timing of other tasks.
$schedule->command(TestCommand::class)->runInBackground()->everyMinute();
}
Via inotify, supports Linux only.
1.Install inotify extension.
2.Turn on the switch in Settings.
3.Notice: Modify files inside Linux only, otherwise the file change events cannot be received. It's recommended to use the latest Docker. Vagrant Solution.
Via fswatch, supports OS X/Linux/Windows.
1.Install fswatch.
2.Run command in your project root directory.
# Watch current directory
./bin/fswatch
# Watch app directory
./bin/fswatch ./app
Via inotifywait, supports Linux.
1.Install inotify-tools.
2.Run command in your project root directory.
# Watch current directory
./bin/inotify
# Watch app directory
./bin/inotify ./app
When the above methods do not work, the ultimate solution is to set max_request=1,worker_num=1, so that a Worker process restarts after processing each request. The performance of this method is very poor, so use it only in development environments.
Get the instance of SwooleServer in your project:

/**
* $swoole is the instance of `Swoole\WebSocket\Server` if enable WebSocket server, otherwise `Swoole\Http\Server`
* @var \Swoole\WebSocket\Server|\Swoole\Http\Server $swoole
*/
$swoole = app('swoole');
var_dump($swoole->stats());
$swoole->push($fd, 'Push WebSocket message');
SwooleTable

1.Define Table, support multiple.

All defined tables will be created before Swoole starts.
// in file "config/laravels.php"
[
// ...
'swoole_tables' => [
// Scene:bind UserId & FD in WebSocket
'ws' => [// The Key is table name, will add suffix "Table" to avoid naming conflicts. Here defined a table named "wsTable"
'size' => 102400,// The max size
'column' => [// Define the columns
['name' => 'value', 'type' => \Swoole\Table::TYPE_INT, 'size' => 8],
],
],
//...Define the other tables
],
// ...
];
2.Access Table: all table instances will be bound on SwooleServer, accessed via app('swoole')->xxxTable.
namespace App\Services;
use Hhxsv5\LaravelS\Swoole\WebSocketHandlerInterface;
use Swoole\Http\Request;
use Swoole\WebSocket\Frame;
use Swoole\WebSocket\Server;
class WebSocketService implements WebSocketHandlerInterface
{
/**@var \Swoole\Table $wsTable */
private $wsTable;
public function __construct()
{
$this->wsTable = app('swoole')->wsTable;
}
// Scene:bind UserId & FD in WebSocket
public function onOpen(Server $server, Request $request)
{
// var_dump(app('swoole') === $server);// The same instance
/**
* Get the currently logged in user
* This feature requires that the path to establish a WebSocket connection go through middleware such as Authenticate.
* E.g:
* Browser side: var ws = new WebSocket("ws://127.0.0.1:5200/ws");
* Then the /ws route in Laravel needs to add the middleware like Authenticate.
* Route::get('/ws', function () {
* // Respond any content with status code 200
* return 'websocket';
* })->middleware(['auth']);
*/
// $user = Auth::user();
// $userId = $user ? $user->id : 0; // 0 means a guest user who is not logged in
$userId = mt_rand(1000, 10000);
// if (!$userId) {
// // Disconnect the connections of unlogged users
// $server->disconnect($request->fd);
// return;
// }
$this->wsTable->set('uid:' . $userId, ['value' => $request->fd]);// Bind map uid to fd
$this->wsTable->set('fd:' . $request->fd, ['value' => $userId]);// Bind map fd to uid
$server->push($request->fd, "Welcome to LaravelS #{$request->fd}");
}
public function onMessage(Server $server, Frame $frame)
{
// Broadcast
foreach ($this->wsTable as $key => $row) {
if (strpos($key, 'uid:') === 0 && $server->isEstablished($row['value'])) {
$content = sprintf('Broadcast: new message "%s" from #%d', $frame->data, $frame->fd);
$server->push($row['value'], $content);
}
}
}
public function onClose(Server $server, $fd, $reactorId)
{
$uid = $this->wsTable->get('fd:' . $fd);
if ($uid !== false) {
$this->wsTable->del('uid:' . $uid['value']); // Unbind uid map
}
$this->wsTable->del('fd:' . $fd);// Unbind fd map
$server->push($fd, "Goodbye #{$fd}");
}
}
For more information, please refer to Swoole Server AddListener.

To make our main server support more protocols, not just Http and WebSocket, we bring the multi-port mixed protocol feature of Swoole into LaravelS and name it Socket. Now, you can easily build TCP/UDP applications on top of Laravel.

Create a Socket handler class, and extend Hhxsv5\LaravelS\Swoole\Socket\{TcpSocket|UdpSocket|Http|WebSocket}.
namespace App\Sockets;
use Hhxsv5\LaravelS\Swoole\Socket\TcpSocket;
use Swoole\Server;
class TestTcpSocket extends TcpSocket
{
public function onConnect(Server $server, $fd, $reactorId)
{
\Log::info('New TCP connection', [$fd]);
$server->send($fd, 'Welcome to LaravelS.');
}
public function onReceive(Server $server, $fd, $reactorId, $data)
{
\Log::info('Received data', [$fd, $data]);
$server->send($fd, 'LaravelS: ' . $data);
if ($data === "quit\r\n") {
$server->send($fd, 'LaravelS: bye' . PHP_EOL);
$server->close($fd);
}
}
public function onClose(Server $server, $fd, $reactorId)
{
\Log::info('Close TCP connection', [$fd]);
$server->send($fd, 'Goodbye');
}
}
These Socket connections share the same Worker processes with your HTTP/WebSocket connections, so it is no problem at all to deliver tasks, use SwooleTable, or even Laravel components such as DB and Eloquent. At the same time, you can access the Swoole\Server\Port object directly via the member property swoolePort.
public function onReceive(Server $server, $fd, $reactorId, $data)
{
$port = $this->swoolePort; // Get the `Swoole\Server\Port` object
}
namespace App\Http\Controllers;
class TestController extends Controller
{
public function test()
{
/**@var \Swoole\Http\Server|\Swoole\WebSocket\Server $swoole */
$swoole = app('swoole');
// $swoole->ports: Traverse all Port objects, https://www.swoole.co.uk/docs/modules/swoole-server/multiple-ports
$port = $swoole->ports[0]; // Get the `Swoole\Server\Port` object; $swoole->ports[0] is the port of the main server
foreach ($port->connections as $fd) { // Traverse all connections
// $swoole->send($fd, 'Send tcp message');
// if($swoole->isEstablished($fd)) {
// $swoole->push($fd, 'Send websocket message');
// }
}
}
}
Register Sockets.
// Edit `config/laravels.php`
//...
'sockets' => [
[
'host' => '127.0.0.1',
'port' => 5291,
'type' => SWOOLE_SOCK_TCP,// Socket type: SWOOLE_SOCK_TCP/SWOOLE_SOCK_TCP6/SWOOLE_SOCK_UDP/SWOOLE_SOCK_UDP6/SWOOLE_UNIX_DGRAM/SWOOLE_UNIX_STREAM
'settings' => [// Swoole settings:https://www.swoole.co.uk/docs/modules/swoole-server-methods#swoole_server-addlistener
'open_eof_check' => true,
'package_eof' => "\r\n",
],
'handler' => \App\Sockets\TestTcpSocket::class,
'enable' => true, // whether to enable, default true
],
],
About the heartbeat configuration: it can only be set on the main server and cannot be configured per Socket, but each Socket inherits the heartbeat configuration of the main server, as sketched below.
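A sketch of where these settings live, following the config/laravels.php layout used elsewhere in this guide; the values are only examples:
// config/laravels.php
'swoole' => [
    // Heartbeat can only be configured here, on the main server
    'heartbeat_idle_time' => 600,
    'heartbeat_check_interval' => 60,
],
'sockets' => [
    // ...each Socket listener registered here inherits the heartbeat settings above
],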
For a TCP socket, the onConnect and onClose events will be blocked when Swoole's dispatch_mode is 1/3, so if you want these two events to fire, set dispatch_mode to 2/4/5.
'swoole' => [
//...
'dispatch_mode' => 2,
//...
];
Test.
TCP: telnet 127.0.0.1 5291
UDP: [Linux] echo "Hello LaravelS" > /dev/udp/127.0.0.1/5292
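For reference before the registration examples below, a minimal sketch of a UDP handler; it assumes the UdpSocket base class exposes Swoole's standard onPacket event, and the class name matches the TestUdpSocket used in the UDP registration example:
namespace App\Sockets;
use Hhxsv5\LaravelS\Swoole\Socket\UdpSocket;
use Swoole\Server;
class TestUdpSocket extends UdpSocket
{
    public function onPacket(Server $server, $data, $clientInfo)
    {
        \Log::info('New UDP packet', [$data, $clientInfo]);
        // Reply to the address/port that sent this packet
        $server->sendto($clientInfo['address'], $clientInfo['port'], 'LaravelS: ' . $data);
    }
}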
Registration examples of other protocols.
UDP:
'sockets' => [
    [
        'host' => '0.0.0.0',
        'port' => 5292,
        'type' => SWOOLE_SOCK_UDP,
        'settings' => [
            'open_eof_check' => true,
            'package_eof' => "\r\n",
        ],
        'handler' => \App\Sockets\TestUdpSocket::class,
    ],
],
Http:
'sockets' => [
    [
        'host' => '0.0.0.0',
        'port' => 5293,
        'type' => SWOOLE_SOCK_TCP,
        'settings' => [
            'open_http_protocol' => true,
        ],
        'handler' => \App\Sockets\TestHttp::class,
    ],
],
WebSocket: you need to turn on WebSocket first, that is, set websocket.enable to true.
'sockets' => [
    [
        'host' => '0.0.0.0',
        'port' => 5294,
        'type' => SWOOLE_SOCK_TCP,
        'settings' => [
            'open_http_protocol' => true,
            'open_websocket_protocol' => true,
        ],
        'handler' => \App\Sockets\TestWebSocket::class,
    ],
],
Warning: the order of code execution in coroutines is out of order, so request-level data should be isolated by coroutine ID. However, there are many singletons and static attributes in Laravel/Lumen, so data from different requests would affect each other; it is unsafe. For example, the database connection is a singleton, and the same database connection shares the same PDO resource. This is fine in synchronous blocking mode, but it does not work in asynchronous coroutine mode: each query needs its own connection and its own IO state, which requires a connection pool.
DO NOT enable the coroutine; only custom processes can use coroutines.
Supports creating special worker processes for monitoring, reporting, or other special tasks. Refer to addProcess.
Create a Process class that implements CustomProcessInterface.
namespace App\Processes;
use App\Tasks\TestTask;
use Hhxsv5\LaravelS\Swoole\Process\CustomProcessInterface;
use Hhxsv5\LaravelS\Swoole\Task\Task;
use Swoole\Coroutine;
use Swoole\Http\Server;
use Swoole\Process;
class TestProcess implements CustomProcessInterface
{
/**
* @var bool Quit tag for Reload updates
*/
private static $quit = false;
public static function callback(Server $swoole, Process $process)
{
// The callback method cannot exit. Once exited, Manager process will automatically create the process
while (!self::$quit) {
\Log::info('Test process: running');
// sleep(1); // Swoole < 2.1
Coroutine::sleep(1); // Swoole>=2.1: Coroutine & Runtime will be automatically enabled for callback().
// Deliver task in custom process, but NOT support callback finish() of task.
// Note: Modify task_ipc_mode to 1 or 2 in config/laravels.php, see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
$ret = Task::deliver(new TestTask('task data'));
var_dump($ret);
// The upper layer will catch the exception thrown in the callback and record it in the Swoole log, and then this process will exit. The Manager process will re-create the process after 3 seconds, so developers need to try/catch to catch the exception by themselves to avoid frequent process creation.
// throw new \Exception('an exception');
}
}
// Requirements: LaravelS >= v3.4.0 & callback() must be async non-blocking program.
public static function onReload(Server $swoole, Process $process)
{
// Stop the process...
// Then end process
\Log::info('Test process: reloading');
self::$quit = true;
// $process->exit(0); // Force exit process
}
// Requirements: LaravelS >= v3.7.4 & callback() must be async non-blocking program.
public static function onStop(Server $swoole, Process $process)
{
// Stop the process...
// Then end process
\Log::info('Test process: stopping');
self::$quit = true;
// $process->exit(0); // Force exit process
}
}
Register TestProcess.
// Edit `config/laravels.php`
// ...
'processes' => [
'test' => [ // Key name is process name
'class' => \App\Processes\TestProcess::class,
'redirect' => false, // Whether redirect stdin/stdout, true or false
'pipe' => 0, // The type of pipeline, 0: no pipeline 1: SOCK_STREAM 2: SOCK_DGRAM
'enable' => true, // Whether to enable, default true
//'num' => 3 // To create multiple processes of this class, default is 1
//'queue' => [ // Enable message queue as inter-process communication, configure empty array means use default parameters
// 'msg_key' => 0, // The key of the message queue. Default: ftok(__FILE__, 1).
// 'mode' => 2, // Communication mode, default is 2, which means contention mode
// 'capacity' => 8192, // The length of a single message, is limited by the operating system kernel parameters. The default is 8192, and the maximum is 65536
//],
//'restart_interval' => 5, // After the process exits abnormally, how many seconds to wait before restarting the process, default 5 seconds
],
],
Note: callback() must not exit. If it exits, the Manager process will re-create the process.
Example: Write data to a custom process.
// config/laravels.php
'processes' => [
'test' => [
'class' => \App\Processes\TestProcess::class,
'redirect' => false,
'pipe' => 1,
],
],
// app/Processes/TestProcess.php
public static function callback(Server $swoole, Process $process)
{
while ($data = $process->read()) {
\Log::info('TestProcess: read data', [$data]);
$process->write('TestProcess: ' . $data);
}
}
// app/Http/Controllers/TestController.php
public function testProcessWrite()
{
/**@var \Swoole\Process $process */
$process = app('swoole')->customProcesses['test'];
$process->write('TestController: write data' . time());
var_dump($process->read());
}
LaravelS will pull the Apollo configuration and write it to the .env file when starting. At the same time, LaravelS will start the custom process apollo to monitor the configuration and automatically reload when the configuration changes.
Enable Apollo: add --enable-apollo
and Apollo parameters to the startup parameters.
php bin/laravels start --enable-apollo --apollo-server=http://127.0.0.1:8080 --apollo-app-id=LARAVEL-S-TEST
Supports hot updates (optional).
// Edit `config/laravels.php`
'processes' => Hhxsv5\LaravelS\Components\Apollo\Process::getDefinition(),
// When there are other custom process configurations
'processes' => [
'test' => [
'class' => \App\Processes\TestProcess::class,
'redirect' => false,
'pipe' => 1,
],
// ...
] + Hhxsv5\LaravelS\Components\Apollo\Process::getDefinition(),
List of available parameters.
Parameter | Description | Default | Demo |
---|---|---|---|
apollo-server | Apollo server URL | - | --apollo-server=http://127.0.0.1:8080 |
apollo-app-id | Apollo APP ID | - | --apollo-app-id=LARAVEL-S-TEST |
apollo-namespaces | The namespace(s) to which the APP belongs; multiple namespaces can be specified | application | --apollo-namespaces=application --apollo-namespaces=env |
apollo-cluster | The cluster to which the APP belongs | default | --apollo-cluster=default |
apollo-client-ip | IP of current instance, can also be used for grayscale publishing | Local intranet IP | --apollo-client-ip=10.2.1.83 |
apollo-pull-timeout | Timeout (seconds) when pulling the configuration | 5 | --apollo-pull-timeout=5 |
apollo-backup-old-env | Whether to back up the old .env file when updating it | false | --apollo-backup-old-env |
Supports Prometheus monitoring and alerting, with Grafana to visualize the metrics. Please refer to Docker Compose for setting up the Prometheus and Grafana environment.
Requires the APCu extension >= 5.0.0; install it via pecl install apcu.
Copy the configuration file prometheus.php to the config directory of your project, and modify the configuration as appropriate.
# Execute commands in the project root directory
cp vendor/hhxsv5/laravel-s/config/prometheus.php config/
If your project is Lumen, you also need to manually load the configuration in bootstrap/app.php: $app->configure('prometheus');
Configure the global middleware: Hhxsv5\LaravelS\Components\Prometheus\RequestMiddleware::class. To measure request time consumption as accurately as possible, RequestMiddleware must be the first global middleware, placed in front of all other middleware.
Register the ServiceProvider: Hhxsv5\LaravelS\Components\Prometheus\ServiceProvider::class.
Configure the CollectorProcess in config/laravels.php to collect the metrics of the Swoole Worker/Task/Timer processes regularly.
'processes' => Hhxsv5\LaravelS\Components\Prometheus\CollectorProcess::getDefinition(),
Create the route to output metrics.
use Hhxsv5\LaravelS\Components\Prometheus\Exporter;
Route::get('/actuator/prometheus', function () {
$result = app(Exporter::class)->render();
return response($result, 200, ['Content-Type' => Exporter::REDNER_MIME_TYPE]);
});
Complete the configuration of Prometheus and start it.
global:
scrape_interval: 5s
scrape_timeout: 5s
evaluation_interval: 30s
scrape_configs:
- job_name: laravel-s-test
honor_timestamps: true
metrics_path: /actuator/prometheus
scheme: http
follow_redirects: true
static_configs:
- targets:
- 127.0.0.1:5200 # The ip and port of the monitored service
# Dynamically discovered using one of the supported service-discovery mechanisms
# https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
# - job_name: laravels-eureka
# honor_timestamps: true
# scrape_interval: 5s
# metrics_path: /actuator/prometheus
# scheme: http
# follow_redirects: true
# eureka_sd_configs:
# - server: http://127.0.0.1:8080/eureka
# follow_redirects: true
# refresh_interval: 5s
Start Grafana, then import panel json.
Supported events:
Event | Interface | When it happens |
---|---|---|
ServerStart | Hhxsv5\LaravelS\Swoole\Events\ServerStartInterface | Occurs when the Master process is starting; this event should not handle complex business logic, only simple initialization work. |
ServerStop | Hhxsv5\LaravelS\Swoole\Events\ServerStopInterface | Occurs when the server exits normally; CANNOT use async or coroutine related APIs in this event. |
WorkerStart | Hhxsv5\LaravelS\Swoole\Events\WorkerStartInterface | Occurs after a Worker/Task process has started and Laravel initialization has completed. |
WorkerStop | Hhxsv5\LaravelS\Swoole\Events\WorkerStopInterface | Occurs after a Worker/Task process exits normally. |
WorkerError | Hhxsv5\LaravelS\Swoole\Events\WorkerErrorInterface | Occurs when an exception or fatal error occurs in a Worker/Task process. |
1.Create an event class to implement the corresponding interface.
namespace App\Events;
use Hhxsv5\LaravelS\Swoole\Events\ServerStartInterface;
use Swoole\Atomic;
use Swoole\Http\Server;
class ServerStartEvent implements ServerStartInterface
{
public function __construct()
{
}
public function handle(Server $server)
{
// Initialize a global counter (available across processes)
$server->atomicCount = new Atomic(2233);
// Invoked in controller: app('swoole')->atomicCount->get();
}
}
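A small usage sketch for the counter initialized above; the controller and route are hypothetical, while Swoole\Atomic::add() is the standard atomic increment:
namespace App\Http\Controllers;
class CounterController extends Controller
{
    public function hit()
    {
        /**@var \Swoole\Atomic $counter */
        $counter = app('swoole')->atomicCount;
        // add() is atomic across all Worker processes because Swoole\Atomic lives in shared memory
        return response()->json(['count' => $counter->add(1)]);
    }
}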
namespace App\Events;
use Hhxsv5\LaravelS\Swoole\Events\WorkerStartInterface;
use Swoole\Http\Server;
class WorkerStartEvent implements WorkerStartInterface
{
public function __construct()
{
}
public function handle(Server $server, $workerId)
{
// Initialize a database connection pool
// DatabaseConnectionPool::init();
}
}
2.Configuration.
// Edit `config/laravels.php`
'event_handlers' => [
'ServerStart' => [\App\Events\ServerStartEvent::class], // Trigger events in array order
'WorkerStart' => [\App\Events\WorkerStartEvent::class],
],
1.Modify bootstrap/app.php and set the storage directory. Because the project directory is read-only, only the /tmp directory can be read and written.
$app->useStoragePath(env('APP_STORAGE_PATH', '/tmp/storage'));
2.Create a shell script laravels_bootstrap and grant it executable permission.
#!/usr/bin/env bash
set +e
# Create storage-related directories
mkdir -p /tmp/storage/app/public
mkdir -p /tmp/storage/framework/cache
mkdir -p /tmp/storage/framework/sessions
mkdir -p /tmp/storage/framework/testing
mkdir -p /tmp/storage/framework/views
mkdir -p /tmp/storage/logs
# Set the environment variable APP_STORAGE_PATH, please make sure it's the same as APP_STORAGE_PATH in .env
export APP_STORAGE_PATH=/tmp/storage
# Start LaravelS
php bin/laravels start
3.Configure template.xml
.
ROSTemplateFormatVersion: '2015-09-01'
Transform: 'Aliyun::Serverless-2018-04-03'
Resources:
laravel-s-demo:
Type: 'Aliyun::Serverless::Service'
Properties:
Description: 'LaravelS Demo for Serverless'
fc-laravel-s:
Type: 'Aliyun::Serverless::Function'
Properties:
Handler: laravels.handler
Runtime: custom
MemorySize: 512
Timeout: 30
CodeUri: ./
InstanceConcurrency: 10
EnvironmentVariables:
BOOTSTRAP_FILE: laravels_bootstrap
Under FPM mode, singleton instances are instantiated and recycled on every request: request start => instantiate instance => request end => recycle instance.
Under Swoole Server, all singleton instances are held in memory, a lifetime different from FPM: request start => instantiate instance => request end => the singleton instance is not recycled. So the developer needs to maintain the state of singleton instances across requests.
Common solutions:
Write a XxxCleaner class to clean up the state of singleton objects. This class implements the interface Hhxsv5\LaravelS\Illuminate\Cleaners\CleanerInterface and is then registered in cleaners of laravels.php.
Reset the status of singleton instances by Middleware (see the sketch below).
Re-register the ServiceProvider: add XxxServiceProvider into register_providers of file laravels.php, so that singleton instances are reinitialized on every request. Refer.
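As an illustration of the Middleware solution, a minimal sketch; ResetSingletonState and SomeStatefulService::reset() are hypothetical names, not part of LaravelS:
namespace App\Http\Middleware;
use Closure;
class ResetSingletonState
{
    public function handle($request, Closure $next)
    {
        // Reset any singleton that keeps per-request state before handling the request
        app(\App\Services\SomeStatefulService::class)->reset();
        return $next($request);
    }
}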
Known issues: a package of known issues and solutions.
Logging: if you want to output to the console, you can use the stderr channel, e.g. Log::channel('stderr')->debug('debug message').
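For reference, the stderr channel that ships with recent Laravel versions in config/logging.php looks roughly like this:
// config/logging.php
'channels' => [
    'stderr' => [
        'driver' => 'monolog',
        'handler' => \Monolog\Handler\StreamHandler::class,
        'with' => [
            'stream' => 'php://stderr', // write log records to the console
        ],
    ],
],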
Laravel Dump Server (integrated by default since Laravel 5.7).
Read the request via the Illuminate\Http\Request object. $_ENV is readable, $_SERVER is partially readable; CANNOT USE $_GET/$_POST/$_FILES/$_COOKIE/$_REQUEST/$_SESSION/$GLOBALS.
public function form(\Illuminate\Http\Request $request)
{
$name = $request->input('name');
$all = $request->all();
$sessionId = $request->cookie('sessionId');
$photo = $request->file('photo');
// Call getContent() to get the raw POST body, instead of file_get_contents('php://input')
$rawContent = $request->getContent();
//...
}
Respond via the Illuminate\Http\Response object, compatible with echo/var_dump()/print_r(); CANNOT USE functions dd()/exit()/die()/header()/setcookie()/http_response_code().
public function json()
{
return response()->json(['time' => time()])->header('header1', 'value1')->withCookie('c1', 'v1');
}
Singleton connections are resident in memory; it is recommended to turn on persistent connections for better performance.
The database connection will reconnect automatically and immediately after a disconnect.
// config/database.php
'connections' => [
'my_conn' => [
'driver' => 'mysql',
'host' => env('DB_MY_CONN_HOST', 'localhost'),
'port' => env('DB_MY_CONN_PORT', 3306),
'database' => env('DB_MY_CONN_DATABASE', 'forge'),
'username' => env('DB_MY_CONN_USERNAME', 'forge'),
'password' => env('DB_MY_CONN_PASSWORD', ''),
'charset' => 'utf8mb4',
'collation' => 'utf8mb4_unicode_ci',
'prefix' => '',
'strict' => false,
'options' => [
// Enable persistent connection
\PDO::ATTR_PERSISTENT => true,
],
],
],
The Redis connection will not reconnect automatically right after a disconnect; it throws an exception about the lost connection and reconnects the next time. You need to make sure to SELECT the correct DB every time before operating on Redis.
// config/database.php
'redis' => [
'client' => env('REDIS_CLIENT', 'phpredis'), // It is recommended to use phpredis for better performance.
'default' => [
'host' => env('REDIS_HOST', 'localhost'),
'password' => env('REDIS_PASSWORD', null),
'port' => env('REDIS_PORT', 6379),
'database' => 0,
'persistent' => true, // Enable persistent connection
],
],
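Since the connection only comes back on the next call, a hedged retry sketch; it assumes the phpredis client, which throws \RedisException on a lost connection (adjust the exception type for predis):
use Illuminate\Support\Facades\Redis;
function redisGetWithRetry($key)
{
    try {
        return Redis::connection('default')->get($key);
    } catch (\RedisException $e) {
        // The first call after a disconnect fails; the retry runs on a fresh connection
        return Redis::connection('default')->get($key);
    }
}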
Avoid using global variables. If necessary, please clean or reset them manually.
Infinitely appending elements to a static/global variable will lead to OOM (Out of Memory), as in the example below.
class Test
{
public static $array = [];
public static $string = '';
}
// Controller
public function test(Request $req)
{
// Out of Memory
Test::$array[] = $req->input('param1');
Test::$string .= $req->input('param2');
}
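One illustrative way to keep such a static buffer bounded; the cap value is arbitrary and the append() helper is hypothetical:
class Test
{
    const MAX_ITEMS = 1000;
    public static $array = [];
    public static function append($item)
    {
        self::$array[] = $item;
        if (count(self::$array) > self::MAX_ITEMS) {
            array_shift(self::$array); // drop the oldest entry so memory stays bounded
        }
    }
}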
Memory leak detection method:
Modify config/laravels.php: worker_num=1, max_request=1000000; remember to change them back after the test;
Add a route /debug-memory-leak without any route middleware to observe the memory changes of the Worker process (a minimal sketch of this route follows below);
Start LaravelS and request /debug-memory-leak until diff_mem is less than or equal to zero; if diff_mem is always greater than zero, there may be a memory leak in Global Middleware or the Laravel Framework;
After completing Step 3, alternately request the business routes and /debug-memory-leak (it is recommended to use ab/wrk to make a large number of requests to the business routes); the initial increase in memory is normal. After a large number of requests to the business routes, if diff_mem is always greater than zero and curr_mem keeps increasing, there is a high probability of a memory leak; if curr_mem always fluctuates within a certain range and does not keep increasing, a memory leak is unlikely.
If you still can't solve it, max_request is the last guarantee.
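A minimal sketch of the /debug-memory-leak route described in Step 2; the curr_mem/diff_mem field names mirror the steps above, and the static variable persists across requests because the Worker process is memory-resident:
// routes/web.php -- development only, no route middleware
Route::get('/debug-memory-leak', function () {
    static $prevMem = 0;
    $currMem = memory_get_usage(); // memory currently used by this Worker process
    $diffMem = $currMem - $prevMem;
    $prevMem = $currMem;
    return response()->json(['curr_mem' => $currMem, 'diff_mem' => $diffMem]);
});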
Author: hhxsv5
Source Code: https://github.com/hhxsv5/laravel-s
License: MIT License
1649214000
🚀 LaravelS is
an out-of-the-box adapter
between Swoole and Laravel/Lumen.
Please Watch
this repository to get the latest updates.
Table of Contents
Built-in Http/WebSocket server
Memory resident
Gracefully reload
Automatically reload after modifying code
Support Laravel/Lumen both, good compatibility
Simple & Out of the box
Which is the fastest web framework?
TechEmpower Framework Benchmarks
Dependency | Requirement |
---|---|
PHP | >= 5.5.9 Recommend PHP7+ |
Swoole | >= 1.7.19 No longer support PHP5 since 2.0.12 Recommend 4.5.0+ |
Laravel/Lumen | >= 5.1 Recommend 8.0+ |
1.Require package via Composer(packagist).
composer require "hhxsv5/laravel-s:~3.7.0" -vvv
# Make sure that your composer.lock file is under the VCS
2.Register service provider(pick one of two).
Laravel
: in config/app.php
file, Laravel 5.5+ supports package discovery automatically, you should skip this step
'providers' => [
//...
Hhxsv5\LaravelS\Illuminate\LaravelSServiceProvider::class,
],
Lumen
: in bootstrap/app.php
file
$app->register(Hhxsv5\LaravelS\Illuminate\LaravelSServiceProvider::class);
3.Publish configuration and binaries.
After upgrading LaravelS, you need to republish; click here to see the change notes of each version.
php artisan laravels publish
# Configuration: config/laravels.php
# Binary: bin/laravels bin/fswatch bin/inotify
4.Change config/laravels.php
: listen_ip, listen_port, refer Settings.
5.Performance tuning
Number of Workers: LaravelS uses Swoole's Synchronous IO
mode, the larger the worker_num
setting, the better the concurrency performance, but it will cause more memory usage and process switching overhead. If one request takes 100ms, in order to provide 1000QPS concurrency, at least 100 Worker processes need to be configured. The calculation method is: worker_num = 1000QPS/(1s/1ms) = 100, so incremental pressure testing is needed to calculate the best worker_num
.
Please read the notices carefully before running
, Important notices(IMPORTANT).
php bin/laravels {start|stop|restart|reload|info|help}
.Command | Description |
---|---|
start | Start LaravelS, list the processes by "ps -ef|grep laravels" |
stop | Stop LaravelS, and trigger the method onStop of Custom process |
restart | Restart LaravelS: Stop gracefully before starting; The service is unavailable until startup is complete |
reload | Reload all Task/Worker/Timer processes which contain your business codes, and trigger the method onReload of Custom process, CANNOT reload Master/Manger processes. After modifying config/laravels.php , you only have to call restart to restart |
info | Display component version information |
help | Display help information |
start
and restart
.Option | Description |
---|---|
-d|--daemonize | Run as a daemon, this option will override the swoole.daemonize setting in laravels.php |
-e|--env | The environment the command should run under, such as --env=testing will use the configuration file .env.testing firstly, this feature requires Laravel 5.2+ |
-i|--ignore | Ignore checking PID file of Master process |
-x|--x-version | The version(branch) of the current project, stored in $_ENV/$_SERVER, access via $_ENV['X_VERSION'] $_SERVER['X_VERSION'] $request->server->get('X_VERSION') |
Runtime
files: start
will automatically execute php artisan laravels config
and generate these files, developers generally don't need to pay attention to them, it's recommended to add them to .gitignore
.File | Description |
---|---|
storage/laravels.conf | LaravelS's runtime configuration file |
storage/laravels.pid | PID file of Master process |
storage/laravels-timer-process.pid | PID file of the Timer process |
storage/laravels-custom-processes.pid | PID file of all custom processes |
It is recommended to supervise the main process through Supervisord, the premise is without option
-d
and to setswoole.daemonize
tofalse
.
[program:laravel-s-test]
directory=/var/www/laravel-s-test
command=/usr/local/bin/php bin/laravels start -i
numprocs=1
autostart=true
autorestart=true
startretries=3
user=www-data
redirect_stderr=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log
Demo.
gzip on;
gzip_min_length 1024;
gzip_comp_level 2;
gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml application/x-httpd-php image/jpeg image/gif image/png font/ttf font/otf image/svg+xml;
gzip_vary on;
gzip_disable "msie6";
upstream swoole {
# Connect IP:Port
server 127.0.0.1:5200 weight=5 max_fails=3 fail_timeout=30s;
# Connect UnixSocket Stream file, tips: put the socket file in the /dev/shm directory to get better performance
#server unix:/yourpath/laravel-s-test/storage/laravels.sock weight=5 max_fails=3 fail_timeout=30s;
#server 192.168.1.1:5200 weight=3 max_fails=3 fail_timeout=30s;
#server 192.168.1.2:5200 backup;
keepalive 16;
}
server {
listen 80;
# Don't forget to bind the host
server_name laravels.com;
root /yourpath/laravel-s-test/public;
access_log /yourpath/log/nginx/$server_name.access.log main;
autoindex off;
index index.html index.htm;
# Nginx handles the static resources(recommend enabling gzip), LaravelS handles the dynamic resource.
location / {
try_files $uri @laravels;
}
# Response 404 directly when request the PHP file, to avoid exposing public/*.php
#location ~* \.php$ {
# return 404;
#}
location @laravels {
# proxy_connect_timeout 60s;
# proxy_send_timeout 60s;
# proxy_read_timeout 120s;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Real-PORT $remote_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header Scheme $scheme;
proxy_set_header Server-Protocol $server_protocol;
proxy_set_header Server-Name $server_name;
proxy_set_header Server-Addr $server_addr;
proxy_set_header Server-Port $server_port;
# "swoole" is the upstream
proxy_pass http://swoole;
}
}
LoadModule proxy_module /yourpath/modules/mod_proxy.so
LoadModule proxy_balancer_module /yourpath/modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module /yourpath/modules/mod_lbmethod_byrequests.so
LoadModule proxy_http_module /yourpath/modules/mod_proxy_http.so
LoadModule slotmem_shm_module /yourpath/modules/mod_slotmem_shm.so
LoadModule rewrite_module /yourpath/modules/mod_rewrite.so
LoadModule remoteip_module /yourpath/modules/mod_remoteip.so
LoadModule deflate_module /yourpath/modules/mod_deflate.so
<IfModule deflate_module>
SetOutputFilter DEFLATE
DeflateCompressionLevel 2
AddOutputFilterByType DEFLATE text/html text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml application/x-httpd-php image/jpeg image/gif image/png font/ttf font/otf image/svg+xml
</IfModule>
<VirtualHost *:80>
# Don't forget to bind the host
ServerName www.laravels.com
ServerAdmin hhxsv5@sina.com
DocumentRoot /yourpath/laravel-s-test/public;
DirectoryIndex index.html index.htm
<Directory "/">
AllowOverride None
Require all granted
</Directory>
RemoteIPHeader X-Forwarded-For
ProxyRequests Off
ProxyPreserveHost On
<Proxy balancer://laravels>
BalancerMember http://192.168.1.1:5200 loadfactor=7
#BalancerMember http://192.168.1.2:5200 loadfactor=3
#BalancerMember http://192.168.1.3:5200 loadfactor=1 status=+H
ProxySet lbmethod=byrequests
</Proxy>
#ProxyPass / balancer://laravels/
#ProxyPassReverse / balancer://laravels/
# Apache handles the static resources, LaravelS handles the dynamic resource.
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
RewriteRule ^/(.*)$ balancer://laravels%{REQUEST_URI} [P,L]
ErrorLog ${APACHE_LOG_DIR}/www.laravels.com.error.log
CustomLog ${APACHE_LOG_DIR}/www.laravels.com.access.log combined
</VirtualHost>
The Listening address of WebSocket Sever is the same as Http Server.
1.Create WebSocket Handler class, and implement interface WebSocketHandlerInterface
.The instant is automatically instantiated when start, you do not need to manually create it.
namespace App\Services;
use Hhxsv5\LaravelS\Swoole\WebSocketHandlerInterface;
use Swoole\Http\Request;
use Swoole\Http\Response;
use Swoole\WebSocket\Frame;
use Swoole\WebSocket\Server;
/**
* @see https://www.swoole.co.uk/docs/modules/swoole-websocket-server
*/
class WebSocketService implements WebSocketHandlerInterface
{
// Declare constructor without parameters
public function __construct()
{
}
// public function onHandShake(Request $request, Response $response)
// {
// Custom handshake: https://www.swoole.co.uk/docs/modules/swoole-websocket-server-on-handshake
// The onOpen event will be triggered automatically after a successful handshake
// }
public function onOpen(Server $server, Request $request)
{
// Before the onOpen event is triggered, the HTTP request to establish the WebSocket has passed the Laravel route,
// so Laravel's Request, Auth information are readable, Session is readable and writable, but only in the onOpen event.
// \Log::info('New WebSocket connection', [$request->fd, request()->all(), session()->getId(), session('xxx'), session(['yyy' => time()])]);
// The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
$server->push($request->fd, 'Welcome to LaravelS');
}
public function onMessage(Server $server, Frame $frame)
{
// \Log::info('Received message', [$frame->fd, $frame->data, $frame->opcode, $frame->finish]);
// The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
$server->push($frame->fd, date('Y-m-d H:i:s'));
}
public function onClose(Server $server, $fd, $reactorId)
{
// The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
}
}
2.Modify config/laravels.php
.
// ...
'websocket' => [
'enable' => true, // Note: set enable to true
'handler' => \App\Services\WebSocketService::class,
],
'swoole' => [
//...
// Must set dispatch_mode in (2, 4, 5), see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
'dispatch_mode' => 2,
//...
],
// ...
3.Use SwooleTable
to bind FD & UserId, optional, Swoole Table Demo. Also you can use the other global storage services, like Redis/Memcached/MySQL, but be careful that FD will be possible conflicting between multiple Swoole Servers
.
4.Cooperate with Nginx (Recommended)
Refer WebSocket Proxy
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream swoole {
# Connect IP:Port
server 127.0.0.1:5200 weight=5 max_fails=3 fail_timeout=30s;
# Connect UnixSocket Stream file, tips: put the socket file in the /dev/shm directory to get better performance
#server unix:/yourpath/laravel-s-test/storage/laravels.sock weight=5 max_fails=3 fail_timeout=30s;
#server 192.168.1.1:5200 weight=3 max_fails=3 fail_timeout=30s;
#server 192.168.1.2:5200 backup;
keepalive 16;
}
server {
listen 80;
# Don't forget to bind the host
server_name laravels.com;
root /yourpath/laravel-s-test/public;
access_log /yourpath/log/nginx/$server_name.access.log main;
autoindex off;
index index.html index.htm;
# Nginx handles the static resources(recommend enabling gzip), LaravelS handles the dynamic resource.
location / {
try_files $uri @laravels;
}
# Response 404 directly when request the PHP file, to avoid exposing public/*.php
#location ~* \.php$ {
# return 404;
#}
# Http and WebSocket are concomitant, Nginx identifies them by "location"
# !!! The location of WebSocket is "/ws"
# Javascript: var ws = new WebSocket("ws://laravels.com/ws");
location =/ws {
# proxy_connect_timeout 60s;
# proxy_send_timeout 60s;
# proxy_read_timeout: Nginx will close the connection if the proxied server does not send data to Nginx in 60 seconds; At the same time, this close behavior is also affected by heartbeat setting of Swoole.
# proxy_read_timeout 60s;
proxy_http_version 1.1;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Real-PORT $remote_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header Scheme $scheme;
proxy_set_header Server-Protocol $server_protocol;
proxy_set_header Server-Name $server_name;
proxy_set_header Server-Addr $server_addr;
proxy_set_header Server-Port $server_port;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_pass http://swoole;
}
location @laravels {
# proxy_connect_timeout 60s;
# proxy_send_timeout 60s;
# proxy_read_timeout 60s;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Real-PORT $remote_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header Scheme $scheme;
proxy_set_header Server-Protocol $server_protocol;
proxy_set_header Server-Name $server_name;
proxy_set_header Server-Addr $server_addr;
proxy_set_header Server-Port $server_port;
proxy_pass http://swoole;
}
}
5.Heartbeat setting
Heartbeat setting of Swoole
// config/laravels.php
'swoole' => [
//...
// All connections are traversed every 60 seconds. If a connection does not send any data to the server within 600 seconds, the connection will be forced to close.
'heartbeat_idle_time' => 600,
'heartbeat_check_interval' => 60,
//...
],
Proxy read timeout of Nginx
# Nginx will close the connection if the proxied server does not send data to Nginx in 60 seconds
proxy_read_timeout 60s;
6.Push data in controller
namespace App\Http\Controllers;
class TestController extends Controller
{
public function push()
{
$fd = 1; // Find fd by userId from a map [userId=>fd].
/**@var \Swoole\WebSocket\Server $swoole */
$swoole = app('swoole');
$success = $swoole->push($fd, 'Push data to fd#1 in Controller');
var_dump($success);
}
}
Usually, you can reset/destroy some
global/static
variables, or change the currentRequest/Response
object.
laravels.received_request
After LaravelS parsed Swoole\Http\Request
to Illuminate\Http\Request
, before Laravel's Kernel handles this request.
// Edit file `app/Providers/EventServiceProvider.php`, add the following code into method `boot`
// If no variable $events, you can also call Facade \Event::listen().
$events->listen('laravels.received_request', function (\Illuminate\Http\Request $req, $app) {
$req->query->set('get_key', 'hhxsv5');// Change query of request
$req->request->set('post_key', 'hhxsv5'); // Change post of request
});
laravels.generated_response
After Laravel's Kernel handled the request, before LaravelS parses Illuminate\Http\Response
to Swoole\Http\Response
.
// Edit file `app/Providers/EventServiceProvider.php`, add the following code into method `boot`
// If no variable $events, you can also call Facade \Event::listen().
$events->listen('laravels.generated_response', function (\Illuminate\Http\Request $req, \Symfony\Component\HttpFoundation\Response $rsp, $app) {
$rsp->headers->set('header-key', 'hhxsv5');// Change header of response
});
This feature depends on
AsyncTask
ofSwoole
, your need to setswoole.task_worker_num
inconfig/laravels.php
firstly. The performance of asynchronous event processing is influenced by number of Swoole task process, you need to set task_worker_num appropriately.
1.Create event class.
use Hhxsv5\LaravelS\Swoole\Task\Event;
class TestEvent extends Event
{
protected $listeners = [
// Listener list
TestListener1::class,
// TestListener2::class,
];
private $data;
public function __construct($data)
{
$this->data = $data;
}
public function getData()
{
return $this->data;
}
}
2.Create listener class.
use Hhxsv5\LaravelS\Swoole\Task\Task;
use Hhxsv5\LaravelS\Swoole\Task\Listener;
class TestListener1 extends Listener
{
/**
* @var TestEvent
*/
protected $event;
public function handle()
{
\Log::info(__CLASS__ . ':handle start', [$this->event->getData()]);
sleep(2);// Simulate the slow codes
// Deliver task in CronJob, but NOT support callback finish() of task.
// Note: Modify task_ipc_mode to 1 or 2 in config/laravels.php, see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
$ret = Task::deliver(new TestTask('task data'));
var_dump($ret);
// The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
}
}
3.Fire event.
// Create instance of event and fire it, "fire" is asynchronous.
use Hhxsv5\LaravelS\Swoole\Task\Event;
$event = new TestEvent('event data');
// $event->delay(10); // Delay 10 seconds to fire event
// $event->setTries(3); // When an error occurs, try 3 times in total
$success = Event::fire($event);
var_dump($success);// Return true if sucess, otherwise false
This feature depends on
AsyncTask
ofSwoole
, your need to setswoole.task_worker_num
inconfig/laravels.php
firstly. The performance of task processing is influenced by number of Swoole task process, you need to set task_worker_num appropriately.
1.Create task class.
use Hhxsv5\LaravelS\Swoole\Task\Task;
class TestTask extends Task
{
private $data;
private $result;
public function __construct($data)
{
$this->data = $data;
}
// The logic of task handling, run in task process, CAN NOT deliver task
public function handle()
{
\Log::info(__CLASS__ . ':handle start', [$this->data]);
sleep(2);// Simulate the slow codes
// The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
$this->result = 'the result of ' . $this->data;
}
// Optional, finish event, the logic of after task handling, run in worker process, CAN deliver task
public function finish()
{
\Log::info(__CLASS__ . ':finish start', [$this->result]);
Task::deliver(new TestTask2('task2 data')); // Deliver the other task
}
}
2.Deliver task.
// Create instance of TestTask and deliver it, "deliver" is asynchronous.
use Hhxsv5\LaravelS\Swoole\Task\Task;
$task = new TestTask('task data');
// $task->delay(3);// delay 3 seconds to deliver task
// $task->setTries(3); // When an error occurs, try 3 times in total
$ret = Task::deliver($task);
var_dump($ret);// Return true if sucess, otherwise false
Wrapper cron job base on Swoole's Millisecond Timer, replace
Linux
Crontab
.
1.Create cron job class.
namespace App\Jobs\Timer;
use App\Tasks\TestTask;
use Swoole\Coroutine;
use Hhxsv5\LaravelS\Swoole\Task\Task;
use Hhxsv5\LaravelS\Swoole\Timer\CronJob;
class TestCronJob extends CronJob
{
protected $i = 0;
// !!! The `interval` and `isImmediate` of cron job can be configured in two ways(pick one of two): one is to overload the corresponding method, and the other is to pass parameters when registering cron job.
// --- Override the corresponding method to return the configuration: begin
public function interval()
{
return 1000;// Run every 1000ms
}
public function isImmediate()
{
return false;// Whether to trigger `run` immediately after setting up
}
// --- Override the corresponding method to return the configuration: end
public function run()
{
\Log::info(__METHOD__, ['start', $this->i, microtime(true)]);
// do something
// sleep(1); // Swoole < 2.1
Coroutine::sleep(1); // Swoole>=2.1 Coroutine will be automatically created for run().
$this->i++;
\Log::info(__METHOD__, ['end', $this->i, microtime(true)]);
if ($this->i >= 10) { // Run 10 times only
\Log::info(__METHOD__, ['stop', $this->i, microtime(true)]);
$this->stop(); // Stop this cron job, but it will run again after restart/reload.
// Deliver task in CronJob, but NOT support callback finish() of task.
// Note: Modify task_ipc_mode to 1 or 2 in config/laravels.php, see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
$ret = Task::deliver(new TestTask('task data'));
var_dump($ret);
}
// The exceptions thrown here will be caught by the upper layer and recorded in the Swoole log. Developers need to try/catch manually.
}
}
2.Register cron job.
// Register cron jobs in file "config/laravels.php"
[
// ...
'timer' => [
'enable' => true, // Enable Timer
'jobs' => [ // The list of cron job
// Enable LaravelScheduleJob to run `php artisan schedule:run` every 1 minute, replace Linux Crontab
// \Hhxsv5\LaravelS\Illuminate\LaravelScheduleJob::class,
// Two ways to configure parameters:
// [\App\Jobs\Timer\TestCronJob::class, [1000, true]], // Pass in parameters when registering
\App\Jobs\Timer\TestCronJob::class, // Override the corresponding method to return the configuration
],
'max_wait_time' => 5, // Max waiting time of reloading
// Enable the global lock to ensure that only one instance starts the timer when deploying multiple instances. This feature depends on Redis, please see https://laravel.com/docs/7.x/redis
'global_lock' => false,
'global_lock_key' => config('app.name', 'Laravel'),
],
// ...
];
3.Note: it will launch multiple timers when build the server cluster, so you need to make sure that launch one timer only to avoid running repetitive task.
4.LaravelS v3.4.0
starts to support the hot restart [Reload] Timer
process. After LaravelS receives the SIGUSR1
signal, it waits for max_wait_time
(default 5) seconds to end the process, then the Manager
process will pull up the Timer
process again.
5.If you only need to use minute-level
scheduled tasks, it is recommended to enable Hhxsv5\LaravelS\Illuminate\LaravelScheduleJob
instead of Linux Crontab, so that you can follow the coding habits of Laravel task scheduling and configure Kernel
.
// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
// runInBackground() will start a new child process to execute the task. This is asynchronous and will not affect the execution timing of other tasks.
$schedule->command(TestCommand::class)->runInBackground()->everyMinute();
}
Via inotify
, support Linux only.
1.Install inotify extension.
2.Turn on the switch in Settings.
3.Notice: Modify the file only in Linux
to receive the file change events. It's recommended to use the latest Docker. Vagrant Solution.
Via fswatch
, support OS X/Linux/Windows.
1.Install fswatch.
2.Run command in your project root directory.
# Watch current directory
./bin/fswatch
# Watch app directory
./bin/fswatch ./app
Via inotifywait
, support Linux.
1.Install inotify-tools.
2.Run command in your project root directory.
# Watch current directory
./bin/inotify
# Watch app directory
./bin/inotify ./app
When the above methods does not work, the ultimate solution: set max_request=1,worker_num=1
, so that Worker
process will restart after processing a request. The performance of this method is very poor, so only development environment use
.
SwooleServer
in your project/**
* $swoole is the instance of `Swoole\WebSocket\Server` if enable WebSocket server, otherwise `Swoole\Http\Server`
* @var \Swoole\WebSocket\Server|\Swoole\Http\Server $swoole
*/
$swoole = app('swoole');
var_dump($swoole->stats());
$swoole->push($fd, 'Push WebSocket message');
SwooleTable
1.Define Table, support multiple.
All defined tables will be created before Swoole starting.
// in file "config/laravels.php"
[
// ...
'swoole_tables' => [
// Scene:bind UserId & FD in WebSocket
'ws' => [// The Key is table name, will add suffix "Table" to avoid naming conflicts. Here defined a table named "wsTable"
'size' => 102400,// The max size
'column' => [// Define the columns
['name' => 'value', 'type' => \Swoole\Table::TYPE_INT, 'size' => 8],
],
],
//...Define the other tables
],
// ...
];
2.Access Table
: all table instances will be bound on SwooleServer
, access by app('swoole')->xxxTable
.
namespace App\Services;
use Hhxsv5\LaravelS\Swoole\WebSocketHandlerInterface;
use Swoole\Http\Request;
use Swoole\WebSocket\Frame;
use Swoole\WebSocket\Server;
class WebSocketService implements WebSocketHandlerInterface
{
/**@var \Swoole\Table $wsTable */
private $wsTable;
public function __construct()
{
$this->wsTable = app('swoole')->wsTable;
}
// Scene:bind UserId & FD in WebSocket
public function onOpen(Server $server, Request $request)
{
// var_dump(app('swoole') === $server);// The same instance
/**
* Get the currently logged in user
* This feature requires that the path to establish a WebSocket connection go through middleware such as Authenticate.
* E.g:
* Browser side: var ws = new WebSocket("ws://127.0.0.1:5200/ws");
* Then the /ws route in Laravel needs to add the middleware like Authenticate.
* Route::get('/ws', function () {
* // Respond any content with status code 200
* return 'websocket';
* })->middleware(['auth']);
*/
// $user = Auth::user();
// $userId = $user ? $user->id : 0; // 0 means a guest user who is not logged in
$userId = mt_rand(1000, 10000);
// if (!$userId) {
// // Disconnect the connections of unlogged users
// $server->disconnect($request->fd);
// return;
// }
$this->wsTable->set('uid:' . $userId, ['value' => $request->fd]);// Bind map uid to fd
$this->wsTable->set('fd:' . $request->fd, ['value' => $userId]);// Bind map fd to uid
$server->push($request->fd, "Welcome to LaravelS #{$request->fd}");
}
public function onMessage(Server $server, Frame $frame)
{
// Broadcast
foreach ($this->wsTable as $key => $row) {
if (strpos($key, 'uid:') === 0 && $server->isEstablished($row['value'])) {
$content = sprintf('Broadcast: new message "%s" from #%d', $frame->data, $frame->fd);
$server->push($row['value'], $content);
}
}
}
public function onClose(Server $server, $fd, $reactorId)
{
$uid = $this->wsTable->get('fd:' . $fd);
if ($uid !== false) {
$this->wsTable->del('uid:' . $uid['value']); // Unbind uid map
}
$this->wsTable->del('fd:' . $fd);// Unbind fd map
$server->push($fd, "Goodbye #{$fd}");
}
}
For more information, please refer to Swoole Server AddListener
To make our main server support more protocols not just Http and WebSocket, we bring the feature multi-port mixed protocol
of Swoole in LaravelS and name it Socket
. Now, you can build TCP/UDP
applications easily on top of Laravel.
Create Socket
handler class, and extend Hhxsv5\LaravelS\Swoole\Socket\{TcpSocket|UdpSocket|Http|WebSocket}
.
namespace App\Sockets;
use Hhxsv5\LaravelS\Swoole\Socket\TcpSocket;
use Swoole\Server;
class TestTcpSocket extends TcpSocket
{
public function onConnect(Server $server, $fd, $reactorId)
{
\Log::info('New TCP connection', [$fd]);
$server->send($fd, 'Welcome to LaravelS.');
}
public function onReceive(Server $server, $fd, $reactorId, $data)
{
\Log::info('Received data', [$fd, $data]);
$server->send($fd, 'LaravelS: ' . $data);
if ($data === "quit\r\n") {
$server->send($fd, 'LaravelS: bye' . PHP_EOL);
$server->close($fd);
}
}
public function onClose(Server $server, $fd, $reactorId)
{
\Log::info('Close TCP connection', [$fd]);
$server->send($fd, 'Goodbye');
}
}
These Socket
connections share the same worker processes with your HTTP
/WebSocket
connections. So it won't be a problem at all if you want to deliver tasks, use SwooleTable
, even Laravel components such as DB, Eloquent and so on. At the same time, you can access Swoole\Server\Port
object directly by member property swoolePort
.
public function onReceive(Server $server, $fd, $reactorId, $data)
{
$port = $this->swoolePort; // Get the `Swoole\Server\Port` object
}
namespace App\Http\Controllers;
class TestController extends Controller
{
public function test()
{
/**@var \Swoole\Http\Server|\Swoole\WebSocket\Server $swoole */
$swoole = app('swoole');
// $swoole->ports: Traverse all Port objects, https://www.swoole.co.uk/docs/modules/swoole-server/multiple-ports
$port = $swoole->ports[0]; // Get the `Swoole\Server\Port` object, $port[0] is the port of the main server
foreach ($port->connections as $fd) { // Traverse all connections
// $swoole->send($fd, 'Send tcp message');
// if($swoole->isEstablished($fd)) {
// $swoole->push($fd, 'Send websocket message');
// }
}
}
}
Register Sockets.
// Edit `config/laravels.php`
//...
'sockets' => [
[
'host' => '127.0.0.1',
'port' => 5291,
'type' => SWOOLE_SOCK_TCP,// Socket type: SWOOLE_SOCK_TCP/SWOOLE_SOCK_TCP6/SWOOLE_SOCK_UDP/SWOOLE_SOCK_UDP6/SWOOLE_UNIX_DGRAM/SWOOLE_UNIX_STREAM
'settings' => [// Swoole settings:https://www.swoole.co.uk/docs/modules/swoole-server-methods#swoole_server-addlistener
'open_eof_check' => true,
'package_eof' => "\r\n",
],
'handler' => \App\Sockets\TestTcpSocket::class,
'enable' => true, // whether to enable, default true
],
],
About the heartbeat configuration, it can only be set on the main server
and cannot be configured on Socket
, but the Socket
inherits the heartbeat configuration of the main server
.
For TCP socket, onConnect
and onClose
events will be blocked when dispatch_mode
of Swoole is 1/3
, so if you want to unblock these two events please set dispatch_mode
to 2/4/5
.
'swoole' => [
//...
'dispatch_mode' => 2,
//...
];
Test.
TCP: telnet 127.0.0.1 5291
UDP: [Linux] echo "Hello LaravelS" > /dev/udp/127.0.0.1/5292
Register example of other protocols.
'sockets' => [
[
'host' => '0.0.0.0',
'port' => 5292,
'type' => SWOOLE_SOCK_UDP,
'settings' => [
'open_eof_check' => true,
'package_eof' => "\r\n",
],
'handler' => \App\Sockets\TestUdpSocket::class,
],
],
'sockets' => [
[
'host' => '0.0.0.0',
'port' => 5293,
'type' => SWOOLE_SOCK_TCP,
'settings' => [
'open_http_protocol' => true,
],
'handler' => \App\Sockets\TestHttp::class,
],
],
turn on WebSocket
, that is, set websocket.enable
to true
.'sockets' => [
[
'host' => '0.0.0.0',
'port' => 5294,
'type' => SWOOLE_SOCK_TCP,
'settings' => [
'open_http_protocol' => true,
'open_websocket_protocol' => true,
],
'handler' => \App\Sockets\TestWebSocket::class,
],
],
Warning: The order of code execution in the coroutine is out of order. The data of the request level should be isolated by the coroutine ID. However, there are many singleton and static attributes in Laravel/Lumen, the data between different requests will affect each other, it's Unsafe
. For example, the database connection is a singleton, the same database connection shares the same PDO resource. This is fine in the synchronous blocking mode, but it does not work in the asynchronous coroutine mode. Each query needs to create different connections and maintain IO state of different connections, which requires a connection pool.
DO NOT
enable the coroutine, only the custom process can use the coroutine.
Support developers to create special work processes for monitoring, reporting, or other special tasks. Refer addProcess.
Create Proccess class, implements CustomProcessInterface.
namespace App\Processes;
use App\Tasks\TestTask;
use Hhxsv5\LaravelS\Swoole\Process\CustomProcessInterface;
use Hhxsv5\LaravelS\Swoole\Task\Task;
use Swoole\Coroutine;
use Swoole\Http\Server;
use Swoole\Process;
class TestProcess implements CustomProcessInterface
{
/**
* @var bool Quit tag for Reload updates
*/
private static $quit = false;
public static function callback(Server $swoole, Process $process)
{
// The callback method cannot exit. Once exited, Manager process will automatically create the process
while (!self::$quit) {
\Log::info('Test process: running');
// sleep(1); // Swoole < 2.1
Coroutine::sleep(1); // Swoole>=2.1: Coroutine & Runtime will be automatically enabled for callback().
// Deliver task in custom process, but NOT support callback finish() of task.
// Note: Modify task_ipc_mode to 1 or 2 in config/laravels.php, see https://www.swoole.co.uk/docs/modules/swoole-server/configuration
$ret = Task::deliver(new TestTask('task data'));
var_dump($ret);
// The upper layer will catch the exception thrown in the callback and record it in the Swoole log, and then this process will exit. The Manager process will re-create the process after 3 seconds, so developers need to try/catch to catch the exception by themselves to avoid frequent process creation.
// throw new \Exception('an exception');
}
}
// Requirements: LaravelS >= v3.4.0 & callback() must be async non-blocking program.
public static function onReload(Server $swoole, Process $process)
{
// Stop the process...
// Then end process
\Log::info('Test process: reloading');
self::$quit = true;
// $process->exit(0); // Force exit process
}
// Requirements: LaravelS >= v3.7.4 & callback() must be async non-blocking program.
public static function onStop(Server $swoole, Process $process)
{
// Stop the process...
// Then end process
\Log::info('Test process: stopping');
self::$quit = true;
// $process->exit(0); // Force exit process
}
}
Register TestProcess.
// Edit `config/laravels.php`
// ...
'processes' => [
'test' => [ // Key name is process name
'class' => \App\Processes\TestProcess::class,
'redirect' => false, // Whether redirect stdin/stdout, true or false
'pipe' => 0, // The type of pipeline, 0: no pipeline 1: SOCK_STREAM 2: SOCK_DGRAM
'enable' => true, // Whether to enable, default true
//'num' => 3 // To create multiple processes of this class, default is 1
//'queue' => [ // Enable message queue as inter-process communication, configure empty array means use default parameters
// 'msg_key' => 0, // The key of the message queue. Default: ftok(__FILE__, 1).
// 'mode' => 2, // Communication mode, default is 2, which means contention mode
// 'capacity' => 8192, // The length of a single message, is limited by the operating system kernel parameters. The default is 8192, and the maximum is 65536
//],
//'restart_interval' => 5, // After the process exits abnormally, how many seconds to wait before restarting the process, default 5 seconds
],
],
Note: The callback() cannot quit. If quit, the Manager process will re-create the process.
Example: Write data to a custom process.
// config/laravels.php
'processes' => [
'test' => [
'class' => \App\Processes\TestProcess::class,
'redirect' => false,
'pipe' => 1,
],
],
// app/Processes/TestProcess.php
public static function callback(Server $swoole, Process $process)
{
while ($data = $process->read()) {
\Log::info('TestProcess: read data', [$data]);
$process->write('TestProcess: ' . $data);
}
}
// app/Http/Controllers/TestController.php
public function testProcessWrite()
{
/**@var \Swoole\Process $process */
$process = app('swoole')->customProcesses['test'];
$process->write('TestController: write data' . time());
var_dump($process->read());
}
LaravelS
will pull theApollo
configuration and write it to the.env
file when starting. At the same time,LaravelS
will start the custom processapollo
to monitor the configuration and automaticallyreload
when the configuration changes.
Enable Apollo: add --enable-apollo
and Apollo parameters to the startup parameters.
php bin/laravels start --enable-apollo --apollo-server=http://127.0.0.1:8080 --apollo-app-id=LARAVEL-S-TEST
Support hot updates(optional).
// Edit `config/laravels.php`
'processes' => Hhxsv5\LaravelS\Components\Apollo\Process::getDefinition(),
// When there are other custom process configurations
'processes' => [
'test' => [
'class' => \App\Processes\TestProcess::class,
'redirect' => false,
'pipe' => 1,
],
// ...
] + Hhxsv5\LaravelS\Components\Apollo\Process::getDefinition(),
List of available parameters.
Parameter | Description | Default | Demo |
---|---|---|---|
apollo-server | Apollo server URL | - | --apollo-server=http://127.0.0.1:8080 |
apollo-app-id | Apollo APP ID | - | --apollo-app-id=LARAVEL-S-TEST |
apollo-namespaces | The namespace to which the APP belongs, support specify the multiple | application | --apollo-namespaces=application --apollo-namespaces=env |
apollo-cluster | The cluster to which the APP belongs | default | --apollo-cluster=default |
apollo-client-ip | IP of current instance, can also be used for grayscale publishing | Local intranet IP | --apollo-client-ip=10.2.1.83 |
apollo-pull-timeout | Timeout time(seconds) when pulling configuration | 5 | --apollo-pull-timeout=5 |
apollo-backup-old-env | Whether to backup the old configuration file when updating the configuration file .env | false | --apollo-backup-old-env |
Support Prometheus monitoring and alarm, Grafana visually view monitoring metrics. Please refer to Docker Compose for the environment construction of Prometheus and Grafana.
Require extension APCu >= 5.0.0, please install it by pecl install apcu
.
Copy the configuration file prometheus.php
to the config
directory of your project. Modify the configuration as appropriate.
# Execute commands in the project root directory
cp vendor/hhxsv5/laravel-s/config/prometheus.php config/
If your project is Lumen
, you also need to manually load the configuration $app->configure('prometheus');
in bootstrap/app.php
.
Configure global
middleware: Hhxsv5\LaravelS\Components\Prometheus\RequestMiddleware::class
. In order to count the request time consumption as accurately as possible, RequestMiddleware
must be the first
global middleware, which needs to be placed in front of other middleware.
Register ServiceProvider: Hhxsv5\LaravelS\Components\Prometheus\ServiceProvider::class
.
Configure the CollectorProcess in config/laravels.php
to collect the metrics of Swoole Worker/Task/Timer processes regularly.
'processes' => Hhxsv5\LaravelS\Components\Prometheus\CollectorProcess::getDefinition(),
Create the route to output metrics.
use Hhxsv5\LaravelS\Components\Prometheus\Exporter;
Route::get('/actuator/prometheus', function () {
$result = app(Exporter::class)->render();
return response($result, 200, ['Content-Type' => Exporter::REDNER_MIME_TYPE]);
});
Complete the configuration of Prometheus and start it.
global:
scrape_interval: 5s
scrape_timeout: 5s
evaluation_interval: 30s
scrape_configs:
- job_name: laravel-s-test
honor_timestamps: true
metrics_path: /actuator/prometheus
scheme: http
follow_redirects: true
static_configs:
- targets:
- 127.0.0.1:5200 # The ip and port of the monitored service
# Dynamically discovered using one of the supported service-discovery mechanisms
# https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
# - job_name: laravels-eureka
# honor_timestamps: true
# scrape_interval: 5s
# metrics_path: /actuator/prometheus
# scheme: http
# follow_redirects: true
# eureka_sd_configs:
# - server: http://127.0.0.1:8080/eureka
# follow_redirects: true
# refresh_interval: 5s
Start Grafana, then import panel json.
Supported events:
Event | Interface | When it happens |
---|---|---|
ServerStart | Hhxsv5\LaravelS\Swoole\Events\ServerStartInterface | Occurs when the Master process is starting; this event should not handle complex business logic, only some simple initialization work. |
ServerStop | Hhxsv5\LaravelS\Swoole\Events\ServerStopInterface | Occurs when the server exits normally; CANNOT use async or coroutine related APIs in this event. |
WorkerStart | Hhxsv5\LaravelS\Swoole\Events\WorkerStartInterface | Occurs after the Worker/Task process is started and Laravel initialization has completed. |
WorkerStop | Hhxsv5\LaravelS\Swoole\Events\WorkerStopInterface | Occurs after the Worker/Task process exits normally. |
WorkerError | Hhxsv5\LaravelS\Swoole\Events\WorkerErrorInterface | Occurs when an exception or fatal error occurs in the Worker/Task process. |
1.Create an event class to implement the corresponding interface.
namespace App\Events;
use Hhxsv5\LaravelS\Swoole\Events\ServerStartInterface;
use Swoole\Atomic;
use Swoole\Http\Server;
class ServerStartEvent implements ServerStartInterface
{
public function __construct()
{
}
public function handle(Server $server)
{
// Initialize a global counter (available across processes)
$server->atomicCount = new Atomic(2233);
// Invoked in controller: app('swoole')->atomicCount->get();
}
}
namespace App\Events;
use Hhxsv5\LaravelS\Swoole\Events\WorkerStartInterface;
use Swoole\Http\Server;
class WorkerStartEvent implements WorkerStartInterface
{
public function __construct()
{
}
public function handle(Server $server, $workerId)
{
// Initialize a database connection pool
// DatabaseConnectionPool::init();
}
}
2.Configuration.
// Edit `config/laravels.php`
'event_handlers' => [
'ServerStart' => [\App\Events\ServerStartEvent::class], // Trigger events in array order
'WorkerStart' => [\App\Events\WorkerStartEvent::class],
],
1.Modify bootstrap/app.php and set the storage directory. Because the project directory is read-only, only the /tmp directory can be read and written.
$app->useStoragePath(env('APP_STORAGE_PATH', '/tmp/storage'));
2.Create a shell script laravels_bootstrap and grant it executable permission.
#!/usr/bin/env bash
set +e
# Create storage-related directories
mkdir -p /tmp/storage/app/public
mkdir -p /tmp/storage/framework/cache
mkdir -p /tmp/storage/framework/sessions
mkdir -p /tmp/storage/framework/testing
mkdir -p /tmp/storage/framework/views
mkdir -p /tmp/storage/logs
# Set the environment variable APP_STORAGE_PATH, please make sure it's the same as APP_STORAGE_PATH in .env
export APP_STORAGE_PATH=/tmp/storage
# Start LaravelS
php bin/laravels start
3.Configure template.yml.
ROSTemplateFormatVersion: '2015-09-01'
Transform: 'Aliyun::Serverless-2018-04-03'
Resources:
laravel-s-demo:
Type: 'Aliyun::Serverless::Service'
Properties:
Description: 'LaravelS Demo for Serverless'
fc-laravel-s:
Type: 'Aliyun::Serverless::Function'
Properties:
Handler: laravels.handler
Runtime: custom
MemorySize: 512
Timeout: 30
CodeUri: ./
InstanceConcurrency: 10
EnvironmentVariables:
BOOTSTRAP_FILE: laravels_bootstrap
Under FPM mode, singleton instances are instantiated and recycled on every request: request start => instantiate instance => request end => recycle instance.
Under a Swoole server, all singleton instances are held in memory, a different lifetime from FPM: request start => instantiate instance => request end => the singleton instance is not recycled. So the developer needs to maintain the state of singleton instances on every request.
Common solutions:
Write an XxxCleaner class to clean up singleton object state. The class implements the interface Hhxsv5\LaravelS\Illuminate\Cleaners\CleanerInterface and is then registered in cleaners of laravels.php.
Reset the status of singleton instances via Middleware (see the sketch after this list).
Re-register the ServiceProvider: add XxxServiceProvider into register_providers of file laravels.php, so that singleton instances are reinitialized on every request. Refer.
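As an illustration of the middleware approach, here is a minimal sketch; the metrics binding and its reset() method are hypothetical stand-ins for whatever singleton state your app accumulates:
// app/Http/Middleware/ResetSingletonState.php (sketch)
namespace App\Http\Middleware;
use Closure;
class ResetSingletonState
{
    public function handle($request, Closure $next)
    {
        // Clear state left over from the previous request handled by this long-lived worker
        app('metrics')->reset(); // 'metrics' and reset() are hypothetical
        return $next($request);
    }
}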
Known issues: a package of known issues and solutions.
Logging: if you want to output to the console, you can use the stderr channel, e.g. Log::channel('stderr')->debug('debug message').
Laravel Dump Server (integrated by default since Laravel 5.7).
Read the request via the Illuminate\Http\Request object; $_ENV is readable, $_SERVER is partially readable; CANNOT USE $_GET/$_POST/$_FILES/$_COOKIE/$_REQUEST/$_SESSION/$GLOBALS.
public function form(\Illuminate\Http\Request $request)
{
$name = $request->input('name');
$all = $request->all();
$sessionId = $request->cookie('sessionId');
$photo = $request->file('photo');
// Call getContent() to get the raw POST body, instead of file_get_contents('php://input')
$rawContent = $request->getContent();
//...
}
Respond via the Illuminate\Http\Response object; compatible with echo/var_dump()/print_r(); CANNOT USE the functions dd()/exit()/die()/header()/setcookie()/http_response_code().
public function json()
{
return response()->json(['time' => time()])->header('header1', 'value1')->withCookie('c1', 'v1');
}
A singleton connection will be resident in memory, so it is recommended to turn on persistent connections for better performance.
The database connection will reconnect automatically and immediately after a disconnect.
// config/database.php
'connections' => [
'my_conn' => [
'driver' => 'mysql',
'host' => env('DB_MY_CONN_HOST', 'localhost'),
'port' => env('DB_MY_CONN_PORT', 3306),
'database' => env('DB_MY_CONN_DATABASE', 'forge'),
'username' => env('DB_MY_CONN_USERNAME', 'forge'),
'password' => env('DB_MY_CONN_PASSWORD', ''),
'charset' => 'utf8mb4',
'collation' => 'utf8mb4_unicode_ci',
'prefix' => '',
'strict' => false,
'options' => [
// Enable persistent connection
\PDO::ATTR_PERSISTENT => true,
],
],
],
The Redis connection won't reconnect automatically after a disconnect; it will throw a lost-connection exception and reconnect the next time. You need to make sure that you SELECT the DB correctly before operating on Redis every time.
// config/database.php
'redis' => [
'client' => env('REDIS_CLIENT', 'phpredis'), // It is recommended to use phpredis for better performance.
'default' => [
'host' => env('REDIS_HOST', 'localhost'),
'password' => env('REDIS_PASSWORD', null),
'port' => env('REDIS_PORT', 6379),
'database' => 0,
'persistent' => true, // Enable persistent connection
],
],
Avoid using global variables. If necessary, please clean or reset them manually.
Infinitely appending elements into a static/global variable will lead to OOM (Out of Memory).
class Test
{
public static $array = [];
public static $string = '';
}
// Controller
public function test(Request $req)
{
// Out of Memory
Test::$array[] = $req->input('param1');
Test::$string .= $req->input('param2');
}
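If such an accumulator is genuinely needed, a defensive sketch is to cap its size so a long-lived worker cannot grow it without bound (the limit of 1000 is an arbitrary illustration):
// Bounded variant of the example above (sketch)
class BoundedTest
{
    public static $array = [];
    public static function append($value, $limit = 1000)
    {
        if (count(self::$array) >= $limit) {
            array_shift(self::$array); // drop the oldest entry to keep memory bounded
        }
        self::$array[] = $value;
    }
}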
Memory leak detection method
Modify config/laravels.php: set worker_num=1 and max_request=1000000; remember to change them back after the test;
Add a route /debug-memory-leak without route middleware to observe the memory changes of the Worker process;
Route::get('/debug-memory-leak', function () {
global $previous;
$current = memory_get_usage();
$stats = [
'prev_mem' => $previous,
'curr_mem' => $current,
'diff_mem' => $current - $previous,
];
$previous = $current;
return $stats;
});
Start LaravelS and request /debug-memory-leak until diff_mem is less than or equal to zero; if diff_mem is always greater than zero, there may be a memory leak in the global middleware or in the Laravel framework itself;
After completing Step 3, alternately request the business routes and /debug-memory-leak (it is recommended to use ab/wrk to fire a large number of requests at the business routes, as shown below); an initial increase in memory is normal. After a large number of requests to the business routes, if diff_mem is always greater than zero and curr_mem keeps increasing, there is a high probability of a memory leak; if curr_mem always stays within a certain range and does not keep increasing, a memory leak is unlikely.
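For step 4, the business routes might be exercised like this (hypothetical route; 5200 is the default port used elsewhere in this guide):
ab -n 100000 -c 100 http://127.0.0.1:5200/your-business-route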
If you still can't solve it, max_request is the final safety net.
Author: hhxsv5
Source Code: https://github.com/hhxsv5/laravel-s
License: MIT License
1625213126
Linux server security is at a decent level from the moment you install the OS. And that's good to know, because hackers never sleep! They're kind of like digital vandals, taking pleasure – and sometimes money too – in inflicting misery on random strangers all over the planet.
Anyone who looks after their own server appreciates the fact that Linux is highly secure right out of the box. Naturally, it isn't completely watertight. But it does do a better job of keeping you safe than most other operating systems. Still, there are plenty of ways you can improve it further. So here are some practical ways to keep the evil hordes from the gates. It will probably help if you've tinkered under the hood of a web server before. But don't think that you have to be a tech guru or anything like that.
1596789120
Everything around us has become smart: smart infrastructure, smart cities, and autonomous vehicles, to name a few. The innovation of smart devices makes it possible to reach these heights in science and technology. But data is vulnerable, and there is a risk of attack by cybercriminals. To get started, let's learn about IoT devices.
The Internet of Things (IoT) is a system of interrelated computing devices – sensors, software, actuators, digital machines, and so on. They are linked to particular objects, work through the internet, and transfer data between devices without human interference.
Famous examples are Amazon Alexa, Apple Siri, interconnected baby monitors, video doorbells, and smart thermostats.
As technologies grow and evolve, the risks rise too. Ransomware attacks are continuously increasing, and securing data has become the top priority.
If you think your smart home has nothing to fear from cybercriminals, you should know that it is vulnerable. When cybercriminals access smart voice speakers like Amazon Alexa or Apple Siri, it becomes easy for them to steal your data.
A 2020 cybersecurity report says popular hacking forums exposed 770 million email addresses and 21 million unique passwords, and 620 million accounts were compromised from 16 hacked websites.
Attacks are likely to increase every year. To help you secure the data on your IoT devices, here are some of the best tips you can implement.
Your router's default name gives away its make and model. If you stick with the manufacturer's name, attackers can quickly identify the hardware. So give the router a name that is different from the default, without giving away personal information.
If your devices are connected to the internet, those connections are vulnerable to cyber attacks when the devices don't have proper security. Almost every web interface serves multiple devices, so it's hard to track each one. But it's crucial to stay aware of them.
Default usernames and passwords are attackable, because cybercriminals likely know the default credentials that come with IoT devices. So use strong passwords to access your IoT devices.
Avoid passwords that are easily guessed, such as '123456' or 'password1234'. Protect your accounts with strong, complex passwords formed from combinations of letters, numbers, and symbols that aren't easily bypassed.
Also, use different passwords for different accounts and change them regularly to avoid attacks. You can also lock an account after several wrong password attempts to safeguard it from hackers.
Do you keep an eye on your IoT devices through your mobile device from different locations? I recommend not using public Wi-Fi networks to access them, because public networks are easily accessible to everyone. If you must connect on the go, use a VPN that protects against cyber attacks and adds privacy and security features – for example, ExpressVPN.
There are software firewalls, such as intrusion detection/intrusion prevention systems, on the market. They are useful for screening and analyzing a network's wire traffic. Firewall scanners can identify security weaknesses within the network structure. Use these firewalls to get rid of unwanted security issues and vulnerabilities.
Every smart device comes with insecure default settings, and sometimes we cannot change all of them. Assess these defaults and reconfigure whatever you can.
Nowadays, every smart app offers authentication to secure accounts. There are several authentication methods: single-factor, two-step, and multi-factor authentication. Use one of these to send a one-time password (OTP) that verifies the user logging in to the smart device, to keep your accounts from falling into the wrong hands.
Every smart device manufacturer releases updates to fix bugs in their software. These security patches help improve the protection of the device. Also, keep the software updated on the smartphone you use to monitor your IoT devices, to avoid vulnerabilities.
When you connect your smart home to a smartphone and control it from there, you need to keep the phone safe too. If you lose the phone, almost all of your personal information is at risk from cybercriminals. Accidents happen, so make sure you can wipe all the data remotely.
Securing smart devices is essential in a world built on data. Cybercriminals are still bypassing security measures, so take these safety precautions to keep your accounts from falling into the wrong hands. I hope these steps help you secure your IoT devices.
If you have any tips of your own, feel free to share them in the comments! I'd love to hear them.
Looking for more? Subscribe to weekly newsletters that can help you stay updated on IoT application developments.