1673652360
A load balancer is a device that acts as a reverse proxy and distributes network traffic across a number of servers. It is used to increase the capacity and reliability of applications. By dividing user requests among multiple servers, a load balancer delivers a better user experience.
Load balancers can work on data found in network and transport layer protocols such as IP, TCP, and UDP.
They can also distribute requests based on data found in application layer protocols such as HTTP.
Load balancers follow industry-standard algorithms such as round robin, least connections, and IP hash.
F5 BIG-IP is essentially a load balancer that allows you to inspect and encrypt all the traffic passing through your network. It manages and balances the traffic load across a group or cloud of physical host servers by creating a virtual server with a single virtual IP address.
The configuration file of F5 BIG-IP contains all the information about the nodes, pool list, IPs, virtual server details, etc., so it is very important to take a backup of this configuration. We can back up the configuration files of F5 BIG-IP using two methods:
Log in to the virtual server of your F5 BIG-IP and open the command line tool. Navigate to the UCS folder, which is located at /var/local/ucs, as shown below. The UCS file contains all the files you need to restore your current configuration to a new or existing system.
The system displays the contents of the directory. Give the archive a name and save the UCS file in the default directory (as shown above) by typing the command shown below.
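The original screenshot is not reproduced here; on BIG-IP systems a UCS archive is typically created with tmsh along these lines (the archive name my_backup is only an example):
# Create a UCS archive named my_backup.ucs in the default directory /var/local/ucs
tmsh save /sys ucs my_backup
# List the directory to confirm the archive was written
ls -l /var/local/ucs/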
This way we can create the backup and copy the file to another server, to S3, or to wherever you would like to store it.
Log in to the Configuration utility.
Click on System >> Archives. The Archives page displays. Then select Create and enter a name (make sure the name is unique), as shown below:
The operation status page will appear; then click OK, as shown below:
It will then display the file which you have created. Copy the .ucs file to another system. For more information, you can also visit the official documentation.
Original article source at: https://blog.knoldus.com/
1671716040
Hello everyone! Today in this blog, we will learn how to backup and restore Elasticsearch using snapshots. Before diving in, let’s first brush up on the basics of the topic.
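The walkthrough's setup commands are not reproduced in this excerpt. As a rough sketch (the repository name my_backup, its location, and the snapshot name are assumptions), registering a filesystem snapshot repository and taking a snapshot of all indices looks like this:
# Register a filesystem snapshot repository (the location must be listed under path.repo in elasticsearch.yml)
curl -X PUT "localhost:9200/_snapshot/my_backup" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": { "location": "/mnt/es-backups" }
}'
# Take a snapshot of all indices and wait for it to finish
curl -X PUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"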
If everything goes well, the response should confirm that the snapshot was created successfully.
Now that we have successfully taken a backup of our indices, let's make sure we can retrieve the data if it gets lost. So, let us first delete our data.
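Deleting an index over the REST API looks like this (the index name my_index is hypothetical):
# Delete the index to simulate data loss
curl -X DELETE "localhost:9200/my_index"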
Now, if you check, all the data will be gone. So, let us restore our data using the snapshot we created.
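A restore request might look like this (using the hypothetical repository and snapshot names from above):
# Restore all indices contained in the snapshot
curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore"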
The above command will successfully restore all the lost or deleted data.
That’s it for now. I hope this article was useful to you. Please feel free to drop any comments, questions, or suggestions.
Original article source at: https://blog.knoldus.com/
1668153960
RsyncOSX and RsyncUI are GUIs on the Apple macOS platform for the command line tool rsync. It is rsync which executes the synchronization task; the GUIs are only for setting parameters and making it easier to use rsync, which is a fantastic tool.
The UI of RsyncOSX and RsyncUI can be difficult to understand for users who don't know rsync. Setting wrong parameters for rsync can result in deleted data, and neither RsyncOSX nor RsyncUI will stop you from doing so. That is why it is very important to execute a simulated run, a --dry-run, and verify the result before the real run.
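As an illustration, a simulated run on the command line could look like this (the source and destination paths are only examples):
# Simulate the synchronization first; nothing is copied or deleted
rsync --archive --verbose --delete --dry-run /Users/me/Documents/ /Volumes/Backup/Documents/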
If you have installed macOS Big Sur, RsyncOSX is the GUI for you. If you have installed macOS Monterey or macOS Ventura, you can use both GUIs in parallel.
Please be aware that it is an external task, not controlled by RsyncOSX, which executes the command line tool rsync. RsyncOSX monitors the task for progress and termination. The user can abort a task at any time; please let the abort finish and clean up properly before starting a new task. It might take a few seconds, and if you don't wait, the apps might become unresponsive.
One of the many advantages of utilizing rsync is that it can restart and continue the synchronization task from where it was aborted.
RsyncOSX is the only GUI which supports scheduling of tasks.
RsyncOSX is released for macOS Big Sur and later due to requirements in some features of Combine. Latest build is 8 September 2022.
RsyncUI is released for macOS Monterey and later.
Latest build is 5 November 2022.
Author: rsyncOSX
Source Code: https://github.com/rsyncOSX/RsyncOSX
License: MIT license
1665752940
A modern backup solution for Laravel apps
This Laravel package creates a backup of your application. The backup is a zip file that contains all files in the directories you specify along with a dump of your database. The backup can be stored on any of the filesystems you have configured in Laravel.
Feeling paranoid about backups? No problem! You can backup your application to multiple filesystems at once.
Once installed, taking a backup of your files and databases is very easy. Just issue this artisan command:
php artisan backup:run
But we didn't stop there. The package also provides a backup monitor to check the health of your backups. You can be notified via several channels when a problem with one of your backups is found. To avoid using excessive disk space, the package can also clean up old backups.
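For reference, the monitoring and cleanup features are exposed as their own artisan commands (check the package documentation for the exact behaviour in your version):
php artisan backup:clean     # remove old backups according to the configured retention rules
php artisan backup:monitor   # check the health of your backups and notify you about problems
php artisan backup:list      # display an overview of all backups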
This package requires PHP 8.0 and Laravel 8.0 or higher. You'll find installation instructions and full documentation on https://spatie.be/docs/laravel-backup.
If you are on a PHP version below 8.0 or a Laravel version below 8.0 just use an older version of this package.
Read the extensive documentation on version 3, on version 4, on version 5 and on version 6. We won't introduce new features to v6 and below anymore but we will still fix bugs.
Run the tests with:
composer test
Please see CHANGELOG for more information on what has changed recently.
Please see CONTRIBUTING for details.
If you discover any security-related issues, please email security@spatie.be instead of using the issue tracker.
You're free to use this package, but if it makes it to your production environment we highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using.
Our address is: Spatie, Kruikstraat 22, 2018 Antwerp, Belgium.
We publish all received postcards on our company website.
And a special thanks to Caneco for the logo ✨
Author: Spatie
Source Code: https://github.com/spatie/laravel-backup
License: MIT license
1665110640
Nextcloud All In One
Nextcloud AIO stands for Nextcloud All In One and provides easy deployment and maintenance with most features included in this one Nextcloud instance.
Included are:
The following instructions are especially meant for Linux. For macOS see this, for Windows see this.
Install Docker on your Linux installation using:
curl -fsSL get.docker.com | sudo sh
If you need ipv6 support, you should enable it by following https://docs.docker.com/config/daemon/ipv6/.
Run the command below in order to start the container:
(For people that cannot use ports 80 and/or 443 on this server, please follow the reverse proxy documentation because port 443 is used by this project and opened on the host by default even though it does not look like this is the case. Otherwise please run the command below!)
# For x64 CPUs:
sudo docker run \
--sig-proxy=false \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 80:80 \
--publish 8080:8080 \
--publish 8443:8443 \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest
Command for arm64 CPUs like the Raspberry Pi 4
# For arm64 CPUs:
sudo docker run \
--sig-proxy=false \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 80:80 \
--publish 8080:8080 \
--publish 8443:8443 \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest-arm64
After the initial startup, you should be able to open the Nextcloud AIO Interface now on port 8080 of this server.
E.g. https://ip.address.of.this.server:8080
If your firewall/router has ports 80 and 8443 open and you point a domain to your server, you can get a valid certificate automatically by opening the Nextcloud AIO Interface via: https://your-domain-that-points-to-this-server.tld:8443
Please do not forget to open ports 3478/TCP and 3478/UDP in your firewall/router for the Talk container!
Nextcloud AIO is inspired by projects like Portainer that manage the docker daemon by talking to it through the docker socket directly. This concept allows a user to install only one container with a single command that does the heavy lifting of creating and managing all containers that are needed in order to provide a Nextcloud installation with most features included. It also makes updating a breeze and is not bound to the host system (and its slow updates) anymore as everything is in containers. Additionally, it is very easy to handle from a user perspective because a simple interface for managing your Nextcloud AIO installation is provided.
Yes. Please refer to the following documentation on this: reverse-proxy.md
Only those (if you access the Mastercontainer Interface internally via port 8080): 443/TCP for the Apache container, plus 3478/TCP and 3478/UDP for the Talk container.
In detail, the ports are used as follows:
8080/TCP: Mastercontainer Interface with self-signed certificate (works always, also if only access via IP-address is possible, e.g. https://ip.address.of.this.server:8080/)
80/TCP: redirects to Nextcloud (is used for getting the certificate via ACME http-challenge for the Mastercontainer)
8443/TCP: Mastercontainer Interface with valid certificate (only works if ports 80 and 8443 are open in your firewall/router and you point a domain to your server. It then generates a valid certificate automatically, and access via e.g. https://public.domain.com:8443/ is possible.)
443/TCP: will be used by the Apache container later on and needs to be open in your firewall/router
3478/TCP and 3478/UDP: will be used by the Turnserver inside the Talk container and need to be open in your firewall/router
On macOS, there are two things different in comparison to Linux: instead of using --volume /var/run/docker.sock:/var/run/docker.sock:ro, you need to use --volume /var/run/docker.sock.raw:/var/run/docker.sock:ro after you have installed Docker Desktop. You also need to add -e DOCKER_SOCKET_PATH="/var/run/docker.sock.raw" to the startup command. Apart from that, it should work and behave the same as on Linux.
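Putting those two adjustments together, the macOS startup command might look like this (a sketch assembled from the Linux command above plus the changes just described; with Docker Desktop, sudo is usually not required):
docker run \
--sig-proxy=false \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 80:80 \
--publish 8080:8080 \
--publish 8443:8443 \
-e DOCKER_SOCKET_PATH="/var/run/docker.sock.raw" \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock.raw:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest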
On Windows, the following command should work in the command prompt after you installed Docker Desktop:
docker run ^
--sig-proxy=false ^
--name nextcloud-aio-mastercontainer ^
--restart always ^
--publish 80:80 ^
--publish 8080:8080 ^
--publish 8443:8443 ^
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config ^
--volume //var/run/docker.sock:/var/run/docker.sock:ro ^
nextcloud/all-in-one:latest
Please note: In order to make the built-in backup solution able to back up to the host system, you need to create a volume with the name nextcloud_aio_backupdir
beforehand:
docker volume create ^
--driver local ^
--name nextcloud_aio_backupdir ^
-o device="/host_mnt/c/your/backup/path" ^
-o type="none" ^
-o o="bind"
(The value /host_mnt/c/your/backup/path
in this example would be equivalent to C:\your\backup\path
on the Windows host. So you need to translate the path that you want to use into the correct format.) ⚠️️ Attention: Make sure that the path exists on the host before you create the volume! Otherwise everything will bug out!
Also, you may be interested in adjusting Nextcloud's Datadir to store the files on the host system. See this documentation on how to do it.
The easiest way to run it with Portainer on Linux is to use Portainer's stacks feature and use this docker-compose file in order to start AIO correctly.
Although it may not seem like it, from AIO's perspective a Cloudflare Argo Tunnel works like a reverse proxy. So please follow the reverse proxy documentation, which describes how to make AIO run behind a Cloudflare Argo Tunnel.
You can install AIO in reverse proxy mode, where it is also documented how to get it running using the ACME DNS-challenge for getting a valid certificate for AIO. See the reverse proxy documentation. (The relevant part is the Caddy with ACME DNS-challenge section.)
If you do not want to open Nextcloud to the public internet, you may have a look at the following documentation how to set it up locally: local-instance.md
No and they will not be. If you want to run it locally, without opening Nextcloud to the public internet, please have a look at the local instance documentation.
No and it will not be added. If you only want to run it locally, you may have a look at the following documentation: local-instance.md
No and they will not be. Please use a dedicated domain for Nextcloud and set it up correctly by following the reverse proxy documentation. If port 443 and/or 80 is blocked for you, you may use the ACME DNS-challenge or a Cloudflare Argo Tunnel.
No and it will not be added. Please use a dedicated domain for Nextcloud and set it up correctly by following the reverse proxy documentation.
The recommended way is to set up a local dns-server like a Pi-hole and add a custom dns-record for that domain that points to the internal ip-address of the server that runs Nextcloud AIO.
If you are completely sure that you've configured everything correctly and are not able to pass the domain validation, you may skip the domain validation by adding -e SKIP_DOMAIN_VALIDATION=true
to the docker run command of the mastercontainer.
It is known that Linux distros that use firewalld as their firewall daemon have problems with docker networks. In case the containers are not able to communicate with each other, you may change your firewalld to use the iptables backend by running:
sudo sed -i 's/FirewallBackend=nftables/FirewallBackend=iptables/g' /etc/firewalld/firewalld.conf
sudo systemctl restart firewalld docker
Afterwards it should work.
See https://dev.to/ozorest/fedora-32-how-to-solve-docker-internal-network-issue-22me for more details on this. This limitation is even mentioned on the official firewalld website: https://firewalld.org/#who-is-using-it
How to run occ commands? Simply run the following: sudo docker exec -it nextcloud-aio-nextcloud php occ your-command. Of course, your-command needs to be replaced with the command that you want to run.
What to do when Security & setup warnings displays "missing default phone region" after the initial install? Simply run the following command: sudo docker exec -it nextcloud-aio-nextcloud php occ config:system:set default_phone_region --value="yourvalue". Of course you need to modify yourvalue based on your location. Examples are DE, EN and GB. See this list for more codes: https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2#Officially_assigned_code_elements
See multiple-instances.md for some documentation on this.
Nextcloud features a built-in bruteforce protection which may get triggered and will block an ip-address or disable a user. You can unblock an ip-address by running sudo docker exec -it nextcloud-aio-nextcloud php occ security:bruteforce:reset <ip-address> and enable a disabled user by running sudo docker exec -it nextcloud-aio-nextcloud php occ user:enable <name of user>. See https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html#security for further information.
This project values stability over new features. That means that when a new major Nextcloud update gets introduced, we will wait at least until the first patch release, e.g. 24.0.1
is out before upgrading to it. Also we will wait with the upgrade until all important apps are compatible with the new major version. Minor or patch releases for Nextcloud and all dependencies as well as all containers will be updated to new versions as soon as possible but we try to give all updates first a good test round before pushing them. That means that it can take around 2 weeks before new updates reach the latest
channel. If you want to help testing, you can switch to the beta
channel by following this documentation which will also give you the updates earlier.
You can switch to a different channel like e.g. the beta channel or from the beta channel back to the latest channel by stopping the mastercontainer, removing it (no data will be lost) and recreating the container using the same command that you used initially to create the mastercontainer. For the beta channel on x64 you need to change the last line nextcloud/all-in-one:latest
to nextcloud/all-in-one:beta
and vice versa. For arm64 it is nextcloud/all-in-one:latest-arm64
and nextcloud/all-in-one:beta-arm64
, respectively.
If we push new containers to latest
, you will see in the AIO interface below the containers
section that new container updates were found. In this case, just press Stop containers
and Start containers
in order to update the containers. The mastercontainer has its own update procedure though. See below. And don't forget to back up the current state of your instance using the built-in backup solution before starting the containers again! Otherwise you won't be able to restore your instance easily if something should break during the update.
If a new Mastercontainer
update was found, you'll see an additional section below the containers
section which shows that a mastercontainer update is available. If so, you can simply press on the button to update the container.
Additionally, there is a cronjob that runs once a day that checks for container and mastercontainer updates and sends a notification to all Nextcloud admins if a new update was found.
If your Nextcloud is running and you are logged in as admin in your Nextcloud, you can easily log in to the AIO interface by opening https://yourdomain.tld/settings/admin/overview
which will show a button on top that enables you to log in to the AIO interface by just clicking on this button. Note: You can change the domain/ip-address/port of the button by simply stopping the containers, visiting the AIO interface from the correct and desired domain/ip-address/port and clicking once on Start containers
.
⚠️ Please note: Editing the configuration.json manually and making a mistake may break your instance so please create a backup first!
If you set up a new AIO instance, you need to enter a domain. Currently there is no way to change this domain afterwards from the AIO interface. So in order to change it, you need to manually edit the configuration.json that is most likely stored in /var/lib/docker/volumes/nextcloud_aio_mastercontainer/_data/data/configuration.json, substitute each occurrence of your old domain with your new domain, and save and write out the file. Afterwards restart your containers from the AIO interface, and everything should work as expected if the new domain is correctly configured.
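One way to do that substitution from the shell is with sed (a sketch; old.domain.com and new.domain.com are placeholders, and it is worth keeping a copy of the file first):
# Keep a copy of the file, then replace every occurrence of the old domain
sudo cp /var/lib/docker/volumes/nextcloud_aio_mastercontainer/_data/data/configuration.json /root/configuration.json.bak
sudo sed -i 's/old\.domain\.com/new.domain.com/g' /var/lib/docker/volumes/nextcloud_aio_mastercontainer/_data/data/configuration.json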
If you are running AIO behind a reverse proxy, you obviously also need to change the domain in your reverse proxy config.
If something unexpected happens during the initial installation, you might want to reset the AIO installation to be able to start from scratch.
Please note: if you already have it running and have data on your instance, you should not follow these instructions as it will delete all data that is coupled to your AIO instance.
Here is how to reset the AIO instance properly:
sudo docker stop nextcloud-aio-mastercontainer
sudo docker stop nextcloud-aio-domaincheck
sudo docker ps --filter "status=exited"
sudo docker container prune
sudo docker network rm nextcloud-aio
sudo docker volume ls --filter "dangling=true"
sudo docker volume prune
(On Windows you might need to remove some volumes afterwards manually with docker volume rm nextcloud_aio_backupdir and docker volume rm nextcloud_aio_nextcloud_datadir.) Also, if you've configured NEXTCLOUD_DATADIR to a path on your host instead of the default volume, you need to clean that up as well. You can also remove all unused images with sudo docker image prune -a.
Nextcloud AIO provides a local backup solution based on BorgBackup. These backups act as a local restore point in case the installation gets corrupted.
It is recommended to create a backup before any container update. By doing this, you will be safe regarding any possible complication during updates because you will be able to restore the whole instance with basically one click.
If you connect an external drive to your host, and choose the backup directory to be on that drive, you are also kind of safe against drive failures of the drive where the docker volumes are stored on.
How to do the above, step by step: mount the external drive on the host at e.g. /mnt/backup, choose the backup directory in the AIO interface to be located below /mnt/backup, and then click on Create Backup, which should create the first backup on the external disk.
Backups can be created and restored in the AIO interface using the buttons Create Backup and Restore selected backup. Additionally, a backup check is provided that checks the integrity of your backups, but it shouldn't be needed in most situations.
The backups themselves are encrypted with an encryption key that gets shown to you in the AIO interface. Please store it in a safe place, as you will not be able to restore from backup without this key.
Be aware that this solution does not back up files and folders that are mounted into Nextcloud using the external storage app.
Note that this implementation does not provide remote backups; for this you can use the backup app.
If you are running AIO in a LXC container, you need to make sure that FUSE is enabled in the LXC container settings. Otherwise the backup container will not be able to start as FUSE is required for it to work.
You can open the BorgBackup archives on your host by following these steps:
(instructions for Ubuntu Desktop)
# Install borgbackup on the host
sudo apt update && sudo apt install borgbackup
# Mount the archives to /tmp/borg (if you are using the default backup location /mnt/backup/borg)
sudo mkdir -p /tmp/borg && sudo borg mount "/mnt/backup/borg" /tmp/borg
# After entering your repository key successfully, you should be able to access all archives in /tmp/borg
# You can now do whatever you want by syncing them to a different place using rsync or doing other things
# E.g. you can open the file manager on that location by running:
xhost +si:localuser:root && sudo nautilus /tmp/borg
# When you are done, simply close the file manager and run the following command to unmount the backup archives:
sudo umount /tmp/borg
You can delete BorgBackup archives on your host manually by following these steps:
(instructions for Debian based OS' like Ubuntu)
# Install borgbackup on the host
sudo apt update && sudo apt install borgbackup
# List all archives (if you are using the default backup location /mnt/backup/borg)
sudo borg list "/mnt/backup/borg"
# After entering your repository key successfully, you should now see a list of all backup archives
# An example backup archive might be called 20220223_174237-nextcloud-aio
# Then you can simply delete the archive with:
sudo borg delete --stats --progress "/mnt/backup/borg::20220223_174237-nextcloud-aio"
After doing so, make sure to update the backup archives list in the AIO interface! You can do so by clicking on the Check backup integrity button or the Create backup button.
For increased backup security, you might consider syncing the backup repository regularly to another drive.
To do that, first add the drive to /etc/fstab
so that it is able to get automatically mounted and then create a script that does all the things automatically. Here is an example for such a script:
Click here to expand
#!/bin/bash
# Please modify all variables below to your needs:
SOURCE_DIRECTORY="/mnt/backup/borg"
DRIVE_MOUNTPOINT="/mnt/backup-drive"
TARGET_DIRECTORY="/mnt/backup-drive/borg"
########################################
# Please do NOT modify anything below! #
########################################
if [ "$EUID" -ne 0 ]; then
echo "Please run as root"
exit 1
fi
if ! [ -d "$SOURCE_DIRECTORY" ]; then
echo "The source directory does not exist."
exit 1
fi
if [ -z "$(ls -A "$SOURCE_DIRECTORY/")" ]; then
echo "The source directory is empty which is not allowed."
exit 1
fi
if ! [ -d "$DRIVE_MOUNTPOINT" ]; then
echo "The drive mountpoint must be an existing directory"
exit 1
fi
if ! grep -q " $DRIVE_MOUNTPOINT " /etc/fstab; then
echo "Could not find the drive mountpoint in the fstab file. Did you add it there?"
exit 1
fi
if ! mountpoint -q "$DRIVE_MOUNTPOINT"; then
mount "$DRIVE_MOUNTPOINT"
if ! mountpoint -q "$DRIVE_MOUNTPOINT"; then
echo "Could not mount the drive. Is it connected?"
exit 1
fi
fi
if [ -f "$SOURCE_DIRECTORY/lock.roster" ]; then
echo "Cannot run the script as the backup archive is currently changed. Please try again later."
exit 1
fi
mkdir -p "$TARGET_DIRECTORY"
if ! [ -d "$TARGET_DIRECTORY" ]; then
echo "Could not create target directory"
exit 1
fi
if [ -f "$SOURCE_DIRECTORY/aio-lockfile" ]; then
echo "Not continuing because aio-lockfile already exists."
exit 1
fi
touch "$SOURCE_DIRECTORY/aio-lockfile"
if ! rsync --stats --archive --human-readable --delete "$SOURCE_DIRECTORY/" "$TARGET_DIRECTORY"; then
echo "Failed to sync the backup repository to the target directory."
exit 1
fi
rm "$SOURCE_DIRECTORY/aio-lockfile"
rm "$TARGET_DIRECTORY/aio-lockfile"
umount "$DRIVE_MOUNTPOINT"
if docker ps --format "{{.Names}}" | grep "^nextcloud-aio-nextcloud$"; then
docker exec -it nextcloud-aio-nextcloud bash /notify.sh "Rsync backup successful!" "Synced the backup repository successfully."
else
echo "Synced the backup repository successfully."
fi
You can simply copy and paste the script into a file, e.g. named backup-script.sh, stored e.g. at /root/backup-script.sh. Do not forget to modify the variables to your requirements!
Afterwards apply the correct permissions with sudo chown root:root /root/backup-script.sh
and sudo chmod 700 /root/backup-script.sh
. Then you can create a cronjob that runs e.g. at 20:00
each week on Sundays like this:
Run sudo crontab -u root -e (and choose your editor of choice if not already done; I'd recommend nano). Then add a new line 0 20 * * 7 /root/backup-script.sh which will run the script at 20:00 on Sundays each week. Save with Ctrl + o -> Enter and close the editor with Ctrl + x.
Stopping and starting the containers, updating them, and creating the daily backup can also be triggered from the host by running the /daily-backup.sh script that is stored in the mastercontainer. It accepts the following environment variables:
AUTOMATIC_UPDATES: if set to 1, it will automatically stop the containers, update them and start them, including the mastercontainer. If the mastercontainer gets updated, this script's execution will stop as soon as the mastercontainer gets stopped. You can then wait until it is started again and run the script with this flag again in order to update all containers correctly afterwards.
DAILY_BACKUP: if set to 1, it will automatically stop the containers and create a backup. If you want to start them again afterwards, have a look at the START_CONTAINERS option. Please be aware that this option is non-blocking if START_CONTAINERS and AUTOMATIC_UPDATES are not enabled at the same time, which means that the backup check is not done when the process is finished, since it only starts the borgbackup container with the correct configuration.
START_CONTAINERS: if set to 1, it will automatically start the containers without updating them.
STOP_CONTAINERS: if set to 1, it will automatically stop the containers.
CHECK_BACKUP: if set to 1, it will start the backup check. This is not allowed to be enabled at the same time as DAILY_BACKUP. Please be aware that this option is non-blocking, which means that the backup check is not done when the process is finished, since it only starts the borgbackup container with the correct configuration.
One example for this would be sudo docker exec -it -e DAILY_BACKUP=1 nextcloud-aio-mastercontainer /daily-backup.sh, which you can run via a cronjob or put in a script.
⚠️ Please note that none of the options returns error codes, so you need to check for the correct result yourself.
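If you call the script from the root crontab, drop the -it flags since cron provides no TTY; a nightly entry might look like this (the time is arbitrary):
# Stop the containers and create a backup every night at 04:00
0 4 * * * docker exec -e DAILY_BACKUP=1 nextcloud-aio-mastercontainer /daily-backup.sh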
If you already have a backup solution in place, you may want to hide the backup section. You can do so by adding -e DISABLE_BACKUP_SECTION=true
to the initial startup of the mastercontainer.
⚠️ Attention: It is very important to change the datadir before Nextcloud is installed/started the first time and not to change it afterwards! If you still want to do it afterwards, see this on how to do it.
You can configure the Nextcloud container to use a specific directory on your host as data directory. You can do so by adding the environmental variable NEXTCLOUD_DATADIR
to the initial startup of the mastercontainer. Allowed values for that variable are strings that start with /
and are not equal to /
.
Examples are -e NEXTCLOUD_DATADIR="/mnt/ncdata", -e NEXTCLOUD_DATADIR="/var/nextcloud-data" or -e NEXTCLOUD_DATADIR="/volume1/docker/nextcloud/data". On Windows you can use -e NEXTCLOUD_DATADIR="nextcloud_aio_nextcloud_datadir". In order to use this, you need to create the nextcloud_aio_nextcloud_datadir volume beforehand:
volume beforehand:docker volume create ^
--driver local ^
--name nextcloud_aio_nextcloud_datadir ^
-o device="/host_mnt/c/your/data/path" ^
-o type="none" ^
-o o="bind"
(The value /host_mnt/c/your/data/path in this example would be equivalent to C:\your\data\path on the Windows host, so you need to translate the path that you want to use into the correct format.) ⚠️️ Attention: Make sure that the path exists on the host before you create the volume! Otherwise everything will bug out!
⚠️ Please make sure to apply the correct permissions to the chosen directory before starting Nextcloud the first time (not needed on Windows).
For /mnt/ncdata: sudo chown -R 33:0 /mnt/ncdata and sudo chmod -R 750 /mnt/ncdata.
For /var/nextcloud-data: sudo chown -R 33:0 /var/nextcloud-data and sudo chmod -R 750 /var/nextcloud-data.
For /volume1/docker/nextcloud/data: sudo chown -R 33:0 /volume1/docker/nextcloud/data and sudo chmod -R 750 /volume1/docker/nextcloud/data
By default, the Nextcloud container is confined and cannot access directories on the host OS. You might want to change this when you are planning to use local external storage in Nextcloud to store some files outside the data directory and can do so by adding the environmental variable NEXTCLOUD_MOUNT
to the initial startup of the mastercontainer. Allowed values for that variable are strings that start with /
and are not equal to /
.
Examples are -e NEXTCLOUD_MOUNT="/mnt/", -e NEXTCLOUD_MOUNT="/media/" or -e NEXTCLOUD_MOUNT="/volume1/".
After using this option, please make sure to apply the correct permissions to the directories that you want to use in Nextcloud. E.g. sudo chown -R 33:0 /mnt/your-drive-mountpoint and sudo chmod -R 750 /mnt/your-drive-mountpoint should make it work on Linux when you have used -e NEXTCLOUD_MOUNT="/mnt/".
You can then navigate to the apps management page, activate the external storage app, navigate to https://your-nc-domain.com/settings/admin/externalstorages
and add a local external storage directory that will be accessible inside the container at the same place that you've entered. E.g. /mnt/your-drive-mountpoint
will be mounted to /mnt/your-drive-mountpoint
inside the container, etc.
Be aware though that these locations will not be covered by the built-in backup solution!
By default, the Talk container uses port 3478/UDP and 3478/TCP for connections. You can adjust the port by adding e.g. -e TALK_PORT=3478 to the initial docker run command and adjusting the port to your desired value.
By default, uploads to Nextcloud are limited to a maximum of 10G. You can adjust the upload limit by providing -e NEXTCLOUD_UPLOAD_LIMIT=10G to the docker run command of the mastercontainer and customizing the value to your needs. It must start with a number and end with G, e.g. 10G.
By default, uploads to Nextcloud are limited to a maximum of 3600s. You can adjust the upload time limit by providing -e NEXTCLOUD_MAX_TIME=3600 to the docker run command of the mastercontainer and customizing the value to your needs. It must be a number, e.g. 3600.
If you get an error during the domain validation which states that your ip-address is an internal or reserved ip-address, you can fix this by first making sure that your domain indeed has the correct public ip-address that points to the server and then adding --add-host yourdomain.com:<public-ip-address>
to the initial docker run command which will allow the domain validation to work correctly. And so that you know: even if the A
record of your domain should change over time, this is no problem since the mastercontainer will not make any attempt to access the chosen domain after the initial domain validation.
You can run AIO also with docker rootless. How to do this is documented here: docker-rootless.md
When your containers run for a few days without a restart, the container logs that you can view from the AIO interface can get really huge. You can limit the log sizes by enabling logrotate for docker container logs. Feel free to enable this by following these instructions: https://sandro-keil.de/blog/logrotate-for-docker-container/
The files and folders that you add to Nextcloud are by default stored in the following directory: /var/lib/docker/volumes/nextcloud_aio_nextcloud_data/_data/
on the host. If needed, you can modify/add/delete files/folders there but ATTENTION: be very careful when doing so because you might corrupt your AIO installation! Best is to create a backup using the built-in backup solution before editing/changing files/folders in there because you will then be able to restore your instance to the backed up state.
After you are done modifying/adding/deleting files/folders, don't forget to apply the correct permissions by running: sudo chown -R 33:0 /var/lib/docker/volumes/nextcloud_aio_nextcloud_data/_data/*
and sudo chmod -R 750 /var/lib/docker/volumes/nextcloud_aio_nextcloud_data/_data/*
and rescan the files with sudo docker exec -it nextcloud-aio-nextcloud php occ files:scan --all
.
You can move the whole docker library and all its files including all Nextcloud AIO files and folders to a separate drive by first mounting the drive in the host OS (NTFS is not supported) and then following this tutorial: https://www.guguweb.com/2019/02/07/how-to-move-docker-data-directory-to-another-location-on-ubuntu/
(Of course docker needs to be installed first for this to work.)
You can edit Nextcloud's config.php file directly from the host with your favorite text editor, e.g. like this: sudo nano /var/lib/docker/volumes/nextcloud_aio_nextcloud/_data/config/config.php. Make sure not to break the file, as that might otherwise corrupt your Nextcloud instance. Ideally, create a backup using the built-in backup solution before editing the file.
If you want to define a custom skeleton directory, you can do so by putting your skeleton files into /var/lib/docker/volumes/nextcloud_aio_nextcloud_data/_data/skeleton/, applying the correct permissions with sudo chown -R 33:0 /var/lib/docker/volumes/nextcloud_aio_nextcloud_data/_data/skeleton and sudo chmod -R 750 /var/lib/docker/volumes/nextcloud_aio_nextcloud_data/_data/*, and setting the skeleton directory option with sudo docker exec -it nextcloud-aio-nextcloud php occ config:system:set skeletondirectory --value="/mnt/ncdata/skeleton". You can read further on this option in the Nextcloud documentation.
You can configure your server to block certain ip-addresses using fail2ban as bruteforce protection. Here is how to set it up: https://docs.nextcloud.com/server/stable/admin_manual/installation/harden_server.html#setup-fail2ban. The logpath of AIO is by default /var/lib/docker/volumes/nextcloud_aio_nextcloud/_data/data/nextcloud.log. Do not forget to add chain=DOCKER-USER to your nextcloud jail config (nextcloud.local), otherwise the nextcloud service running on docker will still be accessible even if the IP is banned. Also, you may change the blocked ports to cover all AIO ports: by default 80,443,8080,8443,3478 (see this).
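A jail definition following those hints could look roughly like this (a sketch; it assumes you have already created the nextcloud filter from the linked hardening guide, and the maxretry/bantime values are arbitrary):
[nextcloud]
enabled  = true
port     = 80,443,8080,8443,3478
filter   = nextcloud
logpath  = /var/lib/docker/volumes/nextcloud_aio_nextcloud/_data/data/nextcloud.log
maxretry = 3
bantime  = 86400
chain    = DOCKER-USER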
It is possible to connect to an existing LDAP server. You need to make sure that the LDAP server is reachable from the Nextcloud container. Then you can enable the LDAP app and configure LDAP in Nextcloud manually. If you don't have an LDAP server yet, it is recommended to use this docker container: https://hub.docker.com/r/nitnelave/lldap. Make sure here as well that Nextcloud can talk to the LDAP server. The easiest way is to add the LDAP docker container to the docker network nextcloud-aio. Then you can connect to the LDAP container by its name from the Nextcloud container.
Netdata allows you to monitor your server using a GUI. You can install it by following https://learn.netdata.cloud/docs/agent/packaging/docker#create-a-new-netdata-agent-container.
If you want to use the user_sql app, the easiest way is to create an additional database container and add it to the docker network nextcloud-aio
. Then the Nextcloud container should be able to talk to the database container using its name.
It is possible to install any of these to get a GUI for your AIO database. The pgAdmin container is recommended. You can get some docs on it here: https://www.pgadmin.org/docs/pgadmin4/latest/container_deployment.html. For the container to connect to the aio-database, you need to connect the container to the docker network nextcloud-aio
and use nextcloud-aio-database
as database host, oc_nextcloud
as database username and the password that you get when running sudo grep dbpassword /var/lib/docker/volumes/nextcloud_aio_nextcloud/_data/config/config.php
as the password.
You can configure one yourself by using either of these three recommended projects: Docker Mailserver, Maddy Mail Server or Mailcow. Docker Mailserver and Maddy Mail Server are probably a bit easier to set up as it is possible to run them using only one container but Mailcow has much more features.
Please see the following documentation on this: migration.md
For new containers to be integrated into AIO itself, they must meet specific requirements. Even if a container is not considered for integration, we may add some documentation for it.
What are the requirements?
For some applications it might be necessary to establish a secured connection to a host / server which is using a certificate issued by a Certification Authority that is not trusted out of the box. An example could be configuring LDAPS against the Domain Controller (Active Directory) of an organization.
You can make the Nextcloud container trust any Certification Authority by providing the environment variable TRUSTED_CACERTS_DIR when starting the AIO mastercontainer. The value of the variable should be the absolute path to a directory on the host which contains one or more Certification Authority certificates. You should use X.509 certificates, Base64 encoded. (Other formats may work but have not been tested!) All the certificates in the directory will be trusted.
When using docker run
, the environmental variable can be set with -e TRUSTED_CACERTS_DIR=/path/to/my/cacerts
.
In order for the value to be valid, the path should start with /
and not end with '/' and point to an existing directory. Pointing the variable directly to a certificate file will not work and may also break things.
The Collabora container enables Seccomp by default, which is a security feature of the Linux kernel. On systems without this kernel feature enabled, you need to provide -e COLLABORA_SECCOMP_DISABLED=true
to the initial docker run command in order to make it work.
Author: Nextcloud
Source Code: https://github.com/nextcloud/all-in-one
License: AGPL-3.0 license
1661908980
This is a package that makes backing up your mongo databases to S3 simple. The binary file is a node cronjob that runs at midnight every day and backs up the database specified in the config file.
npm install mongodb_s3_backup -g
To configure the backup, you need to pass the binary a JSON configuration file. There is a sample configuration file supplied in the package (config.sample.json
). The file should have the following format:
{
"mongodb": {
"host": "localhost",
"port": 27017,
"username": false,
"password": false,
"db": "database_to_backup"
},
"s3": {
"key": "your_s3_key",
"secret": "your_s3_secret",
"bucket": "s3_bucket_to_upload_to",
"destination": "/",
"encrypt": true,
"region": "s3_region_to_use"
},
"cron": {
"time": "11:59",
}
}
All options in the "s3" object, except for desination, will be directly passed to knox, therefore, you can include any of the options listed in the knox documentation.
You may optionally substitute the cron "time" field with an explicit "crontab" of the standard format 0 0 * * *
.
"cron": {
"crontab": "0 0 * * *"
}
Note: The version of cron that we run supports a sixth digit (which is in seconds) if you need it.
The optional "timezone" allows you to specify timezone-relative time regardless of local timezone on the host machine.
"cron": {
"time": "00:00",
"timezone": "America/New_York"
}
You must first run npm install time to use the "timezone" specification.
To start a long-running process with scheduled cron job:
mongodb_s3_backup <path to config file>
To execute a backup immediately and exit:
mongodb_s3_backup -n <path to config file>
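If you would rather let the system crontab handle scheduling than keep the long-running process around, a sketch (the config path is only an example):
# Run a one-off backup every night at midnight
0 0 * * * mongodb_s3_backup -n /etc/mongodb_s3_backup/config.json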
Author: Theycallmeswift
Source Code: https://github.com/theycallmeswift/node-mongodb-s3-backup
1658793060
This Laravel package creates a backup of your application. The backup is a zip file that contains all files in the directories you specify along with a dump of your database. The backup can be stored on any of the filesystems you have configured in Laravel.
Feeling paranoid about backups? No problem! You can backup your application to multiple filesystems at once.
Once installed, taking a backup of your files and databases is very easy. Just issue this artisan command:
php artisan backup:run
But we didn't stop there. The package also provides a backup monitor to check the health of your backups. You can be notified via several channels when a problem with one of your backups is found. To avoid using excessive disk space, the package can also clean up old backups.
This package requires PHP 8.0 and Laravel 8.0 or higher. You'll find installation instructions and full documentation on https://spatie.be/docs/laravel-backup.
If you are on a PHP version below 8.0 or a Laravel version below 8.0 just use an older version of this package.
Read the extensive documentation on version 3, on version 4, on version 5 and on version 6. We won't introduce new features to v6 and below anymore but we will still fix bugs.
Run the tests with:
composer test
Please see CHANGELOG for more information on what has changed recently.
Please see CONTRIBUTING for details.
If you discover any security-related issues, please email security@spatie.be instead of using the issue tracker.
You're free to use this package, but if it makes it to your production environment we highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using.
Our address is: Spatie, Kruikstraat 22, 2018 Antwerp, Belgium.
We publish all received postcards on our company website.
And a special thanks to Caneco for the logo ✨
We invest a lot of resources into creating best in class open source packages. You can support us by buying one of our paid products.
We highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using. You'll find our address on our contact page. We publish all received postcards on our virtual postcard wall.
Author: Spatie
Source Code: https://github.com/spatie/laravel-backup
License: MIT license
1658748120
This package provides a framework-agnostic database backup manager for dumping to and restoring databases from S3, Dropbox, FTP, SFTP, and Rackspace Cloud.
Supported databases: MySQL and PostgreSQL. Supported compression: Gzip.
Configure your databases.
// config/database.php
'development' => [
'type' => 'mysql',
'host' => 'localhost',
'port' => '3306',
'user' => 'root',
'pass' => 'password',
'database' => 'test',
// If singleTransaction is set to true, the --single-transaction flag will be set.
// This is useful on transactional databases like InnoDB.
// http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_single-transaction
'singleTransaction' => false,
// Do not dump the given tables
// Set only table names, without database name
// Example: ['table1', 'table2']
// http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_ignore-table
'ignoreTables' => [],
// using ssl to connect to your database - active ssl-support (mysql only):
'ssl'=>false,
// add additional options to dump-command (like '--max-allowed-packet')
'extraParams'=>null,
],
'production' => [
'type' => 'postgresql',
'host' => 'localhost',
'port' => '5432',
'user' => 'postgres',
'pass' => 'password',
'database' => 'test',
],
Configure your filesystems.
// config/storage.php
'local' => [
'type' => 'Local',
'root' => '/path/to/working/directory',
],
's3' => [
'type' => 'AwsS3',
'key' => '',
'secret' => '',
'region' => 'us-east-1',
'version' => 'latest',
'bucket' => '',
'root' => '',
'use_path_style_endpoint' => false,
],
'b2' => [
'type' => 'B2',
'key' => '',
'accountId' => '',
'bucket' => '',
],
'gcs' => [
'type' => 'Gcs',
'key' => '',
'secret' => '',
'version' => 'latest',
'bucket' => '',
'root' => '',
],
'rackspace' => [
'type' => 'Rackspace',
'username' => '',
'key' => '',
'container' => '',
'zone' => '',
'root' => '',
],
'dropbox' => [
'type' => 'DropboxV2',
'token' => '',
'key' => '',
'secret' => '',
'app' => '',
'root' => '',
],
'ftp' => [
'type' => 'Ftp',
'host' => '',
'username' => '',
'password' => '',
'root' => '',
'port' => 21,
'passive' => true,
'ssl' => true,
'timeout' => 30,
],
'sftp' => [
'type' => 'Sftp',
'host' => '',
'username' => '',
'password' => '',
'root' => '',
'port' => 21,
'timeout' => 10,
'privateKey' => '',
],
'flysystem' => [
'type' => 'Flysystem',
'name' => 's3_backup',
//'prefix' => 'upload',
],
'doSpaces' => [
'type' => 'AwsS3',
'key' => '',
'secret' => '',
'region' => '',
'bucket' => '',
'root' => '',
'endpoint' => '',
'use_path_style_endpoint' => false,
],
'webdav' => [
'type' => 'Webdav',
'baseUri' => 'http://myserver.com',
'userName' => '',
'password' => '',
'prefix' => '',
],
Backup to / restore from any configured database.
Backup the development database to Amazon S3. The S3 backup path will be test/backup.sql.gz in the end, when gzip is done with it.
use BackupManager\Filesystems\Destination;
$manager = require 'bootstrap.php';
$manager->makeBackup()->run('development', [new Destination('s3', 'test/backup.sql')], 'gzip');
Backup to / restore from any configured filesystem.
Restore the database file test/backup.sql.gz from Amazon S3 to the development database.
$manager = require 'bootstrap.php';
$manager->makeRestore()->run('s3', 'test/backup.sql.gz', 'development', 'gzip');
This package does not allow you to backup from one database type and restore to another. A MySQL dump is not compatible with PostgreSQL.
Requirements: the mysqldump and mysql command-line binaries, the pg_dump and psql command-line binaries, the gzip and gunzip command-line binaries, and Composer.
Run the following to include this via Composer
composer require backup-manager/backup-manager
Then, you'll need to select the appropriate packages for the adapters that you want to use.
# to support s3
composer require league/flysystem-aws-s3-v3
# to support b2
composer require mhetreramesh/flysystem-backblaze
# to support google cs
composer require league/flysystem-aws-s3-v2
# to install the preferred dropbox v2 driver
composer require spatie/flysystem-dropbox
# to install legacy dropbox v2 driver
composer require srmklive/flysystem-dropbox-v2
# to support rackspace
composer require league/flysystem-rackspace
# to support sftp
composer require league/flysystem-sftp
# to support webdav (supported by owncloud and many others)
composer require league/flysystem-webdav
Once installed, the package must be bootstrapped (initial configuration) before it can be used.
We've provided a native PHP example here.
The required bootstrapping can be found in the example here.
We recommend using the vagrant configuration supplied with this package for development and contribution. Simply install VirtualBox, Vagrant, and Ansible, then run vagrant up in the root folder. A virtual machine specifically designed for development of the package will be built and launched for you.
When contributing, please consider the following guidelines: Interfaces should NOT be suffixed with Interface, and Traits should NOT be suffixed with Trait.
This package is maintained by Shawn McCool and you!
Remove support for symfony 2. Specifically symfony/process versions < 3.x
Author: Backup-manager
Source Code: https://github.com/backup-manager/backup-manager
License: MIT license
1658740620
Laravel Driver for the Database Backup Manager
This package pulls in the framework agnostic Backup Manager and provides seamless integration with Laravel.
Note: This package is for Laravel integration only. For information about the framework-agnostic core package (or the Symfony driver) please see the base package repository.
It's stable enough, you'll need to understand filesystem permissions.
This package is being actively developed, and we would like to get feedback to improve it. Please feel free to submit feedback.
Requirements: the mysqldump and mysql command-line binaries, the pg_dump and psql command-line binaries, the gzip and gunzip command-line binaries, and Composer.
Run the following to include this via Composer
composer require backup-manager/laravel
Then, you'll need to select the appropriate packages for the adapters that you want to use.
# to support s3 or google cs
composer require league/flysystem-aws-s3-v3
# to support dropbox
composer require srmklive/flysystem-dropbox-v2
# to support rackspace
composer require league/flysystem-rackspace
# to support sftp
composer require league/flysystem-sftp
# to support gcs
composer require superbalist/flysystem-google-storage
To install into a Laravel project, first do the composer install, then add ONE of the following classes to your config/app.php service providers list.
// FOR LARAVEL 5.5 +
BackupManager\Laravel\Laravel55ServiceProvider::class,
Publish the storage configuration file.
php artisan vendor:publish --provider="BackupManager\Laravel\Laravel55ServiceProvider"
The Backup Manager will make use of Laravel's database configuration. But, it won't know about any connections that might be tied to other environments, so it can be best to just list multiple connections in the config/database.php
file.
We can also add extra parameters on our backup manager commands by configuring extra params on .env
file:
BACKUP_MANAGER_EXTRA_PARAMS="--column-statistics=0 --max-allowed-packet"
To install into a Lumen project, first do the composer install then add the configuration file loader and ONE of the following service providers to your bootstrap/app.php
.
// FOR LUMEN 5.5 AND ABOVE
$app->configure('backup-manager');
$app->register(BackupManager\Laravel\Lumen55ServiceProvider::class);
Copy the vendor/backup-manager/laravel/config/backup-manager.php
file to config/backup-manager.php
and configure it to suit your needs.
IoC Resolution
BackupManager\Manager
can be automatically resolved through constructor injection thanks to Laravel's IoC container.
use BackupManager\Manager;
public function __construct(Manager $manager) {
$this->manager = $manager;
}
It can also be resolved manually from the container.
$manager = App::make(\BackupManager\Manager::class);
Artisan Commands
There are three commands available: db:backup, db:restore and db:list.
All will prompt you with simple questions to successfully execute the command.
Example Command for 24hour scheduled cronjob
php artisan db:backup --database=mysql --destination=dropbox --destinationPath=project --timestamp="d-m-Y" --compression=gzip
This command will back up your database to dropbox using mysql and gzip compression in the path /backups/project/DATE.gz (e.g. /backups/project/31-7-2015.gz).
It's possible to schedule backups using Laravel's scheduler.
/**
* Define the application's command schedule.
*
* @param \Illuminate\Console\Scheduling\Schedule $schedule
* @return void
*/
protected function schedule(Schedule $schedule) {
$environment = config('app.env');
$schedule->command(
"db:backup --database=mysql --destination=s3 --destinationPath=/{$environment}/projectname --timestamp="Y_m_d_H_i_s" --compression=gzip"
)->twiceDaily(13,21);
}
We recommend using the vagrant configuration supplied with this package for development and contribution. Simply install VirtualBox, Vagrant, and Ansible, then run vagrant up in the root folder. A virtual machine specifically designed for development of the package will be built and launched for you.
When contributing, please consider the following guidelines: follow the existing code style (e.g. negation written as if ( ! is_null(...)) {), Interfaces should NOT be suffixed with Interface, and Traits should NOT be suffixed with Trait.
This package is maintained by Shawn McCool and open-source heroes.
2.0
Released on 2020-04-30
Remove support for all Laravel versions below 5.5. All older versions should use the backup-manager ^1.0
.
Since so many dependencies in Laravel / Symfony have changed it became impossible to support newer versions in the same code-base. Release ^1.0
is stable and is always accepting new stability fixes (we haven't seen anything to fix in a long time).
Watch a video tour to get an idea what is possible with this package.
Author: Backup-manager
Source Code: https://github.com/backup-manager/laravel
License: MIT license
1656759060
DatabaseBackup is a CakePHP plugin to export, import and manage database backups. Currently, the plugin supports MySql, Postgres and Sqlite databases.
You can install the plugin via composer:
$ composer require --prefer-dist mirko-pagliai/cakephp-database-backup
Then you have to load the plugin. For more information on how to load the plugin, please refer to the Cookbook.
You can simply execute the shell command to enable the plugin:
bin/cake plugin load DatabaseBackup
This would update your application's bootstrap method.
By default the plugin uses the APP/backups
directory to save the backups files. So you have to create the directory and make it writable:
$ mkdir backups/ && chmod 775 backups/
If you want to use a different directory, read the Configuration section.
Recent releases and the master branch require at least CakePHP 4.0 and PHP 7.2, whereas the cakephp3 branch requires at least PHP 5.6. In that case, you can install that branch instead:
$ composer require --prefer-dist mirko-pagliai/cakephp-database-backup:dev-cakephp3
Note that the cakephp3
branch will no longer be updated as of April 29, 2021, except for security patches, and it matches the 2.8.5 version.
DatabaseBackup requires:
mysql and mysqldump for MySql databases;
pg_dump and pg_restore for Postgres databases;
sqlite3 for Sqlite databases.
Optionally, if you want to handle compressed backups, bzip2 and gzip are also required.
The installation of these binaries may vary depending on your operating system.
Going forward, remember that the database user must have the correct permissions (for example, for mysql the user must have the LOCK TABLES permission).
The plugin uses some configuration parameters. See our wiki:
If you want to send backup files by email, remember to set up your application correctly so that it can send emails. For more information on how to configure your application, see the Cookbook.
See our wiki:
And refer to our API.
Tests are run for only one driver at a time, by default mysql
. To choose another driver to use, you can set the driver_test
environment variable before running phpunit
.
For example:
driver_test=sqlite vendor/bin/phpunit
driver_test=postgres vendor/bin/phpunit
Alternatively, you can set the db_dsn
environment variable, indicating the connection parameters. In this case, the driver type will still be detected automatically.
For example:
db_dsn=sqlite:///' . TMP . 'example.sq3 vendor/bin/phpunit
For transparency and insight into our release cycle and to maintain backward compatibility, DatabaseBackup will be maintained under the Semantic Versioning guidelines.
Did you like this plugin? Its development requires a lot of time for me. Please consider the possibility of making a donation: even a coffee is enough! Thank you.
Author: Mirko-pagliai
Source Code: https://github.com/mirko-pagliai/cakephp-database-backup
License: View license
1655100960
restic is a backup program that is fast, efficient and secure. It supports the three major operating systems (Linux, macOS, Windows) and a few smaller ones (FreeBSD, OpenBSD).
Once you've installed restic, start off with creating a repository for your backups:
$ restic init --repo /tmp/backup
enter password for new backend:
enter password again:
created restic backend 085b3c76b9 at /tmp/backup
Please note that knowledge of your password is required to access the repository.
Losing your password means that your data is irrecoverably lost.
and add some data:
$ restic --repo /tmp/backup backup ~/work
enter password for repository:
scan [/home/user/work]
scanned 764 directories, 1816 files in 0:00
[0:29] 100.00% 54.732 MiB/s 1.582 GiB / 1.582 GiB 2580 / 2580 items 0 errors ETA 0:00
duration: 0:29, 54.47MiB/s
snapshot 40dc1520 saved
Next you can either use restic restore to restore files, or use restic mount to mount the repository via FUSE and browse the files from previous snapshots.
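For example, you could list the stored snapshots, restore one, or mount the repository (a quick sketch using the snapshot ID from the run above; /tmp/restore-work and /mnt/restic are placeholder directories you would create yourself):
$ restic --repo /tmp/backup snapshots
$ restic --repo /tmp/backup restore 40dc1520 --target /tmp/restore-work
$ restic --repo /tmp/backup mount /mnt/restic
Each command prompts for the repository password, and mount requires FUSE to be available on the system.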
For more options check out the online documentation.
Backends
Saving a backup on the same machine is nice but not a real backup strategy. Therefore, restic supports the following backends for storing backups natively:
Design Principles
Restic is a program that does backups right and was designed with the following principles in mind:
Easy: Doing backups should be a frictionless process, otherwise you might be tempted to skip it. Restic should be easy to configure and use, so that, in the event of a data loss, you can just restore it. Likewise, restoring data should not be complicated.
Fast: Backing up your data with restic should only be limited by your network or hard disk bandwidth so that you can backup your files every day. Nobody does backups if it takes too much time. Restoring backups should only transfer data that is needed for the files that are to be restored, so that this process is also fast.
Verifiable: Much more important than backup is restore, so restic enables you to easily verify that all data can be restored.
Secure: Restic uses cryptography to guarantee confidentiality and integrity of your data. The location the backup data is stored is assumed not to be a trusted environment (e.g. a shared space where others like system administrators are able to access your backups). Restic is built to secure your data against such attackers.
Efficient: With the growth of data, additional snapshots should only take the storage of the actual increment. Even more, duplicate data should be de-duplicated before it is actually written to the storage back end to save precious backup space.
Reproducible Builds
The binaries released with each restic version starting at 0.6.1 are reproducible, which means that you can reproduce a byte identical version from the source code for that release. Instructions on how to do that are contained in the builder repository.
You can follow the restic project on Twitter @resticbackup or by subscribing to the project blog.
For detailed usage and installation instructions check out the documentation.
You can ask questions in our Discourse forum.
Author: Restic
Source Code: https://github.com/restic/restic
License: BSD-2-Clause license
1652775745
Many developers have a personal blog on Hashnode, one of the most popular blogging communities for people exploring technology.
I also write on Hashnode, but recently I was wondering whether there was a way to back up my published articles to GitHub. Fortunately, I found an easy way to do it directly from Hashnode.
So in this article I will share the process so that you, too, can create an automatic backup of your articles in your GitHub repository. There is no harm in trying something new, right? 😊
✨ I also made a full-length video showing the process of setting up that automatic backup to your GitHub repository. I have attached the video later in this article, so you can check that out as well.
First of all, let me show you my Hashnode account. This is just a sample of a typical profile there.
I have published only one article (as of today, May 15, 2022). But I still want to set up an automatic backup process for this account so that whenever I write something new, all the articles are copied to my selected GitHub repository as a backup.
So let's do that, shall we? 😁
Go to your GitHub account. First we need to create a special repository where we will set up the automatic backup process.
Now simply click the + button at the top right of the web page.
Then click on New repository.
Now we have to create a new repository. The process is similar to creating any other repository.
You can make the repository Public or Private as you see fit. I am making it a private repository, but you don't have to if you don't want to.
Give the repository any name you like.
For the rest of the options, you can select whatever you want. For this article, I will keep things as simple as they were.
Then click on Create repository.
After that, you don't actually need to do anything in that repository for now.
So I will leave it as it is.
Now, head over to your Hashnode account.
Click your profile icon at the top right of the profile page.
Select the blog you want to back up. For me, it is https://fahimbinamin.hashnode.dev/.
Click on Blog Dashboard.
The blog dashboard will appear in front of you.
Simply scroll down until you find Backup. You will find it at the bottom left of the web page. Click on that.
You will now land on the GitHub backup page.
Click on Back up all my posts.
Select where you want to install it.
As I am involved in 5 GitHub organizations at the moment, it suggests all of them to me. Since I want to create the backup in my personal GitHub account, I will select my profile FahimFBA.
The install and authorize section will appear.
By default, it will select "All repositories". Keep in mind that you should not select All repositories, as that will rewrite all the repositories you currently have in your personal GitHub account.
Select "Only select repositories" and find the repository you created for backing up your Hashnode blogs.
Simply click and select the repository.
If everything looks fine, you can click "Install & Authorize", but again, keep in mind that you should not select All repositories.
Provide your GitHub account password if it asks for it.
You will be redirected to Hashnode to continue the installation process.
You have now set up your automatic backup. If you want to back up your existing posts, you need to click "Back up all my posts".
It will back up all the posts you had before the automatic backup was created.
Now, if you simply refresh the GitHub repository web page (the repository you created just to back up this Hashnode blog), you will see that it has been updated as well.
You can even read the article directly from your backed-up repository.
That's it!
I want to thank you all from the bottom of my heart for reading the whole article. If you want to chat with me, I am available on Twitter and LinkedIn.
I am also available on:
GitHub: https://github.com/FahimFBA
Source: https://www.freecodecamp.org/news/how-to-backup-hashnode-articles-to-github/
1650361380
Database Backup Manager
This package provides a framework-agnostic database backup manager for dumping to and restoring databases from S3, Dropbox, FTP, SFTP, and Rackspace Cloud.
Watch a video tour showing the Laravel driver in action to give you an idea what is possible.
It supports MySQL and PostgreSQL databases, and Gzip compression.
Configure your databases.
// config/database.php
'development' => [
'type' => 'mysql',
'host' => 'localhost',
'port' => '3306',
'user' => 'root',
'pass' => 'password',
'database' => 'test',
    // If singleTransaction is set to true, the --single-transaction flag will be set.
// This is useful on transactional databases like InnoDB.
// http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_single-transaction
'singleTransaction' => false,
// Do not dump the given tables
// Set only table names, without database name
// Example: ['table1', 'table2']
// http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_ignore-table
'ignoreTables' => [],
// using ssl to connect to your database - active ssl-support (mysql only):
'ssl'=>false,
// add additional options to dump-command (like '--max-allowed-packet')
'extraParams'=>null,
],
'production' => [
'type' => 'postgresql',
'host' => 'localhost',
'port' => '5432',
'user' => 'postgres',
'pass' => 'password',
'database' => 'test',
],
Configure your filesystems.
// config/storage.php
'local' => [
'type' => 'Local',
'root' => '/path/to/working/directory',
],
's3' => [
'type' => 'AwsS3',
'key' => '',
'secret' => '',
'region' => 'us-east-1',
'version' => 'latest',
'bucket' => '',
'root' => '',
'use_path_style_endpoint' => false,
],
'b2' => [
'type' => 'B2',
'key' => '',
'accountId' => '',
'bucket' => '',
],
'gcs' => [
'type' => 'Gcs',
'key' => '',
'secret' => '',
'version' => 'latest',
'bucket' => '',
'root' => '',
],
'rackspace' => [
'type' => 'Rackspace',
'username' => '',
'key' => '',
'container' => '',
'zone' => '',
'root' => '',
],
'dropbox' => [
'type' => 'DropboxV2',
'token' => '',
'key' => '',
'secret' => '',
'app' => '',
'root' => '',
],
'ftp' => [
'type' => 'Ftp',
'host' => '',
'username' => '',
'password' => '',
'root' => '',
'port' => 21,
'passive' => true,
'ssl' => true,
'timeout' => 30,
],
'sftp' => [
'type' => 'Sftp',
'host' => '',
'username' => '',
'password' => '',
'root' => '',
    'port' => 22, // default SSH/SFTP port
'timeout' => 10,
'privateKey' => '',
],
'flysystem' => [
'type' => 'Flysystem',
'name' => 's3_backup',
//'prefix' => 'upload',
],
'doSpaces' => [
'type' => 'AwsS3',
'key' => '',
'secret' => '',
'region' => '',
'bucket' => '',
'root' => '',
'endpoint' => '',
'use_path_style_endpoint' => false,
],
'webdav' => [
'type' => 'Webdav',
'baseUri' => 'http://myserver.com',
'userName' => '',
'password' => '',
'prefix' => '',
],
Backup to / restore from any configured database.
Backup the development database to Amazon S3. The S3 backup path will be test/backup.sql.gz in the end, when gzip is done with it.
use BackupManager\Filesystems\Destination;
$manager = require 'bootstrap.php';
$manager->makeBackup()->run('development', [new Destination('s3', 'test/backup.sql')], 'gzip');
Backup to / restore from any configured filesystem.
Restore the database file test/backup.sql.gz from Amazon S3 to the development database.
$manager = require 'bootstrap.php';
$manager->makeRestore()->run('s3', 'test/backup.sql.gz', 'development', 'gzip');
This package does not allow you to backup from one database type and restore to another. A MySQL dump is not compatible with PostgreSQL.
Requirements:
- mysqldump and mysql command-line binaries
- pg_dump and psql command-line binaries
- gzip and gunzip command-line binaries
- Composer
Run the following to include this via Composer:
composer require backup-manager/backup-manager
Then, you'll need to select the appropriate packages for the adapters that you want to use.
# to support s3
composer require league/flysystem-aws-s3-v3
# to support b2
composer require mhetreramesh/flysystem-backblaze
# to support google cloud storage
composer require league/flysystem-aws-s3-v2
# to install the preferred dropbox v2 driver
composer require spatie/flysystem-dropbox
# to install legacy dropbox v2 driver
composer require srmklive/flysystem-dropbox-v2
# to support rackspace
composer require league/flysystem-rackspace
# to support sftp
composer require league/flysystem-sftp
# to support webdav (supported by owncloud and many others)
composer require league/flysystem-webdav
Once installed, the package must be bootstrapped (initial configuration) before it can be used.
We've provided a native PHP example here.
The required bootstrapping can be found in the example here.
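As a rough orientation, the bootstrap file wires the config files into a Manager instance and returns it. The structure below follows the package's example from memory, so the exact class names are assumptions; compare it against the linked example before using it:
// bootstrap.php
require 'vendor/autoload.php';

use BackupManager\Config\Config;
use BackupManager\Filesystems;
use BackupManager\Databases;
use BackupManager\Compressors;
use BackupManager\Manager;

// Register the filesystems you plan to use
$filesystems = new Filesystems\FilesystemProvider(Config::fromPhpFile('config/storage.php'));
$filesystems->add(new Filesystems\LocalFilesystem);
$filesystems->add(new Filesystems\Awss3Filesystem);

// Register the database drivers
$databases = new Databases\DatabaseProvider(Config::fromPhpFile('config/database.php'));
$databases->add(new Databases\MysqlDatabase);
$databases->add(new Databases\PostgresqlDatabase);

// Register the compressors
$compressors = new Compressors\CompressorProvider;
$compressors->add(new Compressors\GzipCompressor);
$compressors->add(new Compressors\NullCompressor);

// "$manager = require 'bootstrap.php';" picks up this return value
return new Manager($filesystems, $databases, $compressors);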
We recommend using the vagrant configuration supplied with this package for development and contribution. Simply install VirtualBox, Vagrant, and Ansible, then run vagrant up in the root folder. A virtual machine specifically designed for development of the package will be built and launched for you.
When contributing please consider the following guidelines: Interfaces should NOT be suffixed with Interface, and Traits should NOT be suffixed with Trait.
This package is maintained by Shawn McCool and you!
Removed support for Symfony 2, specifically symfony/process versions < 3.x.
Author: Backup-manager
Source Code: https://github.com/backup-manager/backup-manager
License: MIT License
1650169320
Serverless plugin DynamoDB backups
If you want to automate your AWS DynamoDB database backups, this plugin may be what you need.
As we build various services on AWS using the "serverless" design, we need reusable backup services that are both scalable and easy to implement. We therefore created this plugin to make sure that each project can create its own automated DynamoDB backup solution.
This plugin simplifies automating DynamoDB backup creation for all the resources created in serverless.yml when using the Serverless Framework with the AWS cloud provider.
This plugin officially supports Node.js 12.x, 14.x and 16.x.
This plugin officially supports Serverless Framework >=2.0.0.
Install the plugin (its configuration goes in serverless.yml) using either Yarn or NPM; we use Yarn.
NPM:
npm install @unly/serverless-plugin-dynamodb-backups
YARN:
yarn add @unly/serverless-plugin-dynamodb-backups
The plugin determines your environment during deployment and adds all environment variables to your Lambda function. All you need to do is load the plugin:
It must be declared before serverless-webpack, despite what their official docs say:
plugins:
- '@unly/serverless-plugin-dynamodb-backups' # Must be first, even before "serverless-webpack", see https://github.com/UnlyEd/serverless-plugin-dynamodb-backups
- serverless-webpack # Must be second, see https://github.com/99xt/serverless-dynamodb-local#using-with-serverless-offline-and-serverless-webpack-plugin
Create a file which will be called when performing a DynamoDB backup (we named it src/backups.js in our examples folder):
import dynamodbAutoBackups from '@unly/serverless-plugin-dynamodb-backups/lib';
export const handler = dynamodbAutoBackups;
In serverless.yml, set the dynamodbAutoBackups object configuration as follows (all available options are listed below):
custom:
dynamodbAutoBackups:
    backupRate: rate(40 minutes) # Every 40 minutes, from the time it was deployed
source: src/backups.js # Path to the handler function we created in step #2
active: true
Options of the dynamodbAutoBackups object:
Attributes | Type | Required | Default | Description |
---|---|---|---|---|
source | String | True | | Path to your handler function |
backupRate | String | True | | The schedule on which you want to back up your table. You can use either rate syntax (rate(1 hour)) or cron syntax (cron(0 12 * * ? *)). See here for more details on configuration. |
name | String | False | auto | Automatically set, but you can provide your own name for this Lambda |
slackWebhook | String | False | | An HTTPS endpoint for an incoming webhook to Slack. If provided, error messages will be sent to a Slack channel. |
backupRemovalEnabled | Boolean | False | false | Enables cleanup of old backups. See the backupRetentionDays option below to specify the retention period. By default, backup removal is disabled. |
backupRetentionDays | Integer | False | | The number of days to retain old backups. For example, setting the value to 2 will remove all backups older than 2 days. Required if backupRemovalEnabled is true. |
backupType | String | False | "ALL" | USER - on-demand backups created by you. SYSTEM - on-demand backups automatically created by DynamoDB. ALL - all types of on-demand backups (USER and SYSTEM). |
active | Boolean | False | true | You can disable this plugin; useful on a non-production environment, for instance |
custom:
dynamodbAutoBackups:
backupRate: rate(40 minutes) # Every 40 minutes, from the time it was deployed
source: src/backups.js
slackWebhook: https://hooks.slack.com/services/T4XHXX5C6/TT3XXXM0J/XXXXXSbhCXXXX77mFBr0ySAm
backupRemovalEnabled: true # Enable backupRetentionDays
backupRetentionDays: 15 # If backupRemovalEnabled is not provided, then backupRetentionDays is not used
custom:
dynamodbAutoBackups:
backupRate: cron(0 2 ? * FRI *) # Every friday at 2:00 am
source: src/backups.js
slackWebhook: https://hooks.slack.com/services/T4XHXX5C6/TT3XXXM0J/XXXXXSbhCXXXX77mFBr0ySAm
backupRemovalEnabled: true # Enable backupRetentionDays
backupRetentionDays: 3 # If backupRemovalEnabled is not provided, then backupRetentionDays is not used
backupType: USER # Delete all backups created by a user, not the system backups
To test this plugin, you can clone this repository, go to examples/serverless-example, and follow the README.
Vulnerability disclosure
Contributors and maintainers
This project is being maintained by:
Thanks to our contributors:
Author: UnlyEd
Source Code: https://github.com/UnlyEd/serverless-plugin-dynamodb-backups
License: MIT License