Bongani Ngema

Take Back-up and Restore F5 BIG-IP Configuration Files

What is a Load Balancer?

A load balancer is a device that acts as a reverse proxy and distributes network traffic across a number of servers. It is used to increase the capacity and reliability of applications. A load balancer helps divide user requests among multiple servers, which results in a better user experience.

Load balancers are grouped into two categories:

  • Layer 4

It works on data found in network- and transport-layer protocols such as IP, TCP, and UDP.

  • Layer 7

It distributes requests based on data found in application-layer protocols such as HTTP.

Some industry algorithms that are followed by load balancers are:

  • Round robin
  • Least response time
  • Weighted round-robin
  • Least connections

What is F5 BIG-IP?

F5 BIG-IP is a load balancer that also allows you to inspect and encrypt all the traffic passing through your network. It manages and balances the traffic load across a group or cloud of physical host servers by creating a virtual server with a single virtual IP address.

Taking a backup of the configuration files

The configuration file of F5 BIG-IP contains all the information about the nodes, pool lists, IPs, virtual server details, etc., so it is very important to back this configuration up. We can back up the configuration files of F5 BIG-IP using two methods:

  • Command Line Interface 
  • Configuration Utility (GUI)

Command Line Interface 

Log in to your F5 BIG-IP system and open the command-line tool. Navigate to the UCS directory, located at /var/local/ucs. The UCS file contains all the files you need to restore your current configuration to a new or existing system.

The system displays the contents of the directory. Choose a name and save the UCS archive file in the default directory by typing the command shown below:
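
A sketch of a typical session, assuming the archive is to be named config-backup.ucs (any unique name works; tmsh saves to /var/local/ucs by default):

cd /var/local/ucs
ls -l
# Create the UCS archive
tmsh save /sys ucs config-backup.ucs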

This creates the backup; you can then copy the file to another server or to S3, wherever you would like to store it.
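
For example, copying the archive off-box might look like this (the host name, bucket, and paths are placeholders):

# Copy to another server over SSH
scp /var/local/ucs/config-backup.ucs admin@backup-host:/backups/f5/

# Or upload to S3 from a machine with the AWS CLI configured
aws s3 cp config-backup.ucs s3://your-backup-bucket/f5/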

Configuration Utility (GUI)

Log in to the Configuration utility

Click System >> Archives. The Archives page displays. Then select Create and enter a name (make sure the name is unique):

  • (Optional) In the Encryption setting, select Disabled and click Finished, or enable encryption and enter a passphrase.
  • (Optional) If you want to exclude SSL private keys from the UCS archive, select Exclude from the Private Keys menu.

The operation status page will appear; click OK.

It will then display the file you have created. Copy the .ucs file to another system. For more information, you can also visit the official documentation.

Original article source at: https://blog.knoldus.com/

#files #configuration #backup #devops 

Gordon Matlala

Backup and Restore Elasticsearch using Snapshots

Introduction

Hello everyone! Today in this blog, we will learn how to backup and restore Elasticsearch using snapshots. Before diving in, let’s first brush up on the basics of the topic.

Elasticsearch at a glance

  • It is a search and analytics engine
  • It is based on NoSQL technology
  • It exposes REST API instead of CLI to perform various operations
  • It is a combination of different node types (data, master, ingest, and client) connected together.

Backup strategy at Elasticsearch

  • Elasticsearch uses snapshots
  • A snapshot is a backup taken from a running Elasticsearch cluster
  • Repositories are used to store snapshots
  • You must register a repository before you perform snapshot and restore operations
  • Repositories can be either local or remote
  • Different types of repositories supported by Elasticsearch are as follows:
    • Windows shares using Microsoft UNC path
    • NFS on Linux
    • Directory on Single Node Cluster
    • AWS S3
    • Azure Cloud
    • HDFS for Hadoop

Demo

  • First, we should have elasticsearch up and running. To check the status, use the command –
    • sudo systemctl status elasticsearch

The output should show the elasticsearch service as active (running).

  • Next, we’ll make a directory where we’ll be storing all our snapshots.
    • mkdir elasticsearch-backup
  • We need to make sure that the elasticsearch service can write into this directory. To hand ownership of the directory to the elasticsearch user, use the command –
    • sudo chown -R elasticsearch:elasticsearch elasticsearch-backup
  • We need to register the path of our directory with Elasticsearch. To do that, we make the change below in the /etc/elasticsearch/elasticsearch.yml file.

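Assuming the directory was created in your home directory, the change is a single path.repo entry (adjust the path to wherever you created the directory):

# /etc/elasticsearch/elasticsearch.yml
path.repo: ["/home/youruser/elasticsearch-backup"]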

  • Restart the service using the following command –
    • sudo systemctl restart elasticsearch.service
  • Now, we need to create the repository. Use the following command to create the repo –

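A sketch of the REST calls, assuming the repository is named elasticsearch-backup and the snapshot first-snapshot (the names the restore step below relies on), with the path.repo location registered above:

# Register the filesystem repository
curl -X PUT "http://localhost:9200/_snapshot/elasticsearch-backup" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/home/youruser/elasticsearch-backup"}}'

# Create a snapshot of all indices and wait for it to complete
curl -X PUT "http://localhost:9200/_snapshot/elasticsearch-backup/first-snapshot?wait_for_completion=true"

# Verify: list indices and the snapshots stored in the repository
curl -X GET "http://localhost:9200/_cat/indices?v"
curl -X GET "http://localhost:9200/_snapshot/elasticsearch-backup/_all?pretty"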

Restore

Now that we have successfully taken a backup of our indices, let us make sure we can retrieve the data if it gets lost. So, let us first delete our data using a command like the following –
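
For example (the index name my-index is a placeholder for whatever data you indexed):

curl -X DELETE "http://localhost:9200/my-index"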

Now, if you check, all the data will be gone. So, let us try to restore our data using the snapshot we created.

  • curl -X POST "http://localhost:9200/_snapshot/elasticsearch-backup/first-snapshot/_restore?wait_for_completion=true"

The above command will successfully restore all the lost or deleted data.

That’s it for now. I hope this article was useful to you. Please feel free to drop any comments, questions, or suggestions.

Original article source at: https://blog.knoldus.com/

#backup #elasticsearch #snapshot 

Rupert Beatty

RsyncOSX: A MacOS GUI for Rsync. Compiled For MacOS Big Sur

Hi there 👋

RsyncOSX and RsyncUI are GUIs on the Apple macOS platform for the command-line tool rsync.

It is rsync that executes the synchronization task. The GUIs are only for setting parameters and making rsync, which is a fantastic tool, easier to use.

For users who don't know rsync, the UI of RsyncOSX and RsyncUI can be difficult to understand. Setting the wrong rsync parameters can result in deleted data, and neither RsyncOSX nor RsyncUI will stop you from doing so. That is why it is very important to execute a simulated run, a --dry-run, and verify the result before the real run.
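
On the command line, such a simulated run corresponds to something like the following (the source and destination paths are placeholders):

rsync --archive --verbose --delete --dry-run ~/Documents/ /Volumes/Backup/Documents/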

If you have installed macOS Big Sur, RsyncOSX is the GUI for you. If you have installed macOS Monterey or macOS Ventura, you can use both GUIs in parallel.

Please be aware that the command-line tool rsync is executed as an external task not controlled by RsyncOSX. RsyncOSX only monitors the task for progress and termination. The user can abort a task at any time; please allow the abort to finish and clean up properly before starting a new task. It might take a few seconds. If not allowed to complete, the apps might become unresponsive.

One of the many advantages of utilizing rsync is that it can restart and continue the synchronization task from where it was aborted.

RsyncOSX is the only one of the two GUIs that supports scheduling of tasks.

RsyncOSX is released for macOS Big Sur and later due to requirements in some features of Combine. Latest build is 8 September 2022.

RsyncUI is released for macOS Monterey and later.

Latest build is 5 November 2022.


Download Details:

Author: rsyncOSX
Source Code: https://github.com/rsyncOSX/RsyncOSX 
License: MIT license

#swift #backup #xcode 


Laravel-backup: A Package to Backup Your Laravel App

Laravel-backup

A modern backup solution for Laravel apps    

This Laravel package creates a backup of your application. The backup is a zip file that contains all files in the directories you specify along with a dump of your database. The backup can be stored on any of the filesystems you have configured in Laravel.

Feeling paranoid about backups? No problem! You can backup your application to multiple filesystems at once.

Once installed, taking a backup of your files and databases is very easy. Just issue this artisan command:

php artisan backup:run

But we didn't stop there. The package also provides a backup monitor to check the health of your backups. You can be notified via several channels when a problem with one of your backups is found. To avoid using excessive disk space, the package can also clean up old backups.
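
Both features are exposed as artisan commands as well; a quick sketch (schedules and notification channels are configured in the package's config file):

php artisan backup:monitor   # check the health of your backups
php artisan backup:clean     # delete old backups according to the configured retention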

Installation and usage

This package requires PHP 8.0 and Laravel 8.0 or higher. You'll find installation instructions and full documentation on https://spatie.be/docs/laravel-backup.

Using an older version of PHP / Laravel?

If you are on a PHP version below 8.0 or a Laravel version below 8.0 just use an older version of this package.

Read the extensive documentation on version 3, on version 4, on version 5 and on version 6. We won't introduce new features to v6 and below anymore but we will still fix bugs.

Testing

Run the tests with:

composer test

Changelog

Please see CHANGELOG for more information on what has changed recently.

Contributing

Please see CONTRIBUTING for details.

Security

If you discover any security-related issues, please email security@spatie.be instead of using the issue tracker.

Postcardware

You're free to use this package, but if it makes it to your production environment we highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using.

Our address is: Spatie, Kruikstraat 22, 2018 Antwerp, Belgium.

We publish all received postcards on our company website.

Credits

And a special thanks to Caneco for the logo ✨

Download Details:

Author: Spatie
Source Code: https://github.com/spatie/laravel-backup 
License: MIT license

#php #devops #laravel #backup 


All-in-one: Nextcloud All in one

Nextcloud All In One

Nextcloud AIO stands for Nextcloud All In One and provides easy deployment and maintenance with most features included in this one Nextcloud instance.

Included are:

  • Nextcloud
  • Nextcloud Office
  • High performance backend for Nextcloud Files
  • High performance backend for Nextcloud Talk
  • Backup solution (based on BorgBackup)
  • Imaginary
  • ClamAV
  • Fulltextsearch

How to use this?

The following instructions are especially meant for Linux. For macOS see this, for Windows see this.

Install Docker on your Linux installation using:

curl -fsSL get.docker.com | sudo sh

If you need ipv6 support, you should enable it by following https://docs.docker.com/config/daemon/ipv6/.

Run the command below in order to start the container:

(For people who cannot use ports 80 and/or 443 on this server: please follow the reverse proxy documentation, because port 443 is used by this project and opened on the host by default even though it does not look like this is the case. Otherwise, please run the command below!)

# For x64 CPUs:
sudo docker run \
--sig-proxy=false \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 80:80 \
--publish 8080:8080 \
--publish 8443:8443 \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest

Command for arm64 CPUs like the Raspberry Pi 4

# For arm64 CPUs:
sudo docker run \
--sig-proxy=false \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 80:80 \
--publish 8080:8080 \
--publish 8443:8443 \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest-arm64

After the initial startup, you should be able to open the Nextcloud AIO Interface now on port 8080 of this server.
E.g. https://ip.address.of.this.server:8080

If your firewall/router has port 80 and 8443 open and you point a domain to your server, you can get a valid certificate automatically by opening the Nextcloud AIO Interface via:
https://your-domain-that-points-to-this-server.tld:8443

Please do not forget to open port 3478/TCP and 3478/UDP in your firewall/router for the Talk container!

FAQ

How does it work?

Nextcloud AIO is inspired by projects like Portainer that manage the docker daemon by talking to it through the docker socket directly. This concept allows a user to install only one container with a single command that does the heavy lifting of creating and managing all containers that are needed in order to provide a Nextcloud installation with most features included. It also makes updating a breeze and is not bound to the host system (and its slow updates) anymore as everything is in containers. Additionally, it is very easy to handle from a user perspective because a simple interface for managing your Nextcloud AIO installation is provided.

Are reverse proxies supported?

Yes. Please refer to the following documentation on this: reverse-proxy.md

Which ports are mandatory to be open in your firewall/router?

Only those (if you access the Mastercontainer Interface internally via port 8080):

  • 443/TCP for the Apache container
  • 3478/TCP and 3478/UDP for the Talk container

Explanation of used ports:

  • 8080/TCP: Mastercontainer Interface with self-signed certificate (works always, also if only access via IP-address is possible, e.g. https://ip.address.of.this.server:8080/)
  • 80/TCP: redirects to Nextcloud (is used for getting the certificate via ACME http-challenge for the Mastercontainer)
  • 8443/TCP: Mastercontainer Interface with valid certificate (only works if port 80 and 8443 are open in your firewall/router and you point a domain to your server. It generates a valid certificate then automatically and access via e.g. https://public.domain.com:8443/ is possible.)
  • 443/TCP: will be used by the Apache container later on and needs to be open in your firewall/router
  • 3478/TCP and 3478/UDP: will be used by the Turnserver inside the Talk container and needs to be open in your firewall/router

How to run AIO on macOS?

On macOS, there are two things that differ in comparison to Linux: instead of using --volume /var/run/docker.sock:/var/run/docker.sock:ro, you need to use --volume /var/run/docker.sock.raw:/var/run/docker.sock:ro to run it after you have installed Docker Desktop. You also need to add -e DOCKER_SOCKET_PATH="/var/run/docker.sock.raw" to the startup command. Apart from that, it should work and behave the same as on Linux.

How to run AIO on Windows?

On Windows, the following command should work in the command prompt after you installed Docker Desktop:

docker run ^
--sig-proxy=false ^
--name nextcloud-aio-mastercontainer ^
--restart always ^
--publish 80:80 ^
--publish 8080:8080 ^
--publish 8443:8443 ^
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config ^
--volume //var/run/docker.sock:/var/run/docker.sock:ro ^
nextcloud/all-in-one:latest

Please note: In order to make the built-in backup solution able to back up to the host system, you need to create a volume with the name nextcloud_aio_backupdir beforehand:

docker volume create ^
--driver local ^
--name nextcloud_aio_backupdir ^
-o device="/host_mnt/c/your/backup/path" ^
-o type="none" ^
-o o="bind"

(The value /host_mnt/c/your/backup/path in this example would be equivalent to C:\your\backup\path on the Windows host. So you need to translate the path that you want to use into the correct format.) ⚠️️ Attention: Make sure that the path exists on the host before you create the volume! Otherwise everything will bug out!

Also, you may be interested in adjusting Nextcloud's Datadir to store the files on the host system. See this documentation on how to do it.

How to run AIO with Portainer?

The easiest way to run it with Portainer on Linux is to use Portainer's stacks feature and use this docker-compose file in order to start AIO correctly.

How to run Nextcloud behind a Cloudflare Argo Tunnel?

Although it may not seem like it is the case, from AIO's perspective a Cloudflare Argo Tunnel works like a reverse proxy. So please follow the reverse proxy documentation, which documents how to make AIO run behind a Cloudflare Argo Tunnel.

How to get Nextcloud running using the ACME DNS-challenge?

You can install AIO in reverse proxy mode; the reverse proxy documentation also covers how to get it running with the ACME DNS-challenge to obtain a valid certificate for AIO (see the Caddy with ACME DNS-challenge section).

How to run Nextcloud locally?

If you do not want to open Nextcloud to the public internet, you may have a look at the following documentation on how to set it up locally: local-instance.md

Are self-signed certificates supported for Nextcloud?

No and they will not be. If you want to run it locally, without opening Nextcloud to the public internet, please have a look at the local instance documentation.

Can I use an ip-address for Nextcloud instead of a domain?

No and it will not be added. If you only want to run it locally, you may have a look at the following documentation: local-instance.md

Are ports other than the default 443 supported for Nextcloud?

No and they will not be. Please use a dedicated domain for Nextcloud and set it up correctly by following the reverse proxy documentation. If port 443 and/or 80 is blocked for you, you may use the ACME DNS-challenge or a Cloudflare Argo Tunnel.

Can I run Nextcloud in a subdirectory on my domain?

No and it will not be added. Please use a dedicated domain for Nextcloud and set it up correctly by following the reverse proxy documentation.

How can I access Nextcloud locally?

The recommended way is to set up a local dns-server like a pi-hole and set up a custom dns-record for that domain that points to the internal ip-address of your server that runs Nextcloud AIO.

How to skip the domain validation?

If you are completely sure that you've configured everything correctly and are not able to pass the domain validation, you may skip the domain validation by adding -e SKIP_DOMAIN_VALIDATION=true to the docker run command of the mastercontainer.

How to resolve firewall problems with Fedora Linux, RHEL OS, CentOS, SUSE Linux and others?

It is known that Linux distros that use firewalld as their firewall daemon have problems with docker networks. In case the containers are not able to communicate with each other, you may change your firewalld to use the iptables backend by running:

sudo sed -i 's/FirewallBackend=nftables/FirewallBackend=iptables/g' /etc/firewalld/firewalld.conf
sudo systemctl restart firewalld docker

Afterwards it should work.
 

See https://dev.to/ozorest/fedora-32-how-to-solve-docker-internal-network-issue-22me for more details on this. This limitation is even mentioned on the official firewalld website: https://firewalld.org/#who-is-using-it

How to run occ commands?

Simply run the following: sudo docker exec -it nextcloud-aio-nextcloud php occ your-command. Of course your-command needs to be exchanged with the command that you want to run.

How to resolve the "missing default phone region" warning under Security & setup warnings after the initial install?

Simply run the following command: sudo docker exec -it nextcloud-aio-nextcloud php occ config:system:set default_phone_region --value="yourvalue". Of course you need to modify yourvalue based on your location. Examples are DE, EN and GB. See this list for more codes: https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2#Officially_assigned_code_elements

How to run multiple AIO instances on one server?

See multiple-instances.md for some documentation on this.

Bruteforce protection FAQ

Nextcloud features a built-in bruteforce protection which may get triggered and will block an ip-address or disable a user. You can unblock an ip-address by running sudo docker exec -it nextcloud-aio-nextcloud php occ security:bruteforce:reset <ip-address> and enable a disabled user by running sudo docker exec -it nextcloud-aio-nextcloud php occ user:enable <name of user>. See https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html#security for further information.

Update policy

This project values stability over new features. That means that when a new major Nextcloud update gets introduced, we will wait at least until the first patch release, e.g. 24.0.1 is out before upgrading to it. Also we will wait with the upgrade until all important apps are compatible with the new major version. Minor or patch releases for Nextcloud and all dependencies as well as all containers will be updated to new versions as soon as possible but we try to give all updates first a good test round before pushing them. That means that it can take around 2 weeks before new updates reach the latest channel. If you want to help testing, you can switch to the beta channel by following this documentation which will also give you the updates earlier.

How to switch the channel?

You can switch to a different channel like e.g. the beta channel or from the beta channel back to the latest channel by stopping the mastercontainer, removing it (no data will be lost) and recreating the container using the same command that you used initially to create the mastercontainer. For the beta channel on x64 you need to change the last line nextcloud/all-in-one:latest to nextcloud/all-in-one:beta and vice versa. For arm64 it is nextcloud/all-in-one:latest-arm64 and nextcloud/all-in-one:beta-arm64, respectively.
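
For example, switching the x64 mastercontainer to the beta channel, assuming it was started with the default command from above (no data is lost by removing the mastercontainer):

sudo docker stop nextcloud-aio-mastercontainer
sudo docker rm nextcloud-aio-mastercontainer
sudo docker run \
--sig-proxy=false \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 80:80 \
--publish 8080:8080 \
--publish 8443:8443 \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:beta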

How to update the containers?

If we push new containers to latest, you will see in the AIO interface below the containers section that new container updates were found. In this case, just press Stop containers and Start containers in order to update the containers. The mastercontainer has its own update procedure though. See below. And don't forget to back up the current state of your instance using the built-in backup solution before starting the containers again! Otherwise you won't be able to restore your instance easily if something should break during the update.

If a new Mastercontainer update was found, you'll see an additional section below the containers section which shows that a mastercontainer update is available. If so, you can simply press on the button to update the container.

Additionally, there is a cronjob that runs once a day that checks for container and mastercontainer updates and sends a notification to all Nextcloud admins if a new update was found.

How to easily log in to the AIO interface?

If your Nextcloud is running and you are logged in as admin in your Nextcloud, you can easily log in to the AIO interface by opening https://yourdomain.tld/settings/admin/overview which will show a button on top that enables you to log in to the AIO interface by just clicking on this button. Note: You can change the domain/ip-address/port of the button by simply stopping the containers, visiting the AIO interface from the correct and desired domain/ip-address/port and clicking once on Start containers.

How to change the domain?

⚠️ Please note: Editing the configuration.json manually and making a mistake may break your instance so please create a backup first!

If you set up a new AIO instance, you need to enter a domain. Currently there is no way to change this domain afterwards from the AIO interface. So in order to change it, you need to manually edit the configuration.json that is most likely stored in /var/lib/docker/volumes/nextcloud_aio_mastercontainer/_data/data/configuration.json, substitute each occurrence of your old domain with your new domain and save and write out the file. Afterwards restart your containers from the AIO interface and everything should work as expected if the new domain is correctly configured.
If you are running AIO behind a reverse proxy, you need to obviously also change the domain in your reverse proxy config.

How to properly reset the instance?

If something goes down an unexpected route during the initial installation, you might want to reset the AIO installation to be able to start from scratch.

Please note: if you already have it running and have data on your instance, you should not follow these instructions as it will delete all data that is coupled to your AIO instance.

Here is how to reset the AIO instance properly:

  1. Stop all containers if they are running from the AIO interface
  2. Stop the mastercontainer with sudo docker stop nextcloud-aio-mastercontainer
  3. If the domaincheck container is still running, stop it with sudo docker stop nextcloud-aio-domaincheck
  4. Check which containers are stopped: sudo docker ps --filter "status=exited"
  5. Now remove all these stopped containers with sudo docker container prune
  6. Delete the docker network with sudo docker network rm nextcloud-aio
  7. Check which volumes are dangling with sudo docker volume ls --filter "dangling=true"
  8. Now remove all these dangling volumes: sudo docker volume prune (on Windows you might need to remove some volumes afterwards manually with docker volume rm nextcloud_aio_backupdir, docker volume rm nextcloud_aio_nextcloud_datadir). Also if you've configured NEXTCLOUD_DATADIR to a path on your host instead of the default volume, you need to clean that up as well.
  9. Optional: You can remove all docker images with sudo docker image prune -a.
  10. And you are done! Now feel free to start over with the recommended docker run command!

Backup solution

Nextcloud AIO provides a local backup solution based on BorgBackup. These backups act as a local restore point in case the installation gets corrupted.

It is recommended to create a backup before any container update. By doing this, you will be safe regarding any possible complication during updates because you will be able to restore the whole instance with basically one click.

If you connect an external drive to your host and choose the backup directory to be on that drive, you are also somewhat safe against failures of the drive where the docker volumes are stored.

How to do the above, step by step

  1. Mount an external/backup HDD to the host OS using the built-in functionality, udev rules, or whatever way you prefer (e.g. follow this video: https://www.youtube.com/watch?v=2lSyX4D3v_s), ideally at /mnt/backup.
  2. If not already done, fire up the docker container and set up Nextcloud as per the guide.
  3. Now open the AIO interface.
  4. Under backup section, add your external disk mountpoint as backup directory, e.g. /mnt/backup.
  5. Click on Create Backup which should create the first backup on the external disk.

Backups can be created and restored in the AIO interface using the buttons Create Backup and Restore selected backup. Additionally, a backup check is provided that checks the integrity of your backups but it shouldn't be needed in most situations.

The backups themselves are encrypted with an encryption key that is shown to you in the AIO interface. Please save it in a safe place, as you will not be able to restore from backup without this key.

Be aware that this solution does not back up files and folders that are mounted into Nextcloud using the external storage app.

Note that this implementation does not provide remote backups, for this you can use the backup app.

Failure of the backup container in LXC containers

If you are running AIO in a LXC container, you need to make sure that FUSE is enabled in the LXC container settings. Otherwise the backup container will not be able to start as FUSE is required for it to work.

Pro-tip: Backup archives access

You can open the BorgBackup archives on your host by following these steps:
(instructions for Ubuntu Desktop)

# Install borgbackup on the host
sudo apt update && sudo apt install borgbackup

# Mount the archives to /tmp/borg (if you are using the default backup location /mnt/backup/borg)
sudo mkdir -p /tmp/borg && sudo borg mount "/mnt/backup/borg" /tmp/borg

# After entering your repository key successfully, you should be able to access all archives in /tmp/borg
# You can now do whatever you want by syncing them to a different place using rsync or doing other things
# E.g. you can open the file manager on that location by running:
xhost +si:localuser:root && sudo nautilus /tmp/borg

# When you are done, simply close the file manager and run the following command to unmount the backup archives:
sudo umount /tmp/borg

Delete backup archives manually

You can delete BorgBackup archives on your host manually by following these steps:
(instructions for Debian based OS' like Ubuntu)

# Install borgbackup on the host
sudo apt update && sudo apt install borgbackup

# List all archives (if you are using the default backup location /mnt/backup/borg)
sudo borg list "/mnt/backup/borg"

# After entering your repository key successfully, you should now see a list of all backup archives
# An example backup archive might be called 20220223_174237-nextcloud-aio
# Then you can simply delete the archive with:
sudo borg delete --stats --progress "/mnt/backup/borg::20220223_174237-nextcloud-aio"

After doing so, make sure to update the backup archives list in the AIO interface!
You can do so by clicking on the Check backup integrity button or Create backup button.


Sync the backup regularly to another drive

For increased backup security, you might consider syncing the backup repository regularly to another drive.

To do that, first add the drive to /etc/fstab so that it is able to get automatically mounted and then create a script that does all the things automatically. Here is an example for such a script:

Click here to expand

#!/bin/bash

# Please modify all variables below to your needings:
SOURCE_DIRECTORY="/mnt/backup/borg"
DRIVE_MOUNTPOINT="/mnt/backup-drive"
TARGET_DIRECTORY="/mnt/backup-drive/borg"

########################################
# Please do NOT modify anything below! #
########################################

if [ "$EUID" -ne 0 ]; then 
    echo "Please run as root"
    exit 1
fi

if ! [ -d "$SOURCE_DIRECTORY" ]; then
    echo "The source directory does not exist."
    exit 1
fi

if [ -z "$(ls -A "$SOURCE_DIRECTORY/")" ]; then
    echo "The source directory is empty which is not allowed."
    exit 1
fi

if ! [ -d "$DRIVE_MOUNTPOINT" ]; then
    echo "The drive mountpoint must be an existing directory"
    exit 1
fi

if ! grep -q " $DRIVE_MOUNTPOINT " /etc/fstab; then
    echo "Could not find the drive mountpoint in the fstab file. Did you add it there?"
    exit 1
fi

if ! mountpoint -q "$DRIVE_MOUNTPOINT"; then
    mount "$DRIVE_MOUNTPOINT"
    if ! mountpoint -q "$DRIVE_MOUNTPOINT"; then
        echo "Could not mount the drive. Is it connected?"
        exit 1
    fi
fi

if [ -f "$SOURCE_DIRECTORY/lock.roster" ]; then
    echo "Cannot run the script as the backup archive is currently changed. Please try again later."
    exit 1
fi

mkdir -p "$TARGET_DIRECTORY"
if ! [ -d "$TARGET_DIRECTORY" ]; then
    echo "Could not create target directory"
    exit 1
fi

if [ -f "$SOURCE_DIRECTORY/aio-lockfile" ]; then
    echo "Not continuing because aio-lockfile already exists."
    exit 1
fi

touch "$SOURCE_DIRECTORY/aio-lockfile"

if ! rsync --stats --archive --human-readable --delete "$SOURCE_DIRECTORY/" "$TARGET_DIRECTORY"; then
    echo "Failed to sync the backup repository to the target directory."
    exit 1
fi

rm "$SOURCE_DIRECTORY/aio-lockfile"
rm "$TARGET_DIRECTORY/aio-lockfile"

umount "$DRIVE_MOUNTPOINT"

if docker ps --format "{{.Names}}" | grep "^nextcloud-aio-nextcloud$"; then
    docker exec -it nextcloud-aio-nextcloud bash /notify.sh "Rsync backup successful!" "Synced the backup repository successfully."
else
    echo "Synced the backup repository successfully."
fi

You can simply copy and paste the script into a file, e.g. named backup-script.sh, e.g. at /root/backup-script.sh. Do not forget to modify the variables to your requirements!

Afterwards apply the correct permissions with sudo chown root:root /root/backup-script.sh and sudo chmod 700 /root/backup-script.sh. Then you can create a cronjob that runs e.g. at 20:00 each week on Sundays like this:

  1. Open the cronjob with sudo crontab -u root -e (and choose your editor of choice if not already done. I'd recommend nano).
  2. Add the following new line to the crontab if not already present: 0 20 * * 7 /root/backup-script.sh which will run the script at 20:00 on Sundays each week.
  3. Save and close the crontab (when using nano, the shortcuts for this are Ctrl + o -> Enter to save, and Ctrl + x to close the editor).

How to stop/start/update containers or trigger the daily backup from a script externally?

You can do so by running the /daily-backup.sh script that is stored in the mastercontainer. It accepts the following environmental variables:

  • AUTOMATIC_UPDATES if set to 1, it will automatically stop the containers, update them and start them including the mastercontainer. If the mastercontainer gets updated, this script's execution will stop as soon as the mastercontainer gets stopped. You can then wait until it is started again and run the script with this flag again in order to update all containers correctly afterwards.
  • DAILY_BACKUP if set to 1, it will automatically stop the containers and create a backup. If you want to start them again afterwards, you may have a look at the START_CONTAINERS option. Please be aware that this option is non-blocking if START_CONTAINERS and AUTOMATIC_UPDATES are not enabled at the same time, which means that the backup is not yet done when the process finishes, since it only starts the borgbackup container with the correct configuration.
  • START_CONTAINERS if set to 1, it will automatically start the containers without updating them.
  • STOP_CONTAINERS if set to 1, it will automatically stop the containers.
  • CHECK_BACKUP if set to 1, it will start the backup check. This is not allowed to be enabled at the same time as DAILY_BACKUP. Please be aware that this option is non-blocking, which means that the backup check is not yet done when the process finishes, since it only starts the borgbackup container with the correct configuration.

One example for this would be sudo docker exec -it -e DAILY_BACKUP=1 nextcloud-aio-mastercontainer /daily-backup.sh, which you can run via a cronjob or put it in a script.

⚠️ Please note that none of the options return error codes. So you need to check for the correct result yourself.

How to disable the backup section?

If you already have a backup solution in place, you may want to hide the backup section. You can do so by adding -e DISABLE_BACKUP_SECTION=true to the initial startup of the mastercontainer.

How to change the default location of Nextcloud's Datadir?

⚠️ Attention: It is very important to change the datadir before Nextcloud is installed/started the first time and not to change it afterwards! If you still want to do it afterwards, see this on how to do it.

You can configure the Nextcloud container to use a specific directory on your host as data directory. You can do so by adding the environmental variable NEXTCLOUD_DATADIR to the initial startup of the mastercontainer. Allowed values for that variable are strings that start with / and are not equal to /.

  • An example for Linux is -e NEXTCLOUD_DATADIR="/mnt/ncdata".
  • On macOS it might be -e NEXTCLOUD_DATADIR="/var/nextcloud-data"
  • For Synology it may be -e NEXTCLOUD_DATADIR="/volume1/docker/nextcloud/data".
  • On Windows it must be -e NEXTCLOUD_DATADIR="nextcloud_aio_nextcloud_datadir". In order to use this, you need to create the nextcloud_aio_nextcloud_datadir volume beforehand:
docker volume create ^
--driver local ^
--name nextcloud_aio_nextcloud_datadir ^
-o device="/host_mnt/c/your/data/path" ^
-o type="none" ^
-o o="bind"
  • (The value /host_mnt/c/your/data/path in this example would be equivalent to C:\your\data\path on the Windows host. So you need to translate the path that you want to use into the correct format.) ⚠️️ Attention: Make sure that the path exists on the host before you create the volume! Otherwise everything will bug out!

⚠️ Please make sure to apply the correct permissions to the chosen directory before starting Nextcloud the first time (not needed on Windows).

  • In this example for Linux, the command for this would be sudo chown -R 33:0 /mnt/ncdata and sudo chmod -R 750 /mnt/ncdata.
  • On macOS, the command for this would be sudo chown -R 33:0 /var/nextcloud-data and sudo chmod -R 750 /var/nextcloud-data.
  • For Synology, the command for this example would be sudo chown -R 33:0 /volume1/docker/nextcloud/data and sudo chmod -R 750 /volume1/docker/nextcloud/data
  • On Windows, this command is not needed.

How to allow the Nextcloud container to access directories on the host?

By default, the Nextcloud container is confined and cannot access directories on the host OS. You might want to change this when you are planning to use local external storage in Nextcloud to store some files outside the data directory and can do so by adding the environmental variable NEXTCLOUD_MOUNT to the initial startup of the mastercontainer. Allowed values for that variable are strings that start with / and are not equal to /.

  • Two examples for Linux are -e NEXTCLOUD_MOUNT="/mnt/" and -e NEXTCLOUD_MOUNT="/media/".
  • For Synology it may be -e NEXTCLOUD_MOUNT="/volume1/".
  • On Windows, this option is not supported.

After using this option, please make sure to apply the correct permissions to the directories that you want to use in Nextcloud. E.g. sudo chown -R 33:0 /mnt/your-drive-mountpoint and sudo chmod -R 750 /mnt/your-drive-mountpoint should make it work on Linux when you have used -e NEXTCLOUD_MOUNT="/mnt/".

You can then navigate to the apps management page, activate the external storage app, navigate to https://your-nc-domain.com/settings/admin/externalstorages and add a local external storage directory that will be accessible inside the container at the same place that you've entered. E.g. /mnt/your-drive-mountpoint will be mounted to /mnt/your-drive-mountpoint inside the container, etc.

Be aware though that these locations will not be covered by the built-in backup solution!

How to adjust the Talk port?

By default, the Talk container uses port 3478/UDP and 3478/TCP for connections. You can adjust the port by adding e.g. -e TALK_PORT=3478 to the initial docker run command and adjusting the port to your desired value.

How to adjust the upload limit for Nextcloud?

By default, uploads to Nextcloud are limited to a maximum of 10G. You can adjust the upload limit by providing -e NEXTCLOUD_UPLOAD_LIMIT=10G to the docker run command of the mastercontainer and customizing the value to your liking. It must start with a number and end with G, e.g. 10G.

How to adjust the max execution time for Nextcloud?

By default, uploads to Nextcloud are limited to a maximum execution time of 3600s. You can adjust this time limit by providing -e NEXTCLOUD_MAX_TIME=3600 to the docker run command of the mastercontainer and customizing the value to your liking. It must be a number, e.g. 3600.

What can I do to fix the internal or reserved ip-address error?

If you get an error during the domain validation which states that your ip-address is an internal or reserved ip-address, you can fix this by first making sure that your domain indeed has the correct public ip-address that points to the server and then adding --add-host yourdomain.com:<public-ip-address> to the initial docker run command which will allow the domain validation to work correctly. And so that you know: even if the A record of your domain should change over time, this is no problem since the mastercontainer will not make any attempt to access the chosen domain after the initial domain validation.

How to run this with docker rootless?

You can run AIO also with docker rootless. How to do this is documented here: docker-rootless.md

Huge docker logs

When your containers run for a few days without a restart, the container logs that you can view from the AIO interface can get really huge. You can limit the log sizes by enabling logrotate for docker container logs. Feel free to enable this by following these instructions: https://sandro-keil.de/blog/logrotate-for-docker-container/

Access/Edit Nextcloud files/folders manually

The files and folders that you add to Nextcloud are by default stored in the following directory: /var/lib/docker/volumes/nextcloud_aio_nextcloud_data/_data/ on the host. If needed, you can modify/add/delete files/folders there but ATTENTION: be very careful when doing so because you might corrupt your AIO installation! Best is to create a backup using the built-in backup solution before editing/changing files/folders in there because you will then be able to restore your instance to the backed up state.

After you are done modifying/adding/deleting files/folders, don't forget to apply the correct permissions by running: sudo chown -R 33:0 /var/lib/docker/volumes/nextcloud_aio_nextcloud_data/_data/* and sudo chmod -R 750 /var/lib/docker/volumes/nextcloud_aio_nextcloud_data/_data/* and rescan the files with sudo docker exec -it nextcloud-aio-nextcloud php occ files:scan --all.

How to store the files/installation on a separate drive?

You can move the whole docker library and all its files including all Nextcloud AIO files and folders to a separate drive by first mounting the drive in the host OS (NTFS is not supported) and then following this tutorial: https://www.guguweb.com/2019/02/07/how-to-move-docker-data-directory-to-another-location-on-ubuntu/
(Of course docker needs to be installed first for this to work.)

How to edit Nextcloud's config.php file with a text editor?

You can edit Nextcloud's config.php file directly from the host with your favorite text editor. E.g. like this: sudo nano /var/lib/docker/volumes/nextcloud_aio_nextcloud/_data/config/config.php. Make sure not to break the file, though, as that might corrupt your Nextcloud instance. Ideally, create a backup using the built-in backup solution before editing the file.

Custom skeleton directory

If you want to define a custom skeleton directory, you can do so by putting your skeleton files into /var/lib/docker/volumes/nextcloud_aio_nextcloud_data/_data/skeleton/, applying the correct permissions with sudo chown -R 33:0 /var/lib/docker/volumes/nextcloud_aio_nextcloud_data/_data/skeleton and sudo chmod -R 750 /var/lib/docker/volumes/nextcloud_aio_nextcloud_data/_data/* and setting the skeleton directory option with sudo docker exec -it nextcloud-aio-nextcloud php occ config:system:set skeletondirectory --value="/mnt/ncdata/skeleton". You can read further on this option here: click here

Fail2ban

You can configure your server to block certain ip-addresses using fail2ban as bruteforce protection. Here is how to set it up: https://docs.nextcloud.com/server/stable/admin_manual/installation/harden_server.html#setup-fail2ban. The logpath of AIO is by default /var/lib/docker/volumes/nextcloud_aio_nextcloud/_data/data/nextcloud.log. Do not forget to add chain=DOCKER-USER to your nextcloud jail config (nextcloud.local), otherwise the nextcloud service running on docker will still be accessible even if the IP is banned. Also, you may change the blocked ports to cover all AIO ports: by default 80,443,8080,8443,3478 (see this)
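
A sketch of a matching jail file, assuming the nextcloud filter from the linked hardening guide is installed as /etc/fail2ban/filter.d/nextcloud.conf; the maxretry, bantime and findtime values are placeholders to tune:

# /etc/fail2ban/jail.d/nextcloud.local
[nextcloud]
enabled  = true
port     = 80,443,8080,8443,3478
protocol = tcp
filter   = nextcloud
maxretry = 3
bantime  = 86400
findtime = 43200
logpath  = /var/lib/docker/volumes/nextcloud_aio_nextcloud/_data/data/nextcloud.log
chain    = DOCKER-USER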

LDAP

It is possible to connect to an existing LDAP server. You need to make sure that the LDAP server is reachable from the Nextcloud container. Then you can enable the LDAP app and configure LDAP in Nextcloud manually. If you don't have an LDAP server yet, a recommended option is this docker container: https://hub.docker.com/r/nitnelave/lldap. Make sure here as well that Nextcloud can talk to the LDAP server. The easiest way is to add the LDAP docker container to the docker network nextcloud-aio. Then you can connect to the LDAP container by its name from the Nextcloud container.
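
For example, assuming the LLDAP container is already running under the name lldap, attaching it to the AIO network takes one command; Nextcloud can then reach it by container name:

sudo docker network connect nextcloud-aio lldap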

Netdata

Netdata allows you to monitor your server using a GUI. You can install it by following https://learn.netdata.cloud/docs/agent/packaging/docker#create-a-new-netdata-agent-container.

USER_SQL

If you want to use the user_sql app, the easiest way is to create an additional database container and add it to the docker network nextcloud-aio. Then the Nextcloud container should be able to talk to the database container using its name.

phpMyAdmin, Adminer or pgAdmin

It is possible to install any of these to get a GUI for your AIO database. The pgAdmin container is recommended. You can get some docs on it here: https://www.pgadmin.org/docs/pgadmin4/latest/container_deployment.html. For the container to connect to the aio-database, you need to connect the container to the docker network nextcloud-aio and use nextcloud-aio-database as database host, oc_nextcloud as database username and the password that you get when running sudo grep dbpassword /var/lib/docker/volumes/nextcloud_aio_nextcloud/_data/config/config.php as the password.
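
A minimal sketch for running pgAdmin against the AIO database (the dpage/pgadmin4 image, the published port, and the login credentials are assumptions to adjust):

sudo docker run -d \
--name pgadmin \
--network nextcloud-aio \
--publish 5050:80 \
-e PGADMIN_DEFAULT_EMAIL=admin@example.com \
-e PGADMIN_DEFAULT_PASSWORD=choose-a-password \
dpage/pgadmin4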

Mail server

You can configure one yourself by using either of these three recommended projects: Docker Mailserver, Maddy Mail Server or Mailcow. Docker Mailserver and Maddy Mail Server are probably a bit easier to set up as it is possible to run them using only one container but Mailcow has much more features.

How to migrate from an already existing Nextcloud installation to Nextcloud AIO?

Please see the following documentation on this: migration.md

Requirements for integrating new containers

New containers must meet specific requirements in order to be considered for integration into AIO itself. Even if a container is not integrated, we may add some documentation on it.

What are the requirements?

  1. New containers must be related to Nextcloud. Related means that there must be a feature in Nextcloud that gets added by adding this container.
  2. It must be optionally installable. Disabling and enabling the container from the AIO interface must work and must not produce any unexpected side-effects.
  3. The feature that gets added into Nextcloud by adding the container must be maintained by the Nextcloud GmbH.
  4. It must be possible to run the container without big quirks inside docker containers. Big quirks means e.g. needing to change the capabilities or security options.
  5. The container should not mount directories from the host into the container: only docker volumes should be used.

How to trust user-defined Certification Authorities (CA)?

For some applications it might be necessary to establish a secured connection to a host/server that uses a certificate issued by a Certification Authority that is not trusted out of the box. An example could be configuring LDAPS against the Domain Controller (Active Directory) of an organization.

You can make the Nextcloud container trust any Certification Authority by providing the environmental variable TRUSTED_CACERTS_DIR when starting the AIO-mastercontainer. The value of the variable should be set to the absolute path of a directory on the host that contains one or more Certification Authority certificates. You should use X.509 certificates, Base64 encoded. (Other formats may work but have not been tested!) All the certificates in the directory will be trusted.

When using docker run, the environmental variable can be set with -e TRUSTED_CACERTS_DIR=/path/to/my/cacerts.

In order for the value to be valid, the path should start with / and not end with '/' and point to an existing directory. Pointing the variable directly to a certificate file will not work and may also break things.

How to disable Collabora's Seccomp feature?

The Collabora container enables Seccomp by default, which is a security feature of the Linux kernel. On systems without this kernel feature enabled, you need to provide -e COLLABORA_SECCOMP_DISABLED=true to the initial docker run command in order to make it work.

Download Details:

Author: Nextcloud
Source Code: https://github.com/nextcloud/all-in-one 
License: AGPL-3.0 license

#php #docker #backup #restore 

Reid Rohan

A Node.js Package That Makes Syncing A MongoDB Database to S3 Simple

Node MongoDB / S3 Backup

This is a package that makes backing up your Mongo databases to S3 simple. The binary is a node cron job that runs at midnight every day and backs up the database specified in the config file.

Installation

npm install mongodb_s3_backup -g

Configuration

To configure the backup, you need to pass the binary a JSON configuration file. There is a sample configuration file supplied in the package (config.sample.json). The file should have the following format:

{
  "mongodb": {
    "host": "localhost",
    "port": 27017,
    "username": false,
    "password": false,
    "db": "database_to_backup"
  },
  "s3": {
    "key": "your_s3_key",
    "secret": "your_s3_secret",
    "bucket": "s3_bucket_to_upload_to",
    "destination": "/",
    "encrypt": true,
    "region": "s3_region_to_use"
  },
  "cron": {
    "time": "11:59",
  }
}

All options in the "s3" object, except for destination, will be passed directly to knox; therefore, you can include any of the options listed in the knox documentation.

Crontabs

You may optionally substitute the cron "time" field with an explicit "crontab" of the standard format 0 0 * * *.

  "cron": {
    "crontab": "0 0 * * *"
  }

Note: The version of cron that we run supports a sixth digit (which is in seconds) if you need it.

Timezones

The optional "timezone" allows you to specify timezone-relative time regardless of local timezone on the host machine.

  "cron": {
    "time": "00:00",
    "timezone": "America/New_York"
  }

You must first run npm install time to use the "timezone" specification.

Running

To start a long-running process with scheduled cron job:

mongodb_s3_backup <path to config file>

To execute a backup immediately and exit:

mongodb_s3_backup -n <path to config file>

Download Details:

Author: Theycallmeswift
Source Code: https://github.com/theycallmeswift/node-mongodb-s3-backup 

#javascript #node #mongodb #backup 

Rupert Beatty

Laravel-backup: A Package to Backup Your Laravel App

A modern backup solution for Laravel apps

This Laravel package creates a backup of your application. The backup is a zip file that contains all files in the directories you specify along with a dump of your database. The backup can be stored on any of the filesystems you have configured in Laravel.

Feeling paranoid about backups? No problem! You can backup your application to multiple filesystems at once.

Once installed, taking a backup of your files and databases is very easy. Just issue this artisan command:

php artisan backup:run

But we didn't stop there. The package also provides a backup monitor to check the health of your backups. You can be notified via several channels when a problem with one of your backups is found. To avoid using excessive disk space, the package can also clean up old backups.

Installation and usage

This package requires PHP 8.0 and Laravel 8.0 or higher. You'll find installation instructions and full documentation on https://spatie.be/docs/laravel-backup.

Using an older version of PHP / Laravel?

If you are on a PHP version below 8.0 or a Laravel version below 8.0 just use an older version of this package.

Read the extensive documentation on version 3, on version 4, on version 5 and on version 6. We won't introduce new features to v6 and below anymore but we will still fix bugs.

Testing

Run the tests with:

composer test

Changelog

Please see CHANGELOG for more information on what has changed recently.

Contributing

Please see CONTRIBUTING for details.

Security

If you discover any security-related issues, please email security@spatie.be instead of using the issue tracker.

Postcardware

You're free to use this package, but if it makes it to your production environment we highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using.

Our address is: Spatie, Kruikstraat 22, 2018 Antwerp, Belgium.

We publish all received postcards on our company website.

Credits

And a special thanks to Caneco for the logo ✨

Support us

We invest a lot of resources into creating best in class open source packages. You can support us by buying one of our paid products.

We highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using. You'll find our address on our contact page. We publish all received postcards on our virtual postcard wall.

Author: Spatie
Source Code: https://github.com/spatie/laravel-backup 
License: MIT license

#laravel #backup #php #devops 

Rupert Beatty

Backup-manager: Database Backup Manager

Database Backup Manager 

This package provides a framework-agnostic database backup manager for dumping to and restoring databases from S3, Dropbox, FTP, SFTP, and Rackspace Cloud.

  • use version 2+ for >=PHP 7.3
  • use version 1 for <PHP 7.2
  • supports MySQL and PostgreSQL
  • compress with Gzip
  • framework-agnostic
  • dead simple configuration
  • Laravel Driver
  • Symfony Driver

Quick and Dirty

Configure your databases.

// config/database.php
'development' => [
    'type' => 'mysql',
    'host' => 'localhost',
    'port' => '3306',
    'user' => 'root',
    'pass' => 'password',
    'database' => 'test',
    // If singleTransaction is set to true, the --single-transaction flag will be set.
    // This is useful on transactional databases like InnoDB.
    // http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_single-transaction
    'singleTransaction' => false,
    // Do not dump the given tables
    // Set only table names, without database name
    // Example: ['table1', 'table2']
    // http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_ignore-table
    'ignoreTables' => [],
    // using ssl to connect to your database - active ssl-support (mysql only):
    'ssl'=>false,
    // add additional options to dump-command (like '--max-allowed-packet')
    'extraParams'=>null,
],
'production' => [
    'type' => 'postgresql',
    'host' => 'localhost',
    'port' => '5432',
    'user' => 'postgres',
    'pass' => 'password',
    'database' => 'test',
],

Configure your filesystems.

// config/storage.php
'local' => [
    'type' => 'Local',
    'root' => '/path/to/working/directory',
],
's3' => [
    'type' => 'AwsS3',
    'key'    => '',
    'secret' => '',
    'region' => 'us-east-1',
    'version' => 'latest',
    'bucket' => '',
    'root'   => '',
    'use_path_style_endpoint' => false,
],
'b2' => [
    'type' => 'B2',
    'key'    => '',
    'accountId' => '',
    'bucket' => '',
],
'gcs' => [
    'type' => 'Gcs',
    'key'    => '',
    'secret' => '',
    'version' => 'latest',
    'bucket' => '',
    'root'   => '',
],
'rackspace' => [
    'type' => 'Rackspace',
    'username' => '',
    'key' => '',
    'container' => '',
    'zone' => '',
    'root' => '',
],
'dropbox' => [
    'type' => 'DropboxV2',
    'token' => '',
    'key' => '',
    'secret' => '',
    'app' => '',
    'root' => '',
],
'ftp' => [
    'type' => 'Ftp',
    'host' => '',
    'username' => '',
    'password' => '',
    'root' => '',
    'port' => 21,
    'passive' => true,
    'ssl' => true,
    'timeout' => 30,
],
'sftp' => [
    'type' => 'Sftp',
    'host' => '',
    'username' => '',
    'password' => '',
    'root' => '',
    'port' => 21,
    'timeout' => 10,
    'privateKey' => '',
],
'flysystem' => [
    'type' => 'Flysystem',
    'name' => 's3_backup',
    //'prefix' => 'upload',
],
'doSpaces' => [
    'type' => 'AwsS3',
    'key' => '',
    'secret' => '',
    'region' => '',
    'bucket' => '',
    'root' => '',
    'endpoint' => '',
    'use_path_style_endpoint' => false,
],
'webdav' => [
    'type' => 'Webdav',
    'baseUri' => 'http://myserver.com',
    'userName' => '',
    'password' => '',
    'prefix' => '',
],

Backup to / restore from any configured database.

Backup the development database to Amazon S3. The S3 backup path will be test/backup.sql.gz in the end, when gzip is done with it.

use BackupManager\Filesystems\Destination;

$manager = require 'bootstrap.php';
$manager->makeBackup()->run('development', [new Destination('s3', 'test/backup.sql')], 'gzip');

Backup to / restore from any configured filesystem.

Restore the database file test/backup.sql.gz from Amazon S3 to the development database.

$manager = require 'bootstrap.php';
$manager->makeRestore()->run('s3', 'test/backup.sql.gz', 'development', 'gzip');

This package does not allow you to back up from one database type and restore to another. A MySQL dump is not compatible with PostgreSQL.

Requirements

  • PHP 5.5
  • MySQL support requires mysqldump and mysql command-line binaries
  • PostgreSQL support requires pg_dump and psql command-line binaries
  • Gzip support requires gzip and gunzip command-line binaries

Installation

Composer

Run the following to include this via Composer

composer require backup-manager/backup-manager

Then, you'll need to select the appropriate packages for the adapters that you want to use.

# to support s3
composer require league/flysystem-aws-s3-v3

# to support b2
composer require mhetreramesh/flysystem-backblaze

# to support google cs
composer require league/flysystem-aws-s3-v2

# to install the preferred dropbox v2 driver
composer require spatie/flysystem-dropbox

# to install legacy dropbox v2 driver
composer require srmklive/flysystem-dropbox-v2

# to support rackspace
composer require league/flysystem-rackspace

# to support sftp
composer require league/flysystem-sftp

# to support webdav (used by ownCloud and many others)
composer require league/flysystem-webdav

Usage

Once installed, the package must be bootstrapped (initial configuration) before it can be used.

We've provided a native PHP example, including the required bootstrapping, in the repository's examples.
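
If you just want a feel for the shape of that file, here is a minimal, hedged sketch of a bootstrap.php. The provider and adapter class names are assumptions based on the package's providers, so verify them against the bundled example before relying on this:

<?php
// bootstrap.php -- a minimal sketch, not the package's official example.
// Class names are assumed from the package's providers; verify them
// against the example shipped with the repository.
require_once 'vendor/autoload.php';

use BackupManager\Config\Config;
use BackupManager\Filesystems;
use BackupManager\Databases;
use BackupManager\Compressors;
use BackupManager\Manager;

// Register the filesystems configured in config/storage.php.
$filesystems = new Filesystems\FilesystemProvider(Config::fromPhpFile('config/storage.php'));
$filesystems->add(new Filesystems\LocalFilesystem);

// Register the databases configured in config/database.php.
$databases = new Databases\DatabaseProvider(Config::fromPhpFile('config/database.php'));
$databases->add(new Databases\MysqlDatabase);
$databases->add(new Databases\PostgresqlDatabase);

// Register compressors; the null compressor handles uncompressed dumps.
$compressors = new Compressors\CompressorProvider;
$compressors->add(new Compressors\GzipCompressor);
$compressors->add(new Compressors\NullCompressor);

return new Manager($filesystems, $databases, $compressors);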

Contribution Guidelines

We recommend using the Vagrant configuration supplied with this package for development and contribution. Simply install VirtualBox, Vagrant, and Ansible, then run vagrant up in the root folder. A virtual machine specifically designed for development of the package will be built and launched for you.

When contributing please consider the following guidelines:

  • Code style is PSR-2
    • Interfaces should NOT be suffixed with Interface, Traits should NOT be suffixed with Trait.
  • All methods and classes must contain docblocks.
  • Ensure that you submit tests with a minimum of 100% coverage. Given the project's simplicity, it just makes sense.
  • When planning a pull-request to add new functionality, it may be wise to submit a proposal to ensure compatibility with the project's goals.

Maintainers

This package is maintained by Shawn McCool and you!

Backwards Compatibility Breaks

3.0

Removed support for Symfony 2, specifically symfony/process versions < 3.x.

Watch a video tour showing the Laravel driver in action to give you an idea of what is possible.

Author: Backup-manager
Source Code: https://github.com/backup-manager/backup-manager 
License: MIT license

#laravel #backup #mysql #database #php 

Backup-manager: Database Backup Manager
Rupert  Beatty

Rupert Beatty

1658740620

Backup-manager/laravel: Laravel Driver for the Database Backup Manager

Laravel Driver for the Database Backup Manager

This package pulls in the framework-agnostic Backup Manager and provides seamless integration with Laravel.

Note: This package is for Laravel integration only. For information about the framework-agnostic core package (or the Symfony driver) please see the base package repository.

Stability Notice

It's stable enough, but you'll need to understand filesystem permissions.

This package is being actively developed, and we would like to get feedback to improve it. Please feel free to submit feedback.

Requirements

  • PHP 7.3+
  • Laravel 5.5+
  • MySQL support requires mysqldump and mysql command-line binaries
  • PostgreSQL support requires pg_dump and psql command-line binaries
  • Gzip support requires gzip and gunzip command-line binaries

Installation

Composer

Run the following to include this via Composer

composer require backup-manager/laravel

Then, you'll need to select the appropriate packages for the adapters that you want to use.

# to support s3 or google cs
composer require league/flysystem-aws-s3-v3

# to support dropbox
composer require srmklive/flysystem-dropbox-v2

# to support rackspace
composer require league/flysystem-rackspace

# to support sftp
composer require league/flysystem-sftp

# to support gcs
composer require superbalist/flysystem-google-storage

Laravel 5 Configuration

To install into a Laravel project, first do the composer install, then add ONE of the following classes to your config/app.php service providers list.

// FOR LARAVEL 5.5 +
BackupManager\Laravel\Laravel55ServiceProvider::class,

Publish the storage configuration file.

php artisan vendor:publish --provider="BackupManager\Laravel\Laravel55ServiceProvider"

The Backup Manager will make use of Laravel's database configuration. But it won't know about any connections that might be tied to other environments, so it's often best to list multiple connections in the config/database.php file.

We can also add extra parameters to our backup manager commands by configuring them in the .env file:

BACKUP_MANAGER_EXTRA_PARAMS="--column-statistics=0 --max-allowed-packet"

Lumen Configuration

To install into a Lumen project, first do the composer install, then add the configuration file loader and ONE of the following service providers to your bootstrap/app.php.

// FOR LUMEN 5.5 AND ABOVE
$app->configure('backup-manager');
$app->register(BackupManager\Laravel\Lumen55ServiceProvider::class);

Copy the vendor/backup-manager/laravel/config/backup-manager.php file to config/backup-manager.php and configure it to suit your needs.
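
On a typical setup, that copy is a one-liner from the project root:

cp vendor/backup-manager/laravel/config/backup-manager.php config/backup-manager.php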

IoC Resolution

BackupManager\Manager can be automatically resolved through constructor injection thanks to Laravel's IoC container.

use BackupManager\Manager;

public function __construct(Manager $manager) {
    $this->manager = $manager;
}

It can also be resolved manually from the container.

$manager = App::make(\BackupManager\Manager::class);

Artisan Commands

There are three commands available: db:backup, db:restore and db:list.

All will prompt you with simple questions to successfully execute the command.

Example Command for a 24-hour Scheduled Cron Job

php artisan db:backup --database=mysql --destination=dropbox --destinationPath=project --timestamp="d-m-Y" --compression=gzip

This command will back up your database to Dropbox using MySQL and gzip compression, at the path /backups/project/DATE.gz (e.g. /backups/project/31-7-2015.gz).
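
If you drive this from cron directly instead of Laravel's scheduler, a hedged crontab sketch could look like this (the project path and log file are placeholders):

# Run the backup every day at 03:00; adjust the project path to your setup.
0 3 * * * cd /var/www/project && php artisan db:backup --database=mysql --destination=dropbox --destinationPath=project --timestamp="d-m-Y" --compression=gzip >> /var/log/db-backup.log 2>&1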

Scheduling Backups

It's possible to schedule backups using Laravel's scheduler.

/**
 * Define the application's command schedule.
 *
 * @param  \Illuminate\Console\Scheduling\Schedule  $schedule
 * @return void
 */
 protected function schedule(Schedule $schedule) {
     $environment = config('app.env');
     $schedule->command(
         "db:backup --database=mysql --destination=s3 --destinationPath=/{$environment}/projectname --timestamp="Y_m_d_H_i_s" --compression=gzip"
         )->twiceDaily(13,21);
 }

Contribution Guidelines

We recommend using the Vagrant configuration supplied with this package for development and contribution. Simply install VirtualBox, Vagrant, and Ansible, then run vagrant up in the root folder. A virtual machine specifically designed for development of the package will be built and launched for you.

When contributing please consider the following guidelines:

  • Please conform to the code style of the project; it's essentially PSR-2 with a few differences.
    1. The NOT operator when next to parenthesis should be surrounded by a single space. if ( ! is_null(...)) {.
    2. Interfaces should NOT be suffixed with Interface, Traits should NOT be suffixed with Trait.
  • All methods and classes must contain docblocks.
  • Ensure that you submit tests with a minimum of 100% coverage.
  • When planning a pull-request to add new functionality, it may be wise to submit a proposal to ensure compatibility with the project's goals.

Maintainers

This package is maintained by Shawn McCool and open-source heroes.

Changelog

2.0

Released on 2020-04-30

Removed support for all Laravel versions below 5.5. All older versions should use backup-manager ^1.0.

Since so many dependencies in Laravel / Symfony have changed, it became impossible to support newer versions in the same code base. Release ^1.0 is stable and still accepts stability fixes (we haven't seen anything to fix in a long time).

Watch a video tour to get an idea of what is possible with this package.

Author: Backup-manager
Source Code: https://github.com/backup-manager/laravel 
License: MIT license

#laravel #driver #backup 

Backup-manager/laravel: Laravel Driver for the Database Backup Manager

DatabaseBackup: A Plugin to Export, Import and Manage Database Backups

cakephp-database-backup

DatabaseBackup is a CakePHP plugin to export, import and manage database backups. Currently, the plugin supports MySQL, Postgres and SQLite databases.

Installation

You can install the plugin via composer:

$ composer require --prefer-dist mirko-pagliai/cakephp-database-backup

Then you have to load the plugin. For more information on how to load the plugin, please refer to the Cookbook.

Alternatively, you can simply execute this shell command to enable the plugin:

bin/cake plugin load DatabaseBackup

This would update your application's bootstrap method.

By default, the plugin uses the APP/backups directory to save the backup files, so you have to create the directory and make it writable:

$ mkdir backups/ && chmod 775 backups/

If you want to use a different directory, read the Configuration section.
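
As a hedged sketch of what that can look like, assuming DatabaseBackup.target is the plugin's configuration key for the backup directory (confirm this in the wiki before using it):

// e.g. in config/bootstrap.php or your app configuration
use Cake\Core\Configure;

// Assumed key: points the plugin at a custom backup directory.
Configure::write('DatabaseBackup.target', ROOT . DS . 'my_backups');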

Installation on older CakePHP and PHP versions

Recent packages and the master branch require at least CakePHP 4.0 and PHP 7.2. By contrast, the cakephp3 branch requires at least PHP 5.6.

In that case, you can still install the package:

$ composer require --prefer-dist mirko-pagliai/cakephp-database-backup:dev-cakephp3

Note that the cakephp3 branch will no longer be updated as of April 29, 2021, except for security patches, and it matches the 2.8.5 version.

Requirements

DatabaseBackup requires:

  • mysql and mysqldump for MySQL databases;
  • pg_dump and pg_restore for Postgres databases;
  • sqlite3 for SQLite databases.

Optionally, if you want to handle compressed backups, bzip2 and gzip are also required.

The installation of these binaries may vary depending on your operating system.

Also, remember that the database user must have the correct permissions (for example, for MySQL the user must have the LOCK TABLES privilege).
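
As a minimal sketch for MySQL (user, host and database names are placeholders, and a full mysqldump may additionally need SHOW VIEW, TRIGGER and EVENT privileges depending on your schema):

-- grant the minimum typically needed for plain mysqldump backups
GRANT SELECT, LOCK TABLES ON my_app_db.* TO 'backup_user'@'localhost';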

Configuration

The plugin uses some configuration parameters; see our wiki for the full list.

If you want to send backup files by email, remember to set up your application correctly so that it can send emails. For more information on how to configure your application, see the Cookbook.

How to use

See our wiki.

And refer to our API.
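
As a hedged sketch of programmatic use, assuming the BackupExport utility and its fluent methods work the way we read the plugin's API docs (verify against the wiki before relying on this):

use DatabaseBackup\Utility\BackupExport;

// Export a gzip-compressed backup into the configured target directory.
// The compression format is assumed to be inferred from the file extension.
$file = (new BackupExport())->filename('backup.sql.gz')->export();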

Testing

Tests are run for only one driver at a time, mysql by default. To choose another driver, you can set the driver_test environment variable before running phpunit.

For example:

driver_test=sqlite vendor/bin/phpunit
driver_test=postgres vendor/bin/phpunit

Alternatively, you can set the db_dsn environment variable, indicating the connection parameters. In this case, the driver type will still be detected automatically.

For example:

db_dsn='sqlite:///path/to/tmp/example.sq3' vendor/bin/phpunit  # path is a placeholder

Versioning

For transparency and insight into our release cycle and to maintain backward compatibility, DatabaseBackup will be maintained under the Semantic Versioning guidelines.

Did you like this plugin? Its development takes a lot of my time. Please consider making a donation: even a coffee is enough! Thank you.

Make a donation

Author: Mirko-pagliai
Source Code: https://github.com/mirko-pagliai/cakephp-database-backup 
License: View license

#php #cakephp #database #backup 

DatabaseBackup: A Plugin to Export, Import and Manage Database Backups

Restic: Fast, Secure, Efficient Backup Program

Introduction

restic is a backup program that is fast, efficient and secure. It supports the three major operating systems (Linux, macOS, Windows) and a few smaller ones (FreeBSD, OpenBSD).

Quick start

Once you've installed restic, start off with creating a repository for your backups:

$ restic init --repo /tmp/backup
enter password for new backend:
enter password again:
created restic backend 085b3c76b9 at /tmp/backup
Please note that knowledge of your password is required to access the repository.
Losing your password means that your data is irrecoverably lost.

and add some data:

$ restic --repo /tmp/backup backup ~/work
enter password for repository:
scan [/home/user/work]
scanned 764 directories, 1816 files in 0:00
[0:29] 100.00%  54.732 MiB/s  1.582 GiB / 1.582 GiB  2580 / 2580 items  0 errors  ETA 0:00
duration: 0:29, 54.47MiB/s
snapshot 40dc1520 saved

Next you can either use restic restore to restore files, or use restic mount to mount the repository via FUSE and browse the files from previous snapshots.
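
For example, continuing with the snapshot created above (the target and mount directories are our choices and must already exist):

$ restic --repo /tmp/backup restore 40dc1520 --target /tmp/restore-work
$ restic --repo /tmp/backup mount /mnt/restic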

For more options check out the online documentation.

Backends

Saving a backup on the same machine is nice but not a real backup strategy. Therefore, restic supports the following backends for storing backups natively:

  • Local directory
  • sftp server (via SSH)
  • HTTP REST server
  • Amazon S3 (either from Amazon or using the Minio server)
  • OpenStack Swift
  • BackBlaze B2
  • Microsoft Azure Blob Storage
  • Google Cloud Storage
  • And many other services via the rclone backend

Design Principles

Restic is a program that does backups right and was designed with the following principles in mind:

Easy: Doing backups should be a frictionless process, otherwise you might be tempted to skip it. Restic should be easy to configure and use, so that, in the event of a data loss, you can just restore it. Likewise, restoring data should not be complicated.

Fast: Backing up your data with restic should only be limited by your network or hard disk bandwidth so that you can backup your files every day. Nobody does backups if it takes too much time. Restoring backups should only transfer data that is needed for the files that are to be restored, so that this process is also fast.

Verifiable: Much more important than backup is restore, so restic enables you to easily verify that all data can be restored.

Secure: Restic uses cryptography to guarantee confidentiality and integrity of your data. The location the backup data is stored is assumed not to be a trusted environment (e.g. a shared space where others like system administrators are able to access your backups). Restic is built to secure your data against such attackers.

Efficient: With the growth of data, additional snapshots should only take the storage of the actual increment. Even more, duplicate data should be de-duplicated before it is actually written to the storage back end to save precious backup space.

Reproducible Builds

The binaries released with each restic version starting at 0.6.1 are reproducible, which means that you can reproduce a byte identical version from the source code for that release. Instructions on how to do that are contained in the builder repository.

News

You can follow the restic project on Twitter @resticbackup or by subscribing to the project blog.


For detailed usage and installation instructions check out the documentation.

You can ask questions in our Discourse forum.


Author: Restic
Source Code: https://github.com/restic/restic 
License: BSD-2-Clause license

#go #golang #backup 

Restic: Fast, Secure, Efficient Backup Program

How to Back Up Your Hashnode Articles to GitHub

Many developers have a personal blog on Hashnode, one of the most popular blogging communities for people exploring technology.

I write on Hashnode too, but I recently wondered whether there was a way to back up my published articles to GitHub. Fortunately, I found an easy way to do it directly from Hashnode.

So, in this article, I'll share the process so that you can also create an automatic backup of your articles in your GitHub repository. There's no harm in trying something new, right? 😊

✨ I also made a full-length video showing the process of setting up that automatic backup to your GitHub repository. The video is attached later in this article, so you can check that out as well.

First of all, let me show you my Hashnode account. This is just a sample of a typical profile there.

I have published only one article (as of today, May 15, 2022). But I still want to set up an automatic backup process for this account, so that whenever I write something new, all the articles get copied to my chosen GitHub repository as a backup.

So let's do that, shall we? 😁

How to back up your Hashnode articles to GitHub

Go to your GitHub account. First, we need to create a dedicated repository where the automatic backup will be kept.

Now simply click the + button at the top right of the web page.

Then click New repository.

Now we have to create a new repository. The process is the same as for creating any other repository.

You can make the repository Public or Private, whichever suits you best. I'm making mine private, but you don't have to if you don't want to.

Give the repository any name you like.

For the remaining options, select whatever you want. For this article, I'm keeping them at their defaults.

Then click Create repository.

After that, you don't actually need to do anything in this repository for now, so I'll leave it as it is.

Now, head over to your Hashnode account.

Click your profile icon at the top right of the profile page.

Select the blog you want to back up. For me, it's https://fahimbinamin.hashnode.dev/.

Click Blog Dashboard.

The blog dashboard will appear in front of you.

Simply scroll down until you find Backup. You'll find it at the bottom left of the web page. Click on that.

You will now see the GitHub backup page.

Click Back up all my posts.

Select where you want to install it.

Since I currently belong to 5 GitHub organizations, it suggests all of them to me. As I want to create the backup in my personal GitHub account, I'll select my profile, FahimFBA.

The install and authorize section will appear.

By default, it selects "All repositories". But keep in mind that you should not select All repositories, since that would let the backup app write to every repository you currently have in your personal GitHub account.

Select "Only select repositories" and find the repository you created for backing up your Hashnode blog.

Simply click and select the repository.

If everything looks good, you can click "Install & Authorize", but once again, keep in mind that you should not select All repositories.

Provide your GitHub account password if it asks for it.

It will redirect you back to Hashnode to continue the installation process.

You have now created your automatic backup. If you want to back up your existing posts, you need to click "Back up all my posts".

It will back up all the posts you had before creating the automatic backup.

Now, if you simply refresh the GitHub repository web page (the repository you created just to back up this Hashnode blog), you'll see that it has been updated as well.

You can even read the article directly from your backed-up repository.

That's it!

Conclusion

I want to thank you all from the bottom of my heart for reading the whole article. If you want to chat with me, I'm available on Twitter and LinkedIn.

I'm also available on:

GitHub: https://github.com/FahimFBA 

Source: https://www.freecodecamp.org/news/how-to-backup-hashnode-articles-to-github/

 #backup #hashnode

How to Back Up Your Hashnode Articles to GitHub

Hermann  Frami

Hermann Frami

1650169320

Serverless Plugin Dynamodb Backups

Serverless plugin DynamoDB backups

Introduction

If you want to automate your AWS DynamoDB database backups, this plugin may be what you need.

As we build various services on AWS using the "serverless" design, we need reusable backup services that are both scalable and easy to implement. We therefore created this plugin to make sure that each project can create its own automated DynamoDB backup solution.

This plugin simplifies automating DynamoDB backup creation for all the resources created in serverless.yml when using the Serverless Framework with the AWS cloud provider.

This plugin officially supports Node.js 12.x, 14.x and 16.x.

This plugin officially supports Serverless Framework >=2.0.0.

Benefits

  • Automated backups of your configured resources (serverless.yml)
  • Error reporting to a Slack channel (see configuration)
  • Automatic deletion of old backups (AKA "managed backup retention") (see configuration)

Installation

Install the plugin using either Yarn or NPM (we use Yarn).

NPM:

npm install @unly/serverless-plugin-dynamodb-backups

YARN:

yarn add @unly/serverless-plugin-dynamodb-backups

Usage

Step 1: Load the Plugin

The plugin determines your environment during deployment and adds all environment variables to your Lambda function. All you need to do is to load the plugin:

It must be declared before serverless-webpack, despite what their official docs say.

plugins:
  - '@unly/serverless-plugin-dynamodb-backups' # Must be first, even before "serverless-webpack", see https://github.com/UnlyEd/serverless-plugin-dynamodb-backups
  - serverless-webpack # Must be second, see https://github.com/99xt/serverless-dynamodb-local#using-with-serverless-offline-and-serverless-webpack-plugin

Step 2: Create the backups handler function:

Create a file, which will be called when performing a DynamoDB backup (we named it src/backups.js in our examples folder):

import dynamodbAutoBackups from '@unly/serverless-plugin-dynamodb-backups/lib';

export const handler = dynamodbAutoBackups;

Step 3: Configure your serverless.yml

Set the dynamodbAutoBackups object configuration as follows (list of all available options below):

custom:
  dynamodbAutoBackups:
    backupRate: rate(40 minutes) # Every 40 minutes, from the time it was deployed
    source: src/backups.js # Path to the handler function we created in step #2
    active: true

Configuration of dynamodbAutoBackups object:

  • source (String, required): Path to your handler function.
  • backupRate (String, required): The schedule on which you want to backup your table. You can use either rate syntax (rate(1 hour)) or cron syntax (cron(0 12 * * ? *)). See here for more details on configuration.
  • name (String, optional, default: auto): Automatically set, but you could provide your own name for this lambda.
  • slackWebhook (String, optional): An HTTPS endpoint for an incoming webhook to Slack. If provided, it will send error messages to a Slack channel.
  • backupRemovalEnabled (Boolean, optional, default: false): Enables cleanup of old backups. See the backupRetentionDays option to specify the retention period. By default, backup removal is disabled.
  • backupRetentionDays (Integer, optional): Specify the number of days to retain old backups. For example, setting the value to 2 will remove all backups that are older than 2 days. Required if backupRemovalEnabled is true.
  • backupType (String, optional, default: "ALL"): USER is an on-demand backup created by you; SYSTEM is an on-demand backup automatically created by DynamoDB; ALL covers all types of on-demand backups (USER and SYSTEM).
  • active (Boolean, optional, default: true): You can disable this plugin, which is useful on a non-production environment, for instance.



Examples of configurations:

1. Create backups every 40 minutes, delete all backups older than 15 days, and send Slack notifications if backups are not created.

custom:
  dynamodbAutoBackups:
    backupRate: rate(40 minutes) # Every 40 minutes, from the time it was deployed
    source: src/backups.js
    slackWebhook: https://hooks.slack.com/services/T4XHXX5C6/TT3XXXM0J/XXXXXSbhCXXXX77mFBr0ySAm
    backupRemovalEnabled: true # Enable backupRetentionDays
    backupRetentionDays: 15 # If backupRemovalEnabled is not provided, then backupRetentionDays is not used

2. Create backups every Friday at 2:00 am, delete all USER-created backups older than 3 days, and be warned if backups are not created.

custom:
  dynamodbAutoBackups:
    backupRate: cron(0 2 ? * FRI *) # Every friday at 2:00 am
    source: src/backups.js
    slackWebhook: https://hooks.slack.com/services/T4XHXX5C6/TT3XXXM0J/XXXXXSbhCXXXX77mFBr0ySAm
    backupRemovalEnabled: true # Enable backupRetentionDays
    backupRetentionDays: 3 # If backupRemovalEnabled is not provided, then backupRetentionDays is not used
    backupType: USER  # Delete all backups created by a user, not the system backups

Try it out yourself

To test this plugin, you can clone this repository. Go to examples/serverless-example, and follow the README.


Vulnerability disclosure

See our policy.


Contributors and maintainers

This project is being maintained by:

Thanks to our contributors:

Author: UnlyEd
Source Code: https://github.com/UnlyEd/serverless-plugin-dynamodb-backups 
License: MIT License

#serverless #plugin #backup 

Serverless Plugin Dynamodb Backups