A Self-hosted Archiving Service integrated with Internet Archive

Wayback

Wayback is a tool that runs as a command-line program or a Docker container, and its purpose is to snapshot webpages into time capsules.

Features

  • Free and open-source
  • Cross-platform compatibility
  • Batch wayback URLs for faster archiving
  • Built-in CLI (wayback) for convenient use
  • Serve as a Tor Hidden Service or local web entry for added privacy and accessibility
  • Easy archiving to Internet Archive, archive.today, IPFS, and Telegraph
  • Interacts with IRC, Matrix, Telegram, Discord, Mastodon, and Twitter as a daemon service for convenient use
  • Supports publishing wayback results to Telegram channel, Mastodon, and GitHub Issues for sharing
  • Supports storing archived files to disk for offline use
  • Download streaming media (requires FFmpeg) for convenient media archiving

Installation

The simplest, cross-platform way is to download from GitHub Releases and place the executable file in your PATH.

From source:

go install github.com/wabarc/wayback/cmd/wayback@latest

From GitHub Releases:

curl -fsSL https://github.com/wabarc/wayback/raw/main/install.sh | sh

or via Bina:

curl -fsSL https://bina.egoist.dev/wabarc/wayback | sh

Using Snapcraft (on GNU/Linux):

sudo snap install wayback

Via APT:

curl -fsSL https://repo.wabarc.eu.org/apt/gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/packages.wabarc.gpg
echo "deb [arch=amd64,arm64,armhf signed-by=/usr/share/keyrings/packages.wabarc.gpg] https://repo.wabarc.eu.org/apt/ /" | sudo tee /etc/apt/sources.list.d/wayback.list
sudo apt update
sudo apt install wayback

Via RPM:

sudo rpm --import https://repo.wabarc.eu.org/yum/gpg.key
sudo tee /etc/yum.repos.d/wayback.repo > /dev/null <<EOT
[wayback]
name=Wayback Archiver
baseurl=https://repo.wabarc.eu.org/yum/
enabled=1
gpgcheck=1
gpgkey=https://repo.wabarc.eu.org/yum/gpg.key
EOT

sudo dnf install -y wayback

Via Homebrew:

brew tap wabarc/wayback
brew install wayback

Usage

Command line

$ wayback -h

A command-line tool and daemon service for archiving webpages.

Usage:
  wayback [flags]

Examples:
  wayback https://www.wikipedia.org
  wayback https://www.fsf.org https://www.eff.org
  wayback --ia https://www.fsf.org
  wayback --ia --is -d telegram -t your-telegram-bot-token
  WAYBACK_SLOT=pinata WAYBACK_APIKEY=YOUR-PINATA-APIKEY \
    WAYBACK_SECRET=YOUR-PINATA-SECRET wayback --ip https://www.fsf.org

Flags:
      --chatid string      Telegram channel id
  -c, --config string      Configuration file path, defaults: ./wayback.conf, ~/wayback.conf, /etc/wayback.conf
  -d, --daemon strings     Run as daemon service, supported services are telegram, web, mastodon, twitter, discord, slack, irc
      --debug              Enable debug mode (default mode is false)
  -h, --help               help for wayback
      --ia                 Wayback webpages to Internet Archive
      --info               Show application information
      --ip                 Wayback webpages to IPFS
      --ipfs-host string   IPFS daemon host, do not require, unless enable ipfs (default "127.0.0.1")
  -m, --ipfs-mode string   IPFS mode (default "pinner")
  -p, --ipfs-port uint     IPFS daemon port (default 5001)
      --is                 Wayback webpages to Archive Today
      --ph                 Wayback webpages to Telegraph
      --print              Show application configurations
  -t, --token string       Telegram Bot API Token
      --tor                Snapshot webpage via Tor anonymity network
      --tor-key string     The private key for Tor Hidden Service
  -v, --version            version for wayback

Examples

Wayback one or more URLs to Internet Archive and archive.today:

wayback https://www.wikipedia.org

wayback https://www.fsf.org https://www.eff.org

Wayback a URL to Internet Archive, archive.today, or IPFS:

// Internet Archive
$ wayback --ia https://www.fsf.org

// archive.today
$ wayback --is https://www.fsf.org

// IPFS
$ wayback --ip https://www.fsf.org

When using IPFS, you can also specify a pinning service:

$ export WAYBACK_SLOT=pinata
$ export WAYBACK_APIKEY=YOUR-PINATA-APIKEY
$ export WAYBACK_SECRET=YOUR-PINATA-SECRET
$ wayback --ip https://www.fsf.org

// or

$ WAYBACK_SLOT=pinata WAYBACK_APIKEY=YOUR-PINATA-APIKEY \
    WAYBACK_SECRET=YOUR-PINATA-SECRET wayback --ip https://www.fsf.org

More details about pinning services can be found in the ipfs-pinner project.

With telegram bot:

wayback --ia --is --ip -d telegram -t your-telegram-bot-token

Publish message to your Telegram channel at the same time:

wayback --ia --is --ip -d telegram -t your-telegram-bot-token --chatid your-telegram-channel-name

You can also run it in debug mode:

wayback -d telegram -t YOUR-BOT-TOKEN --debug

Serve on both Telegram and a Tor hidden service:

wayback -d telegram -t YOUR-BOT-TOKEN -d web

URLs from a file:

wayback url.txt
cat url.txt | wayback
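Since wayback reads one URL per line, the list can be prepared with standard shell tools. A small sketch (the comment-stripping convention below is our own, not something wayback itself requires):

```shell
# Build a URL list, one entry per line; strip comment and blank lines
# before feeding it to wayback (the filtering is illustrative).
cat > url.txt <<'EOF'
https://www.fsf.org
# a comment line, skipped by the filter below
https://www.eff.org
EOF

grep -Ev '^[[:space:]]*(#|$)' url.txt > urls.clean.txt
cat urls.clean.txt
# then: cat urls.clean.txt | wayback
```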

Configuration Parameters

By default, wayback looks for configuration options in the following files, parsed in this order:

  • ./wayback.conf
  • ~/wayback.conf
  • /etc/wayback.conf

Use the -c / --config option to specify a different configuration file.

You can also specify configuration options via command-line flags or environment variables; an overview of all options follows.
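For reference, a minimal configuration file might look like the following sketch; this assumes wayback.conf uses the same KEY=value pairs as the environment variables listed in the table (check the project documentation for the authoritative format, and treat every value here as a placeholder):

```shell
# wayback.conf — hypothetical example; keys mirror the environment
# variables, values are placeholders.
WAYBACK_ENABLE_IA=true
WAYBACK_ENABLE_IS=true
WAYBACK_ENABLE_IP=false
WAYBACK_TELEGRAM_TOKEN=your-telegram-bot-token
WAYBACK_TELEGRAM_CHANNEL=your-telegram-channel-name
WAYBACK_POOLING_SIZE=3
```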

| Flags | Environment Variable | Default | Description |
|-------|----------------------|---------|-------------|
| --debug | DEBUG | false | Enable debug mode, overrides LOG_LEVEL |
| -c, --config | - | - | Configuration file path, defaults: ./wayback.conf, ~/wayback.conf, /etc/wayback.conf |
| - | LOG_TIME | true | Display the date and time in log messages |
| - | LOG_LEVEL | info | Log level; supported levels are debug, info, warn, error, fatal |
| - | ENABLE_METRICS | false | Enable metrics collector |
| - | WAYBACK_LISTEN_ADDR | 0.0.0.0:8964 | The listen address for the HTTP server |
| - | CHROME_REMOTE_ADDR | - | Chrome/Chromium remote debugging address, for screenshots; format: host:port, wss://domain.tld |
| - | WAYBACK_POOLING_SIZE | 3 | Size of the worker pool for concurrent wayback requests |
| - | WAYBACK_BOLT_PATH | ./wayback.db | File path of the bolt database |
| - | WAYBACK_STORAGE_DIR | - | Directory to store binary files, e.g. PDF and HTML files |
| - | WAYBACK_MAX_MEDIA_SIZE | 512MB | Max size limit for downloading streaming media |
| - | WAYBACK_MEDIA_SITES | - | Extra media websites to support, comma-separated |
| - | WAYBACK_TIMEOUT | 300 | Timeout for a single wayback request, in seconds |
| - | WAYBACK_MAX_RETRIES | 2 | Max retries for a single wayback request |
| - | WAYBACK_USERAGENT | WaybackArchiver/1.0 | User-Agent for wayback requests |
| - | WAYBACK_FALLBACK | off | Use Google cache as a fallback if the original webpage is unavailable |
| - | WAYBACK_MEILI_ENDPOINT | - | Meilisearch API endpoint |
| - | WAYBACK_MEILI_INDEXING | capsules | Meilisearch index name |
| - | WAYBACK_MEILI_APIKEY | - | Meilisearch admin API key |
| -d, --daemon | - | - | Run as a daemon service, e.g. telegram, web, mastodon, twitter, discord |
| --ia | WAYBACK_ENABLE_IA | true | Wayback webpages to Internet Archive |
| --is | WAYBACK_ENABLE_IS | true | Wayback webpages to archive.today |
| --ip | WAYBACK_ENABLE_IP | false | Wayback webpages to IPFS |
| --ph | WAYBACK_ENABLE_PH | false | Wayback webpages to Telegra.ph; requires Chrome/Chromium |
| --ipfs-host | WAYBACK_IPFS_HOST | 127.0.0.1 | IPFS daemon service host |
| -p, --ipfs-port | WAYBACK_IPFS_PORT | 5001 | IPFS daemon service port |
| -m, --ipfs-mode | WAYBACK_IPFS_MODE | pinner | IPFS mode for preserving webpages, e.g. daemon, pinner |
| - | WAYBACK_IPFS_TARGET | web3storage | IPFS pinning service used to store files; supported pinners: infura, pinata, nftstorage, web3storage |
| - | WAYBACK_IPFS_APIKEY | - | API key of the IPFS pinning service |
| - | WAYBACK_IPFS_SECRET | - | Secret of the IPFS pinning service |
| - | WAYBACK_GITHUB_TOKEN | - | GitHub personal access token; requires the repo scope |
| - | WAYBACK_GITHUB_OWNER | - | GitHub account name |
| - | WAYBACK_GITHUB_REPO | - | GitHub repository to publish results to |
| - | WAYBACK_NOTION_TOKEN | - | Notion integration token |
| - | WAYBACK_NOTION_DATABASE_ID | - | Notion database ID for archiving results |
| -t, --token | WAYBACK_TELEGRAM_TOKEN | - | Telegram Bot API token |
| --chatid | WAYBACK_TELEGRAM_CHANNEL | - | Telegram public/private channel ID to publish archive results to |
| - | WAYBACK_TELEGRAM_HELPTEXT | - | Help text for the Telegram command |
| - | WAYBACK_MASTODON_SERVER | - | Domain of the Mastodon instance |
| - | WAYBACK_MASTODON_KEY | - | Client key of your Mastodon application |
| - | WAYBACK_MASTODON_SECRET | - | Client secret of your Mastodon application |
| - | WAYBACK_MASTODON_TOKEN | - | Access token of your Mastodon application |
| - | WAYBACK_TWITTER_CONSUMER_KEY | - | Consumer key of your Twitter application |
| - | WAYBACK_TWITTER_CONSUMER_SECRET | - | Consumer secret of your Twitter application |
| - | WAYBACK_TWITTER_ACCESS_TOKEN | - | Access token of your Twitter application |
| - | WAYBACK_TWITTER_ACCESS_SECRET | - | Access secret of your Twitter application |
| - | WAYBACK_IRC_NICK | - | IRC nick |
| - | WAYBACK_IRC_PASSWORD | - | IRC password |
| - | WAYBACK_IRC_CHANNEL | - | IRC channel |
| - | WAYBACK_IRC_SERVER | irc.libera.chat:6697 | IRC server; TLS required |
| - | WAYBACK_MATRIX_HOMESERVER | https://matrix.org | Matrix homeserver |
| - | WAYBACK_MATRIX_USERID | - | Matrix unique user ID, format: @foo:example.com |
| - | WAYBACK_MATRIX_ROOMID | - | Matrix internal room ID, format: !bar:example.com |
| - | WAYBACK_MATRIX_PASSWORD | - | Matrix password |
| - | WAYBACK_DISCORD_BOT_TOKEN | - | Discord bot authorization token |
| - | WAYBACK_DISCORD_CHANNEL | - | Discord channel ID |
| - | WAYBACK_DISCORD_HELPTEXT | - | Help text for the Discord command |
| - | WAYBACK_SLACK_APP_TOKEN | - | App-level token of the Slack app |
| - | WAYBACK_SLACK_BOT_TOKEN | - | Bot user OAuth token for the Slack workspace; use a user OAuth token if creating external links is required |
| - | WAYBACK_SLACK_CHANNEL | - | Channel ID of the Slack channel |
| - | WAYBACK_SLACK_HELPTEXT | - | Help text for the Slack slash command |
| - | WAYBACK_NOSTR_RELAY_URL | wss://nostr.developer.li | Nostr relay server URL; multiple URLs separated by commas |
| - | WAYBACK_NOSTR_PRIVATE_KEY | - | Private key of a Nostr account |
| --tor | WAYBACK_USE_TOR | false | Snapshot webpages via the Tor anonymity network |
| --tor-key | WAYBACK_TOR_PRIVKEY | - | Private key for the Tor hidden service |
| - | WAYBACK_TOR_LOCAL_PORT | 8964 | Local port for the Tor hidden service, also usable behind a reverse proxy; ignored if WAYBACK_LISTEN_ADDR is set |
| - | WAYBACK_TOR_REMOTE_PORTS | 80 | Remote ports for the Tor hidden service, e.g. WAYBACK_TOR_REMOTE_PORTS=80,81 |
| - | WAYBACK_SLOT | - | Pinning service for IPFS pinner mode; see ipfs-pinner |
| - | WAYBACK_APIKEY | - | API key for the pinning service |
| - | WAYBACK_SECRET | - | API secret for the pinning service |

If both the configuration file and environment variables are specified, all of them are read and applied, with environment variables taking precedence for the same item.

Use --print to show the resulting configuration (as a Go struct, with types) without running wayback.

Docker/Podman

docker pull wabarc/wayback
docker run -d wabarc/wayback wayback -d telegram -t YOUR-BOT-TOKEN # without telegram channel
docker run -d wabarc/wayback wayback -d telegram -t YOUR-BOT-TOKEN --chatid YOUR-CHANNEL-USERNAME # with telegram channel
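Configuration can also be passed into the container as environment variables, using the names from the table above. A sketch with placeholder values (adjust flags and values to your own setup):

```shell
docker run -d \
  -e WAYBACK_ENABLE_IA=true \
  -e WAYBACK_ENABLE_IS=true \
  -e WAYBACK_TELEGRAM_TOKEN=YOUR-BOT-TOKEN \
  -e WAYBACK_TELEGRAM_CHANNEL=YOUR-CHANNEL-USERNAME \
  wabarc/wayback wayback -d telegram
```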

1-Click Deploy

Deploy to Render


Documentation

For a comprehensive guide, please refer to the complete documentation.

Contributing

We encourage all contributions to this repository! Open an issue, or open a pull request!

If you're interested in contributing to wayback itself, read our contributing guide to get started.

Note: All interaction here should conform to the Code of Conduct.


Supported Go versions: see .github/workflows/testing.yml


Download Details:

Author: Wabarc
Source Code: https://github.com/wabarc/wayback 
License: GPL-3.0 license


Lawson Wehner

Docker-powered PaaS That Helps You Build, Manage The Lifecycle Of Apps

Dokku

Docker-powered mini-Heroku. The smallest PaaS implementation you've ever seen.

Requirements

A fresh VM running any of the following operating systems:

  • Ubuntu 20.04 / 22.04 x64 - Any currently supported release
  • Debian 10+ x64
  • Arch Linux x64 (experimental)

An SSH keypair that can be used for application deployment. If this exists before installation, it will be automatically imported into dokku. Otherwise, you will need to import the keypair manually after installation using dokku ssh-keys:add.

Installation

To install the latest stable release, run the following commands as a user who has access to sudo:

wget https://dokku.com/install/v0.30.2/bootstrap.sh
sudo DOKKU_TAG=v0.30.2 bash bootstrap.sh

You can then proceed to configure your server domain (via dokku domains:set-global) and user access (via dokku ssh-keys:add) to complete the installation.
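As a sketch of those two post-install steps (the domain and key path are hypothetical placeholders; substitute your own):

```shell
# Set the global domain apps will be served under (placeholder domain).
dokku domains:set-global example.com

# Import an SSH public key for deployment, named "admin" here.
dokku ssh-keys:add admin ~/.ssh/id_rsa.pub
```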

If you wish for a more unattended installation method, see these docs.

Upgrade

View the docs for upgrading from an older version of Dokku.

Documentation

Full documentation - including advanced installation docs - is available online at https://dokku.com/docs/getting-started/installation/.

Support

You can use GitHub Issues, check Troubleshooting in the documentation, or join us on Gliderlabs Slack in the #dokku channel.

Contribution

After checking GitHub Issues, the Troubleshooting Guide, or chatting with us on Gliderlabs Slack in the #dokku channel, feel free to fork and create a Pull Request.

While we may not merge your PR as is, they serve to start conversations and improve the general Dokku experience for all users.


Download Details:

Author: Dokku
Source Code: https://github.com/dokku/dokku 
License: MIT license


Deploy a React App to Heroku

Introduction

When a developer creates an application, the next step is to share it with friends or the public so that everyone can access it. That process of transferring code from a development environment to a hosting platform where it is served to end users is called deployment.

Hosting used to be pretty inefficient before cloud platforms like Heroku came around. It was mainly done through hosting providers that required uploading all static assets (the build files generated by running npm run build) every time you made a change. There was no way to upload static files other than some sort of FTP interface (either local or on the hosting server), which could be pretty tedious and technical.

In this guide, we'll take a look at how to deploy a React application to Heroku using the CLI (Command Line Interface) via Heroku Git. Also, we will take a look at how to redeploy code when we make some changes to our application.

What Is Heroku and Why Use It?

Heroku is a container-based cloud platform that enables developers to easily deploy, manage, and scale modern applications. This allows developers to focus on their core job - creating great apps that delight and engage users. In other words, Heroku increases the developer's productivity by making app deployment, scaling, and management as simple as possible.

There are numerous reasons why we should use Heroku:

  • Supports multiple languages - from the ground up, the Heroku platform supports more than eight languages, including Node, Java, and Python.
  • Supports several databases and data stores - Heroku enables developers to select from a variety of databases and data stores based on the specific requirements of individual applications - PostgreSQL, MySQL, MongoDB, and so on.
  • Less expensive - creating and hosting a static website will save us money in the long run.

Getting Started

In this guide, we will deploy a movies search app, which is a simple React app that searches an API for movies. Before we begin, you should sign up for Heroku if you do not already have an account, as this is where we will deploy our React application. We can go to Heroku.com and sign up by clicking the sign-up button in the upper right corner. The signup pipeline is pretty much the standard one, so you shouldn't have any trouble creating an account on Heroku:

heroku account creation

When you've created a Heroku account, we can proceed to the actual deployment of our app.

Note: Previously, there was an option to deploy via GitHub Integration, but that feature has been revoked due to a security breach. The best way to deploy to Heroku as of now is via Heroku Git, which happens in our CLI (Command Line Interface).

Deployment With Heroku Git

Heroku uses the Git version control system to manage app deployments. It is important to note that we do not need to be Git experts to deploy our React application to Heroku, all we need to know are some fundamentals, which will be covered in this guide.

If you're not confident with Git - don't worry. We'll cover everything you need to know. Otherwise, check out our free course on Git: Git Essentials: Developer's Guide to Git

Before We Start

As the name Heroku Git implies, we will be using Git, which means we need to have Git installed. The same applies to the Heroku CLI. If you don't have those two installed, follow the official installation guides for Git and the Heroku CLI.

After successfully installing them, we can proceed to create an app on Heroku, to which our React application will be deployed later. We can create an application on Heroku in two ways - via the terminal (CLI command) or manually on our Heroku dashboard.

Note: A common misconception is that Git and GitHub are the same things, but they are not! Git is a version control system used by many apps and services, including but not limited to GitHub. Therefore you don’t need to push your code to GitHub, nor have a GitHub account to use Heroku.

How to Create Heroku App Manually

Let’s first see how we can create an app using the Heroku dashboard. The first step is to click the create new app button:

creating an app on heroku

 

This redirects us to a page where we need to fill in the information about the app we want to create:

filling out app information on heroku

Note: Make sure you remember the name of the app you created on Heroku because we will be connecting our local repository to this remote repository soon.

Once this process is completed, we can start deploying our app from a local environment to Heroku. But, before we take a look at how to deploy an app, let's consider an alternative approach to creating a Heroku app - using the Heroku CLI.

How to Create Heroku App via CLI

Alternatively, you can create an app on Heroku using the CLI. Heroku made sure this is as straightforward as possible. The only thing you need to do is to run the following command in your terminal of choice (just make sure to replace <app-name> with the actual name of your app):

$ heroku create -a <app-name>

Note: If you run this command from the app’s root directory, the empty Heroku Git repository is automatically set as a remote for our local repository.

How to Push Code to Heroku

The first step before pushing the code to Heroku will be to position yourself in the root directory of your app (in the terminal). Then use the heroku login command to log into the Heroku dashboard. After that, you need to accept Heroku's terms and conditions and, finally, log in to Heroku using your login credentials:

logging into heroku via cli

You will be returned to the terminal afterward, so you can continue the process of deploying to Heroku. Now, you should initialize the repository:

$ git init

And then register the app we created earlier on Heroku as the remote repository for the local one we initialized in the previous step:

$ heroku git:remote -a <app-name>

Note: Make sure to replace <app-name> with the name of the app we've created on Heroku earlier (e.g. movies-search-app).

Now we can proceed to deploy our application. But, since we need to deploy a React application, we first need to add the React buildpack:

$ heroku buildpacks:set mars/create-react-app

Once that is completed, we can push our code to the remote repository we've created on Heroku: stage the files, commit them, and push:

$ git add .
$ git commit -m "my commit"
$ git push heroku main

Note: Suppose we want to switch our branch from main to development. We can run the following command: git checkout -b development.

Once we have successfully pushed to Heroku, we can open our newly deployed app in our browser:

$ heroku open

deploy and open heroku application

How to Update Our Deployment

The next question you'd probably have is how to redeploy the app after we make changes to it. This works similarly to how it does in any Git-based platform - all we have to do is stage the files, commit, and then push the code to Heroku:

$ git add .
$ git commit -m "added changes"
$ git push heroku main

Heroku automatically picks this change up and applies it to the live application.
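The whole cycle can be tried without a Heroku account by simulating the heroku remote with a local bare repository; a sketch (the repository contents and commit messages are placeholders):

```shell
# Simulate the push-to-deploy cycle with a local bare repo standing
# in for Heroku's remote.
set -e
remote=$(mktemp -d)/heroku.git
git init --bare "$remote" >/dev/null

work=$(mktemp -d)
cd "$work"
git init >/dev/null
git checkout -b main 2>/dev/null
git config user.email you@example.com
git config user.name "You"

# First deploy.
echo "v1" > app.txt
git add .
git commit -m "my commit" >/dev/null
git remote add heroku "$remote"
git push heroku main >/dev/null 2>&1

# Make a change and redeploy.
echo "v2" > app.txt
git add .
git commit -m "added changes" >/dev/null
git push heroku main >/dev/null 2>&1

git ls-remote heroku main
```

With a real Heroku app, the `heroku git:remote -a <app-name>` command from earlier sets up the `heroku` remote for you, and the push triggers a build.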

Conclusion

Heroku can be a fairly useful tool for deploying your React app. In this article, we've taken a look at how to deploy a React application to Heroku using Heroku Git. Additionally, we've gone over some basic Git commands you would need when working with Heroku Git, and, finally, we've discussed how to redeploy an app after you make changes to it.

Original article source at: https://stackabuse.com


Lawrence Lesch

REST API Boilerplate using NodeJS and KOA2, Typescript

Node - Koa - Typescript Project 

The main purpose of this repository is to provide a good project setup and workflow for writing a Node REST API in TypeScript using Koa and an SQL DB.

Koa is a new web framework designed by the team behind Express, which aims to be a smaller, more expressive, and more robust foundation for web applications and APIs. By leveraging generators, Koa allows you to ditch callbacks and greatly improve error handling. Koa does not bundle any middleware within its core, and it provides an elegant suite of methods that make writing servers fast and enjoyable.

Through Github Actions CI, this boilerplate is deployed here! You can try to make requests to the different defined endpoints and see how it works. The following Authorization header will have to be set (already signed with the boilerplate's secret) to pass the JWT middleware:

HEADER (DEMO)

Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjEiLCJuYW1lIjoiSmF2aWVyIEF2aWxlcyIsImVtYWlsIjoiYXZpbGVzbG9wZXouamF2aWVyQGdtYWlsLmNvbSJ9.7oxEVGy4VEtaDQyLiuoDvzdO0AyrNrJ_s9NU3vko5-k

AVAILABLE ENDPOINTS DEMO SWAGGER DOCS DEMO

When running the project locally with watch-server, with the .env file configured the same as the .example.env file, the swagger docs will be served at http://localhost:3000/swagger-html, and the bearer token for authorization should be as follows:

HEADER (LOCALHOST BASED ON DEFAULT SECRET KEY 'your-secret-whatever')

Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjEiLCJuYW1lIjoiSmF2aWVyIEF2aWxlcyIsImVtYWlsIjoiYXZpbGVzbG9wZXouamF2aWVyQGdtYWlsLmNvbSJ9.rgOobROftUYSWphkdNfxoN2cgKiqNXd4Km4oz6Ex4ng
| Method | Resource | Description |
|--------|----------|-------------|
| GET | / | Simple hello world response |
| GET | /users | Returns the collection of users present in the DB |
| GET | /users/:id | Returns the user with the specified id |
| POST | /users | Creates a user in the DB (user object to be included in the request's body) |
| PUT | /users/:id | Updates an already created user in the DB (user object to be included in the request's body) |
| DELETE | /users/:id | Deletes a user from the DB (JWT token user ID must be the same as the user you want to delete) |
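As an illustration, the protected endpoints can be exercised with curl once the server is running locally; this assumes the default port 3000 and uses the localhost bearer token shown above:

```shell
# Token signed with the default secret 'your-secret-whatever' (from above).
TOKEN='eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjEiLCJuYW1lIjoiSmF2aWVyIEF2aWxlcyIsImVtYWlsIjoiYXZpbGVzbG9wZXouamF2aWVyQGdtYWlsLmNvbSJ9.rgOobROftUYSWphkdNfxoN2cgKiqNXd4Km4oz6Ex4ng'

# List users (requires the server to be running via watch-server).
curl -H "Authorization: Bearer $TOKEN" http://localhost:3000/users
```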

Pre-reqs

To build and run this app locally you will need:

Features:

  • Nodemon - server auto-restarts when code changes
  • Koa v2
  • TypeORM (SQL DB) with basic CRUD included
  • Swagger decorator (auto generated swagger docs)
  • Class-validator - Decorator based entities validation
  • Docker-compose ready to go
  • Postman (newman) integration tests
  • Locust load tests
  • Jest unit tests
  • Github actions - CI for building and testing the project
  • Cron jobs prepared

Included middleware:

  • @koa/router
  • koa-bodyparser
  • Winston Logger
  • JWT auth koa-jwt
  • Helmet (security headers)
  • CORS

Getting Started

  • Clone the repository
git clone --depth=1 https://github.com/javieraviles/node-typescript-koa-rest.git <project_name>
  • Install dependencies
cd <project_name>
npm install
  • Run the project directly in TS
npm run watch-server
  • Build and run the project in JS
npm run build
npm run start
  • Run integration or load tests
npm run test:integration:local (newman needed)
npm run test:load (locust needed)
  • Run unit tests
npm run test
  • Run unit tests with coverage
npm run test:coverage
  • Run unit tests on Jest watch mode
npm run test:watch

Docker (optional)

A docker-compose file has been added to the project with a PostgreSQL image (the user, password, and db name already set as the ORM config expects) and an Adminer image (an easy web DB client).

Once you have Docker installed, it is as easy as going to the project folder and running 'docker-compose up'; the PostgreSQL server and the Adminer client will then be running on ports 5432 and 8080 respectively, with all the config you need to start playing around.

If you use Docker natively, the database host to put in the ORM configuration file will be localhost. But if you run Docker on older Windows versions, you will be using Boot2Docker, and your virtual machine will probably use 192.168.99.100 as its network address (if not, the docker-machine ip command will tell you). That means your database host will be that IP, and to access the web DB client you will need to go to http://192.168.99.100:8080.

Setting up the Database - ORM

This API is prepared to work with an SQL database, using TypeORM. In this case we are using PostgreSQL, which is why 'pg' has been included in the package.json. If you were to use a different SQL database, remember to install the corresponding driver.

The ORM configuration and connection to the database can be specified in the file 'ormconfig.json'. In this project, however, the connection is configured directly in the 'server.ts' file, because an environment variable containing the database URL is used to set the connection data. This is prepared for Heroku, which provides a Postgres connection string as an environment variable. Locally, it is mocked with the Docker Postgres instance, as can be seen in ".example.env".

It is important to note that, when serving the project directly from the *.ts files using ts-node, the ORM configuration should point at the *.ts file paths; but once the project is built (transpiled) and run as plain JS, it needs to be changed to find the built .js files:

"entities": [
      "dist/entity/**/*.js"
   ],
   "migrations": [
      "dist/migration/**/*.js"
   ],
   "subscribers": [
      "dist/subscriber/**/*.js"
   ]
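For reference, the ts-node counterpart points at the source files instead; a sketch assuming the conventional src layout of this template:

```json
"entities": [
      "src/entity/**/*.ts"
   ],
   "migrations": [
      "src/migration/**/*.ts"
   ],
   "subscribers": [
      "src/subscriber/**/*.ts"
   ]
```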

NOTE: this is now handled automatically by the NODE_ENV variable too.

Notice that if NODE_ENV is set to development, the ORM config won't be using SSL to connect to the DB. Otherwise it will.

And because Heroku uses self-signed certificates, this bit has been added, please take it out if connecting to a local DB without SSL.

createConnection({
    ...
    extra: {
        ssl: {
            rejectUnauthorized: false // Heroku uses self signed certificates
        }
    }
 })

You can find an implemented CRUD of the user entity in the corresponding controller, controller/user.ts, and its routes in the routes.ts file.

Entities validation

This project uses the library class-validator, a decorator-based entity validation, which is used directly in the entities files as follows:

import { IsEmail, IsNotEmpty, Length } from "class-validator";

export class User {
    @Length(10, 100) // length of string email must be between 10 and 100 characters
    @IsEmail() // the string must comply with an standard email format
    @IsNotEmpty() // the string can't be empty
    email: string;
}

Once the decorators have been set in the entity, you can validate from anywhere as follows:

import { validate } from "class-validator";

const user = new User();
user.email = "avileslopez.javier@gmail"; // should not pass, needs the ending .com to be a valid email

validate(user).then(errors => { // errors is an array of validation errors
    if (errors.length > 0) {
        console.log("validation failed. errors: ", errors); // code will get here, printing an "IsEmail" error
    } else {
        console.log("validation succeed");
    }
});

For further documentation regarding validations see class-validator docs.

Environment variables

Create a .env file (or just rename the .example.env) containing all the env variables you want to set; the dotenv library will take care of setting them. This project uses four variables at the moment:

  • PORT -> port the server will listen on; Heroku sets this env variable automatically
  • NODE_ENV -> environment; the development value sets the logger to debug level and is also important for CI. It also determines whether the ORM connects to the DB over SSL
  • JWT_SECRET -> secret value that JWT tokens are signed with
  • DATABASE_URL -> DB connection data in connection-string format

Getting TypeScript

TypeScript itself is simple to add to any project with npm.

npm install -D typescript

If you're using VS Code then you're good to go! VS Code will detect and use the TypeScript version you have installed in your node_modules folder. For other editors, make sure you have the corresponding TypeScript plugin.

Project Structure

The most obvious difference in a TypeScript + Node project is the folder structure. TypeScript (.ts) files live in your src folder and after compilation are output as JavaScript (.js) in the dist folder.

The full folder structure of this app is explained below:

Note! Make sure you have already built the app using npm run build

| Name | Description |
|------|-------------|
| dist | Contains the distributable (or output) from your TypeScript build. This is the code you ship |
| node_modules | Contains all your npm dependencies |
| src | Contains your source code that will be compiled to the dist dir |
| src/server.ts | Entry point to your KOA app |
| .github/workflows/ci.yml | Github actions CI configuration |
| loadtests/locustfile.py | Locust load tests |
| integrationtests/node-koa-typescript.postman_collection.json | Postman integration test collection |
| .copyStaticAssets.ts | Build script that copies images, fonts, and JS libs to the dist folder |
| package.json | File that contains npm dependencies as well as build scripts |
| docker-compose.yml | Docker PostgreSQL and Adminer images in case you want to load the db from Docker |
| tsconfig.json | Config settings for compiling server code written in TypeScript |
| .eslintrc and .eslintignore | Config settings for ESLint code style checking |
| .example.env | Env variables file example, to be renamed to .env |
| Dockerfile and .dockerignore | The app is dockerized to be deployed from CI in a more standard way; not needed for dev |

Configuring TypeScript compilation

TypeScript uses the file tsconfig.json to adjust project compile options. Let's dissect this project's tsconfig.json, starting with the compilerOptions which details how your project is compiled.

    "compilerOptions": {
        "module": "commonjs",
        "target": "es2017",
        "lib": ["es6"],
        "noImplicitAny": true,
        "strictPropertyInitialization": false,
        "moduleResolution": "node",
        "sourceMap": true,
        "outDir": "dist",
        "baseUrl": ".",
        "experimentalDecorators": true,
        "emitDecoratorMetadata": true,  
        }
    },
| compilerOptions | Description |
|-----------------|-------------|
| "module": "commonjs" | The output module type (in your .js files). Node uses commonjs, so that is what we use |
| "target": "es2017" | The output language level. Node supports ES2017, so we can target that here |
| "lib": ["es6"] | Needed for TypeORM |
| "noImplicitAny": true | Enables a stricter setting which throws errors when something has a default any value |
| "moduleResolution": "node" | TypeScript attempts to mimic Node's module resolution strategy. Read more here |
| "sourceMap": true | We want source maps to be output alongside our JavaScript |
| "outDir": "dist" | Location to output .js files after compilation |
| "baseUrl": "." | Part of configuring module resolution |
| paths: {...} | Part of configuring module resolution |
| "experimentalDecorators": true | Needed for TypeORM. Allows use of @Decorators |
| "emitDecoratorMetadata": true | Needed for TypeORM. Allows use of @Decorators |

The rest of the file defines the TypeScript project context. The project context is essentially the set of options that determine which files are compiled when the compiler is invoked with a specific tsconfig.json. In this case, we use the following to define our project context:

    "include": [
        "src/**/*"
    ]

include takes an array of glob patterns of files to include in the compilation. This project is fairly simple and all of our .ts files live under the src folder. For more complex setups, you can add an exclude array of glob patterns that removes specific files from the set defined by include. There is also a files option, which takes an array of individual file names and overrides both include and exclude.
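As a hypothetical illustration (the globs below are examples, not part of this template), a more complex project context might look like:

```json
{
    "include": ["src/**/*"],
    "exclude": ["src/**/*.spec.ts", "src/experimental/**"]
}
```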

Running the build

All the different build steps are orchestrated via npm scripts. Npm scripts basically allow us to call (and chain) terminal commands via npm. This is nice because most JavaScript tools have easy to use command line utilities allowing us to not need grunt or gulp to manage our builds. If you open package.json, you will see a scripts section with all the different scripts you can call. To call a script, simply run npm run <script-name> from the command line. You'll notice that npm scripts can call each other which makes it easy to compose complex builds out of simple individual build scripts. Below is a list of all the scripts this template has available:

Npm Script - Description
start - Does the same as 'npm run serve'. Can be invoked with npm start
build - Full build. Runs ALL build tasks (build-ts, lint, copy-static-assets)
serve - Runs node on dist/server/server.js, which is the app's entry point
watch-server - Nodemon; the process restarts if it crashes. Continuously watches .ts files and re-compiles to .js
build-ts - Compiles all source .ts files to .js files in the dist folder
lint - Runs ESLint check and fix on project files
copy-static-assets - Calls script that copies JS libs, fonts, and images to the dist directory
test:integration:<env> - Executes the Postman integration test collection using Newman on any env (local or heroku)
test:load - Executes Locust load tests using a specific configuration
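As a hypothetical illustration of how scripts chain (the command details here are a sketch, not necessarily this template's exact definitions), a scripts section composing a full build out of smaller tasks might look like:

```json
{
    "scripts": {
        "start": "npm run serve",
        "serve": "node dist/server/server.js",
        "build": "npm run build-ts && npm run lint && npm run copy-static-assets",
        "build-ts": "tsc",
        "lint": "eslint . --ext .ts --fix",
        "copy-static-assets": "ts-node copyStaticAssets.ts"
    }
}
```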

CI: Github Actions

A GitHub Actions pipeline deploys the application to Heroku and runs tests against it, checking that the deployed application is healthy. The pipeline can be found at /.github/workflows/test.yml. It performs the following:

  • Build the project
    • Install Node
    • Install dependencies
    • Build the project (transpile to JS)
    • Run unit tests
  • Deploy to Heroku
    • Install Docker cli
    • Build the application container
    • Install Heroku cli
    • Login into Heroku
    • Push Docker image to Heroku
    • Trigger release in Heroku
  • Run integration tests
    • Install Node
    • Install Newman
    • Run Postman collection using Newman against deployed app in Heroku
  • Run load tests
    • Install Python
    • Install Locust
    • Run Locust load tests against deployed app in Heroku

ESLint

Since TSLint is now deprecated, ESLint feels like the way to go, as it also supports TypeScript. ESLint is a static code analysis tool for identifying problematic patterns in JavaScript/TypeScript code.

ESLint rules

Like most linters, ESLint has a wide set of configurable rules as well as support for custom rule sets. All rules are configured through .eslintrc. In this project, we are using a fairly basic set of rules with no additional custom rules.

Running ESLint

Like the rest of our build steps, we use npm scripts to invoke ESLint. To run ESLint you can call the main build script or just the ESLint task.

npm run build   // runs full build including ESLint format check
npm run lint    // runs ESLint check + fix

Notice that ESLint is not part of the main watch task. It can be annoying for ESLint to clutter the output window while you're in the middle of writing a function, so I elected to run it only during the full build. If you are interested in seeing ESLint feedback as soon as possible, I strongly recommend the ESLint extension for VS Code.

Register cron jobs

The cron dependency has been added to the project together with its types. A cron.ts file has been created, where a cron job is defined using a cron expression configured in the config.ts file.

import { CronJob } from 'cron';
import { config } from './config';

const cron = new CronJob(config.cronJobExpression, () => {
    console.log('Executing cron job once every hour');
});

export { cron };

From the server.ts, the cron job gets started:

import { cron } from './cron';
// Register cron job to do any action needed
cron.start();

Integrations and load tests

Integration tests are a Postman collection with assertions, which is executed using Newman from the CI (GitHub Actions). It can be found at /integrationtests/node-koa-typescript.postman_collection.json and can be opened and modified in Postman very easily. Feel free to install Newman in your local environment and trigger the npm run test:integration:local command, which uses the local environment file (instead of the heroku dev one) to run your Postman collection faster than using Postman itself.

Load tests are a Locust file with assertions, which is executed from the CI (GitHub Actions). It can be found at /loadtests/locustfile.py; it is written in Python and can be executed locally against any host once Python and Locust are installed on your dev machine.

NOTE: at the end of the load tests, an endpoint is called that removes all created test users.

Logging

Winston is designed to be a simple and universal logging library with support for multiple transports.

A "logger" middleware passing a winstonInstance has been created. Current configuration of the logger can be found in the file "logger.ts". It will log 'error' level to an error.log file and 'debug' or 'info' level (depending on NODE_ENV environment variable, debug if == development) to the console.

// Logger middleware -> use winston as logger (logger.ts with config)
app.use(logger(winston));

Authentication - Security

The idea is to keep the API as clean as possible, therefore auth is done from the client using an auth provider such as Auth0. The client making requests to the API should include the JWT in the Authorization header as "Authorization: Bearer <token>". HS256 is used as the algorithm: the secret is known by both your API and your client and is used to sign the token, so make sure you keep it hidden.

As can be seen in the server.ts file, a JWT middleware has been added, passing the secret from an environment variable. The middleware validates that every request to the routes below it MUST include a valid JWT signed with the same secret, and it automatically sets the payload information in ctx.state.user.

// JWT middleware -> below this line, routes are only reached if JWT token is valid, secret as env variable
app.use(jwt({ secret: config.jwtSecret }));

Go to the website https://jwt.io/ to create JWT tokens for testing/debugging purposes. Select algorithm HS256 and include the generated token in the Authorization header to pass through the jwt middleware.
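To make the HS256 mechanics concrete, here is a minimal sketch in Python (not part of this boilerplate; the payload and secret are illustrative) of how such a token is signed with a shared secret and verified:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses base64url encoding without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_hs256(token: str, secret: str) -> dict:
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("invalid signature")
    padded = body + "=" * (-len(body) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_hs256({"sub": "user-1"}, "my-secret")
print(verify_hs256(token, "my-secret"))  # {'sub': 'user-1'}
```

Any token whose signature was produced with a different secret fails verification, which is exactly what the koa-jwt middleware enforces on every request.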

Custom 401 handling -> if you don't want to expose koa-jwt errors to users:

app.use(function(ctx, next){
  return next().catch((err) => {
    if (err.status === 401) {
      ctx.status = 401;
      ctx.body = 'Protected resource, use Authorization header to get access\n';
    } else {
      throw err;
    }
  });
});

If you want to handle authentication from the API itself, instead of using an auth provider like Auth0, have a look at jsonwebtoken, a library for JSON Web Token signing and verification.

CORS

This boilerplate uses @koa/cors, a simple CORS middleware for koa. If you are not sure what this is about, click here.

// Enable CORS with default options
app.use(cors());

Have a look at Official @koa/cors docs in case you want to specify 'origin' or 'allowMethods' properties.

Helmet

This boilerplate uses koa-helmet, a wrapper for helmet to work with koa. It provides important security headers to make your app more secure by default.

Usage is the same as helmet. Helmet offers 11 security middleware functions (clickjacking protection, DNS prefetch control, Content Security Policy...); everything is enabled by default here.

// Enable helmet with default options
app.use(helmet());

Have a look at Official koa-helmet docs in case you want to customize which security middlewares are enabled.

Dependencies

Dependencies are managed through package.json. In that file you'll find two sections:

dependencies

Package - Description
dotenv - Loads environment variables from .env file.
koa - Node web framework.
koa-bodyparser - A body parser for koa.
koa-jwt - Middleware to validate JWT tokens.
@koa/router - Router middleware for koa.
koa-helmet - Wrapper for helmet; important security headers to make the app more secure.
@koa/cors - Cross-Origin Resource Sharing (CORS) for koa.
pg - PostgreSQL driver, needed for the ORM.
reflect-metadata - Used by TypeORM to implement decorators.
typeorm - A very cool SQL ORM.
winston - Logging library.
class-validator - Decorator-based entity validation.
koa-swagger-decorator - Uses decorators to automatically generate swagger docs for koa-router.
cron - Register cron jobs in node.

devDependencies

Package - Description
@types - Dependencies in this folder are .d.ts files used to provide types
nodemon - Utility that automatically restarts the node process when it crashes
ts-node - Enables directly running TS files. Used to run copy-static-assets.ts
eslint - Linter for JavaScript/TypeScript files
typescript - JavaScript compiler/type checker that boosts JavaScript productivity
shelljs - Portable Unix shell commands for Node.js

To install or update these dependencies you can use npm install or npm update.

Changelog

1.8.0

  • Unit tests included using Jest (Thanks to @rafapaezbas)
  • Upgrade all dependencies
  • Upgrade to Node 14

1.7.1

  • Upgrading Locust + fixing load tests
  • Improving Logger

1.7.0

  • Migrating TSLint (deprecated already) to ESLint
  • Node version upgraded from 10.x.x to 12.0.0 (LTS)
  • Now CI installs from package-lock.json using npm ci. Beyond guaranteeing that you only get what is in your lock file, it is also much faster (2x-10x!) than npm install when you don't start with a node_modules folder.
  • Included integration tests using Newman for the local env too
  • koa-router deprecated, using new fork from koa team @koa/router
  • Dependencies updated, some @types removed as more and more libraries include their own types now!
  • Typescript to latest

1.6.1

  • Fixing CI
  • Improving integration tests robustness

1.6.0

  • CI migrated from Travis to Github actions
  • cron dependency -> register cron jobs
  • Node app dockerized -> now is directly pushed as a docker image to Heroku from CI, not using any webhook
  • Added postman integration tests, executed from Github actions CI using Newman
  • Added locust load tests, executed from Github actions CI
  • PRs merged: 47, 48 and 49. Thanks to everybody!

1.5.0

  • koa-swagger-decorator -> generate swagger docs with decorators in the endpoints
  • Split routes into protected and unprotected. Hello world + swagger docs are not protected by JWT
  • some dependencies have been updated

1.4.2

  • Fix -> npm run watch-server is now working properly live-reloading changes in the code Issue 39.
  • Fix -> Logging levels were not correctly mapped. Thanks to @atamano for the PR Pull Request 35
  • Some code leftovers removed

1.4.1

  • Fix -> After updating winston to 3.0.0, it was throwing an error when logging errors into file
  • Fix -> Config in config.ts wasn't implementing IConfig interface

1.4.0

  • Dotenv lib updated, no changes needed (they are dropping node4 support)
  • Class-validator lib updated, no changes needed (cool features added like IsPhoneNumber or custom context for decorators)
  • Winston lib updated to 3.0.0, some amendments needed to format the console log. Removed the @types as Winston now supports Typescript natively!
  • Some devDependencies updated as well

1.3.0

  • CORS added
  • Syntax full REST
  • Some error handling improvement

1.2.0

  • Heroku deployment added

1.1.0

  • Added Helmet for security
  • Some async/await bad practices fixed

Download Details:

Author: javieraviles
Source Code: https://github.com/javieraviles/node-typescript-koa-rest 
License: MIT license

#typescript #heroku #docker #jwt #node 

REST API Boilerplate using NodeJS and KOA2, Typescript
Bongani  Ngema

Bongani Ngema

1676405460

Packtpub-crawler: Download Your Daily Free Packt Publishing eBook

Packtpub-crawler

This crawler automates the following steps:

  • access to private account
  • claim the daily free eBook and weekly Newsletter
  • parse title, description and useful information
  • download favorite format .pdf .epub .mobi
  • download source code and book cover
  • upload files to Google Drive, OneDrive or via scp
  • store data on Firebase
  • notify via Gmail, IFTTT, Join or Pushover (on success and errors)
  • schedule daily job on Heroku or with Docker

Default command

# upload pdf to googledrive, store data and notify via email
python script/spider.py -c config/prod.cfg -u googledrive -s firebase -n gmail

Other options

# download all format
python script/spider.py --config config/prod.cfg --all

# download only one format: pdf|epub|mobi
python script/spider.py --config config/prod.cfg --type pdf

# download also additional material: source code (if exists) and book cover
python script/spider.py --config config/prod.cfg -t pdf --extras
# equivalent (default is pdf)
python script/spider.py -c config/prod.cfg -e

# download and then upload to Google Drive (given the download url anyone can download it)
python script/spider.py -c config/prod.cfg -t epub --upload googledrive
python script/spider.py --config config/prod.cfg --all --extras --upload googledrive

# download and then upload to OneDrive (given the download url anyone can download it)
python script/spider.py -c config/prod.cfg -t epub --upload onedrive
python script/spider.py --config config/prod.cfg --all --extras --upload onedrive

# download and notify: gmail|ifttt|join|pushover
python script/spider.py -c config/prod.cfg --notify gmail

# only claim book (no downloads):
python script/spider.py -c config/prod.cfg --notify gmail --claimOnly

Basic setup

Before you start you should

  • Verify that your currently installed version of Python is 2.x with python --version
  • Clone the repository git clone https://github.com/niqdev/packtpub-crawler.git
  • Install all the dependencies pip install -r requirements.txt (see also virtualenv)
  • Create a config file cp config/prod_example.cfg config/prod.cfg
  • Change your Packtpub credentials in the config file
[credential]
credential.email=PACKTPUB_EMAIL
credential.password=PACKTPUB_PASSWORD

Now you should be able to claim and download your first eBook

python script/spider.py --config config/prod.cfg

Google Drive

From the documentation, Google Drive API requires OAuth2.0 for authentication, so to upload files you should:

  • Go to Google APIs Console and create a new Google Drive project named PacktpubDrive
  • On API manager > Overview menu
    • Enable Google Drive API
  • On API manager > Credentials menu
    • In OAuth consent screen tab set PacktpubDrive as the product name shown to users
    • In Credentials tab create credentials of type OAuth client ID and choose Application type Other named PacktpubDriveCredentials
  • Click Download JSON and save the file config/client_secrets.json
  • Change your Google Drive credentials in the config file
[googledrive]
...
googledrive.client_secrets=config/client_secrets.json
googledrive.gmail=GOOGLE_DRIVE@gmail.com

Now you should be able to upload your eBook to Google Drive

python script/spider.py --config config/prod.cfg --upload googledrive

Only the first time, you will be prompted to log in via a browser with JavaScript enabled (no text-based browser) to generate config/auth_token.json. You should also copy and paste the FOLDER_ID into the config; otherwise a new folder with the same name will be created every time.

[googledrive]
...
googledrive.default_folder=packtpub
googledrive.upload_folder=FOLDER_ID

Documentation: OAuth, Quickstart, example and permissions

OneDrive

From the documentation, OneDrive API requires OAuth2.0 for authentication, so to upload files you should:

  • Go to the Microsoft Application Registration Portal.
  • When prompted, sign in with your Microsoft account credentials.
  • Find My applications and click Add an app.
  • Enter PacktpubDrive as the app's name and click Create application.
  • Scroll to the bottom of the page and check the Live SDK support box.
  • Change your OneDrive credentials in the config file
    • Copy your Application Id into the config file to onedrive.client_id
    • Click Generate New Password and copy the password shown into the config file to onedrive.client_secret
    • Click Add Platform and select Web
    • Enter http://localhost:8080/ as the Redirect URL
    • Click Save at the bottom of the page
[onedrive]
...
onedrive.client_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
onedrive.client_secret=XxXxXxXxXxXxXxXxXxXxXxX

Now you should be able to upload your eBook to OneDrive

python script/spider.py --config config/prod.cfg --upload onedrive

Only the first time, you will be prompted to log in via a browser with JavaScript enabled (no text-based browser) to generate config/session.onedrive.pickle.

[onedrive]
...
onedrive.folder=packtpub

Documentation: Registration, Python API

Scp

To upload your eBook via scp on a remote server update the configs

[scp]
scp.host=SCP_HOST
scp.user=SCP_USER
scp.password=SCP_PASSWORD
scp.path=SCP_UPLOAD_PATH

Now you should be able to upload your eBook

python script/spider.py --config config/prod.cfg --upload scp

Note:

  • the destination folder scp.path on the remote server must exist in advance
  • the option --upload scp is incompatible with --store and --notify

Firebase

Create a new Firebase project, copy the database secret from your settings

https://console.firebase.google.com/project/PROJECT_NAME/settings/database

and update the configs

[firebase]
firebase.database_secret=DATABASE_SECRET
firebase.url=https://PROJECT_NAME.firebaseio.com

Now you should be able to store your eBook details on Firebase

python script/spider.py --config config/prod.cfg --upload googledrive --store firebase
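Under the hood, storing to the Firebase Realtime Database boils down to a request against its REST API (writing JSON to <url>/<path>.json?auth=<secret>). A rough sketch of how such a request could be built; the books path and the fields are illustrative, not the crawler's actual schema:

```python
import json

def build_firebase_request(base_url: str, secret: str, book: dict):
    # Realtime Database REST API: write by POSTing JSON to <url>/<path>.json?auth=<secret>
    endpoint = f"{base_url.rstrip('/')}/books.json?auth={secret}"
    return endpoint, json.dumps(book)

endpoint, body = build_firebase_request(
    "https://PROJECT_NAME.firebaseio.com", "DATABASE_SECRET",
    {"title": "Some eBook", "format": "pdf"})
print(endpoint)  # https://PROJECT_NAME.firebaseio.com/books.json?auth=DATABASE_SECRET
```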

Gmail notification

To send a notification via email using Gmail, update the config:

[gmail]
...
gmail.username=EMAIL_USERNAME@gmail.com
gmail.password=EMAIL_PASSWORD
gmail.from=FROM_EMAIL@gmail.com
gmail.to=TO_EMAIL_1@gmail.com,TO_EMAIL_2@gmail.com

Now you should be able to notify your accounts

python script/spider.py --config config/prod.cfg --notify gmail

IFTTT notification

  • Get an account on IFTTT
  • Go to your Maker settings and activate the channel
  • Create a new applet using the Maker service with the trigger "Receive a web request" and the event name "packtpub-crawler"
  • Change your IFTTT key in the config file
[ifttt]
ifttt.event_name=packtpub-crawler
ifttt.key=IFTTT_MAKER_KEY

Now you should be able to trigger the applet

python script/spider.py --config config/prod.cfg --notify ifttt

Value mappings:

  • value1: title
  • value2: description
  • value3: landing page URL
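These mappings correspond to the Maker webhook payload. A hedged sketch of the request the crawler would send (the crawler's exact internals may differ):

```python
import json

def build_ifttt_request(event_name: str, key: str,
                        title: str, description: str, url: str):
    # Maker service: POST JSON to /trigger/<event>/with/key/<key>
    endpoint = f"https://maker.ifttt.com/trigger/{event_name}/with/key/{key}"
    payload = {"value1": title, "value2": description, "value3": url}
    return endpoint, json.dumps(payload)

endpoint, body = build_ifttt_request(
    "packtpub-crawler", "IFTTT_MAKER_KEY",
    "Book Title", "Short description", "https://www.packtpub.com/")
```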

Join notification

  • Get the Join Chrome extension and/or App
  • You can find your device ids here
  • (Optional) You can use multiple devices or groups (group.all, group.android, group.chrome, group.windows10, group.phone, group.tablet, group.pc) separated by comma
  • Change your Join credentials in the config file
[join]
join.device_ids=DEVICE_IDS_COMMA_SEPARATED_OR_GROUP_NAME
join.api_key=API_KEY

Now you should be able to trigger the event

python script/spider.py --config config/prod.cfg --notify join

Pushover notification

[pushover]
pushover.user_key=PUSHOVER_USER_KEY
pushover.api_key=PUSHOVER_API_KEY

Heroku

Create a new branch

git checkout -b heroku-scheduler

Update the .gitignore and commit your changes

# remove
config/prod.cfg
config/client_secrets.json
config/auth_token.json

# add
dev/
config/dev.cfg
config/prod_example.cfg

Create, config and deploy the scheduler

heroku login

# create a new app
heroku create APP_NAME --region eu
# or if you already have an existing app
heroku git:remote -a APP_NAME

# deploy your app
git push -u heroku heroku-scheduler:master
heroku ps:scale clock=1

# useful commands
heroku ps
heroku logs --ps clock.1
heroku logs --tail
heroku run bash

Update script/scheduler.py with your own preferences.

More info about Heroku Scheduler, Clock Processes, Add-on and APScheduler

Docker

Build your image

docker build -t niqdev/packtpub-crawler:2.4.0 .

Run manually

docker run \
  --rm \
  --name my-packtpub-crawler \
  niqdev/packtpub-crawler:2.4.0 \
  python script/spider.py --config config/prod.cfg

Run scheduled crawler in background

docker run \
  --detach \
  --name my-packtpub-crawler \
  niqdev/packtpub-crawler:2.4.0

# useful commands
docker exec -i -t my-packtpub-crawler bash
docker logs -f my-packtpub-crawler

Alternatively you can pull from Docker Hub this fork

docker pull kuchy/packtpub-crawler

Cron job

Add this to your crontab to run the job daily at 9 AM:

crontab -e

00 09 * * * cd PATH_TO_PROJECT/packtpub-crawler && /usr/bin/python script/spider.py --config config/prod.cfg >> /tmp/packtpub.log 2>&1
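For reference, the five fields of the schedule above are minute, hour, day-of-month, month, and day-of-week; a quick sketch decoding them:

```python
def describe_cron(expr: str) -> dict:
    # map the five cron fields of a schedule expression to their names
    fields = ["minute", "hour", "day_of_month", "month", "day_of_week"]
    values = expr.split()
    if len(values) != 5:
        raise ValueError("a cron schedule has exactly five fields")
    return dict(zip(fields, values))

print(describe_cron("00 09 * * *"))
# {'minute': '00', 'hour': '09', 'day_of_month': '*', 'month': '*', 'day_of_week': '*'}
```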

Systemd service

Create two files in /etc/systemd/system:

  1. packtpub-crawler.service
[Unit]
Description=run packtpub-crawler

[Service]
User=USER_THAT_SHOULD_RUN_THE_SCRIPT
ExecStart=/usr/bin/python2.7 PATH_TO_PROJECT/packtpub-crawler/script/spider.py -c config/prod.cfg

[Install]
WantedBy=multi-user.target
  2. packtpub-crawler.timer
[Unit]
Description=Runs packtpub-crawler every day at 7 AM

[Timer]
OnBootSec=10min
OnActiveSec=1s
OnCalendar=*-*-* 07:00:00
Unit=packtpub-crawler.service
Persistent=true

[Install]
WantedBy=multi-user.target

Enable the timer with sudo systemctl enable packtpub-crawler.timer. You can test it with sudo systemctl start packtpub-crawler.timer and see the output with sudo journalctl -u packtpub-crawler.service -f.

Newsletter

The script also downloads the free eBooks from the weekly Packtpub newsletter. The URL is generated by a Google Apps Script which parses all the mails. You can get the code here; if you want to see the actual script, please clone the spreadsheet and go to Tools > Script editor....

To use your own source, modify in the config

url.bookFromNewsletter=https://goo.gl/kUciut

The URL should point to a file containing only the URL (no semicolons, HTML, JSON, etc).
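A hedged sketch of what "only the URL" means in practice; validate_newsletter_url is a hypothetical helper for illustration, not part of the crawler:

```python
def validate_newsletter_url(raw: str) -> str:
    # the fetched file must contain a single bare URL and nothing else
    url = raw.strip()
    if not url.startswith(("http://", "https://")):
        raise ValueError("source must start with http(s)://")
    if any(c in url for c in ' ;<>"{}'):
        raise ValueError("source must contain only the URL (no semicolons, HTML, JSON)")
    return url

print(validate_newsletter_url("https://goo.gl/kUciut\n"))  # https://goo.gl/kUciut
```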

You can also clone the spreadsheet to use your own Gmail account. Subscribe to the newsletter (on the bottom of the page) and create a filter to tag your mails accordingly.

Troubleshooting

  • ImportError: No module named paramiko

Install paramiko with sudo -H pip install paramiko --ignore-installed

  • Failed building wheel for cryptography

Install missing dependencies as described here

virtualenv

# install pip + setuptools
curl https://bootstrap.pypa.io/get-pip.py | python -

# upgrade pip
pip install -U pip

# install virtualenv globally 
sudo pip install virtualenv

# create virtualenv
virtualenv env

# activate virtualenv
source env/bin/activate

# verify virtualenv
which python
python --version

# deactivate virtualenv
deactivate

Development (only for spidering)

Run a simple static server with

node dev/server.js

and test the crawler with

python script/spider.py --dev --config config/dev.cfg --all

Disclaimer

This project is just a Proof of Concept and not intended for any illegal usage. I'm not responsible for any damage or abuse, use it at your own risk.

Download FREE eBook every day from www.packtpub.com

Download Details:

Author: Niqdev
Source Code: https://github.com/niqdev/packtpub-crawler 
License: MIT license

#firebase #heroku #docker #onedrive #googledrive 

Packtpub-crawler: Download Your Daily Free Packt Publishing eBook

Heroku to Deploy Golang Application

Introduction to Heroku Cloud

Heroku is a popular platform-as-a-service (PaaS) that allows developers to deploy and run applications on the cloud. It supports multiple programming languages, including Go, making it easy for developers to build and deploy their Go applications. In this article, we will discuss how to use Heroku to deploy Golang application.

Step-by-step Guide to Use Heroku to Deploy Golang Application

To deploy a Golang app on the Heroku cloud platform, follow the steps below. Before you begin, make sure you meet the prerequisites.

Prerequisites for using Heroku to deploy a Golang application

  • Basic knowledge of Golang and Git.
  • Install git in your system if not already installed because Heroku will depend on it for deployment.
  • If your project is unavailable in Git, then commit your application changes in git.

Step 1: Set Up a Go Development Environment

To get started, you need to have a Go development environment set up on your local machine. You can download and install Go from the official website.

Step 2: Initialize a Go Module

Go Modules is a package management system for Go that provides versioning and dependency management. By using Go Modules, developers can easily manage their dependencies and ensure that their applications run smoothly on different systems, without the need to set up a GOPATH environment variable.

To create a Go application with Go Modules, you need to initialize a new Go Modules project. You can do this by running the following command in the terminal:

go mod init <module-name>

Step 3: Create a Basic Golang Application

Next, you can create a simple Go application, such as a basic Hello World program. The code for this program is as follows:

package main

import (
    "fmt"
    "net/http"
    "os"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "Hello World!")
    })
    // Heroku assigns the port to bind to via the PORT environment variable;
    // fall back to 8080 for local development.
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    http.ListenAndServe(":"+port, nil)
}

Now comes the essential part of Golang hosting.

Step 4: Create a Heroku Account

To deploy your application on Heroku, you need to create a Heroku account. Sign-up for a free account on the official website.

Step 5: Verify Heroku Account

To verify your Heroku account, you can follow these steps:
1. Log in to your Heroku account.
2. Go to the “Account Settings” page.
3. Click on the “Verify Account” button.
4. Follow the steps to provide and verify your personal information, including your full name, address, and phone number.
5. Provide payment information to verify your account, which can be a credit card or PayPal account.

Step 6: Install the Heroku CLI

The Heroku CLI (Command Line Interface) allows you to manage and deploy your applications from the terminal. To install the Heroku CLI, follow the instructions below
1. Go to the Heroku CLI download page: Heroku CLI.
2. Select your operating system (Windows, MacOS, or Linux) and follow the instructions to download and install the Heroku CLI.
3. Open a terminal or command prompt window & type the command:

heroku login

4. Enter your Heroku credentials to log in.

Step 7: Create a New Heroku Application

To create a new Heroku application, run the following command in the terminal:

heroku create

Step 8: Deploy Golang App on Heroku

To use Heroku to deploy Golang application, you need to create a Procfile file that specifies the command to run the application. The contents of the Procfile file should be as follows:

web: go run main.go

Next, add the files in your Go application to a Git repository and push the repository to Heroku using the following commands:

git init
git add .
git commit -m "Initial commit"
git push heroku master

Step 9: Launch the Application

After the application is successfully deployed, you can launch it by running the following command:

heroku open

The Go application is now deployed to Heroku and opened via the heroku open command. The application's output, "Hello World!", is displayed in the web browser, and the application's Heroku URL is shown in the browser's address bar.

Your Go application is now running on Heroku. You can access it through the URL printed in the terminal as output of the heroku open command above.

Conclusion

Heroku is a convenient platform for deploying and running Go applications. Its ease of use and support for multiple programming languages make it a popular choice for many developers. Additionally, Heroku Go combination offers a variety of tools and services that make it simple to manage and scale your applications as they grow. Whether you are a beginner or an experienced developer, Heroku is an excellent option for hosting your Go applications.

We hope you found the tutorial to use Heroku to deploy Golang application. For more such valuable lessons, find our Golang tutorials.

Original article source at: https://www.bacancytechnology.com/

#heroku #deploy #golang #application 

Heroku to Deploy Golang Application

How to Deploy Ruby on Rails Application on Heroku

Introduction

For any developer, the most satisfying thing is to make what they have built available to everyone. So, after locally developing and previewing a Rails application on your system, the next step is to put it online so that others can see it. This is called deploying the application. This is where Heroku comes in.

It allows you to deploy your Ruby on Rails application quickly and is popular with learners because of its "effortless" push-to-deploy system. Concisely, Heroku handles pretty much everything for you. Let us check how you can deploy a Ruby on Rails application on Heroku with the following steps.

Steps to Deploy Ruby on Rails Application on Heroku Cloud Platform

To publish your app on the cloud, here are the steps you need to follow. Deploying a Ruby on Rails app to the Heroku platform-as-a-service is not that tricky. This guide will show you how to take your RoR app from a local server to deployment on Heroku.

Local Setup

1. Create a new Heroku account.
2. Install the Heroku CLI on your machine.

$ sudo snap install --classic heroku

3. After installation, the heroku command is now available in your system. Use your Heroku account credentials to log in.

admin1@admin1-Latitude-3510:~$ heroku login
heroku: Press any key to open up the browser to login or q to exit: 

4. Create a new ssh key if one does not exist; otherwise, press Enter to upload the existing ssh key, which is used for pushing the code later.

$ heroku login
heroku: Enter your Heroku credentials Email: schneems@example.com
Password:
Could not find an existing public key.
Would you like to generate one? [Yn]
Generating new SSH public key.
Uploading ssh public key /Users/adam/.ssh/id_rsa.pub

Create a Rails Application

Fire the following commands to create a rails application.

rails new app -d postgresql
cd app
bundle add tailwindcss-rails
rails tailwindcss:install

Disclaimer: here we are using Ruby 2.7.2 and Rails 6.1.7 running on Ubuntu 22.04.1.

Working with Ruby is entertaining, but you can't deploy an application running on SQLite3 to Heroku; PostgreSQL is the de facto standard for databases on Heroku. If you are bringing an existing RoR application, change this line in your Gemfile:

gem 'sqlite3'

To this:

gem 'pg'

Note: PostgreSQL is also the recommended database to use during development. Keeping parity between your development and deployment environments helps prevent subtle bugs from being introduced into the application. Install Postgres locally if it is not yet available on your system.

In the Gemfile, add the rails_12factor gem if you use older Rails versions, to enable static asset serving and logging on Heroku.

gem 'rails_12factor', group: :production

For a new application on a current Rails version, the rails_12factor gem is not needed. If you are upgrading an existing application, you can remove it provided you have the proper configuration in your config/environments/production.rb file:

# config/environments/production.rb
config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?

if ENV["RAILS_LOG_TO_STDOUT"].present?
  logger           = ActiveSupport::Logger.new(STDOUT)
  logger.formatter = config.log_formatter
  config.logger    = ActiveSupport::TaggedLogging.new(logger)
end

Now reinstall your dependencies (to generate a new Gemfile.lock):

$ bundle install

Amend config/database.yml with your data and make sure it uses the postgresql adapter. Change this:

production:
  <<: *default
  database: app_production

To this:

production:
  <<: *default
  adapter: postgresql
  database: app_production

Run the scaffold command to create a Post resource.

$ rails g scaffold posts title:string content:text

Create and migrate the database.

$ rails db:create 
$ rails db:migrate

Change the main page route in routes.rb and start the server:

root "posts#index"
rails s

Push your code changes to Git:

git init
git add .
git commit -m "Deploying Rails application"

Source Code: deploying-rails-app

You can also clone the code. Here’s the source code of the repository: https://github.com/ishag-bac/Deploy

Prerequisites

As we want to deploy Ruby on Rails application on Heroku, we will need the following.

  • Basic knowledge of Ruby/Rails and Git.
  • Ruby 2.5.0+ and Rails 6+ installed on your local machine.
  • A verified Heroku account.
  • Git installed on your system, since Heroku depends on it for deployment.
  • Your application changes committed to a Git repository.

Specify Ruby Version

Rails 6 requires Ruby 2.5.0 or above. By default, a recent version of Ruby is installed on Heroku. However, you can specify the exact version in your Gemfile using the ruby DSL. Depending on the Ruby version the application runs on, it might look like this:

ruby '2.7.2'

The same version of Ruby should be running locally as well. You can check the ruby version by running $ ruby -v.

Deploy Ruby on Rails application on Heroku

After installing Heroku CLI and logging into your Heroku account, ensure you are in the correct directory path containing your application, then follow the instructions below.

Create an application in Heroku using the below command in the terminal.

$ heroku create

Push your code to Heroku on the master branch.

$ git push heroku master

Note: check the default branch name before deployment. If it is master, use git push heroku master; otherwise, use git push heroku main.

Migrate your application's database by running:

$ heroku run rails db:migrate

To seed your database with data, run:

$ heroku run rails db:seed

Get the URL of your application and visit it in the browser:

$ heroku apps:info

Visit Your Application

The deployment of the source code to Heroku is done. Now you can tell Heroku to execute a process type; Heroku does this by running the associated command in a dyno. (A dyno is the basic unit of composition, a container on the Heroku platform.)

Ensure that you have a dyno running the web process type with the command:

$ heroku ps:scale web=1

You can check the state of the app’s dynos; all the running dynos of your application can be listed with the heroku ps command.

Using heroku open, you can open the deployed application in the browser.

View Logs

If the application is not functioning correctly or you run into problems, check the logs. The heroku logs commands give you information about your application.

You can stream the entire log output by running the command with the --tail flag:

$ heroku logs --tail

Troubleshooting

If you push your application and it crashes (heroku ps shows state crashed), check your logs to find out what went wrong. Here are some common issues.

Runtime Dependencies on Development Gems and Test Gems

Check your Bundler groups if a gem is missing after deployment. Heroku builds your application without the development and test groups, so if your app depends on a gem from one of those groups at runtime, move it out of the group. Before deploying your Rails app to Heroku, test that it works locally, then push.
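In practice, that means keeping runtime gems out of the development and test groups in your Gemfile; a sketch (the gem names below are only illustrative):

```ruby
# Available in every environment, including production on Heroku
gem 'pg'

group :development, :test do
  # Heroku skips these groups when building the slug,
  # so nothing the app needs at runtime may live here
  gem 'pry'
  gem 'rspec-rails'
end
```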

Conclusion

We hope you found this guide useful and will try deploying a Ruby on Rails application on Heroku. In case of any queries, feel free to reach out to us; we will be glad to assist you in your technical endeavors. You can find related Ruby on Rails tutorials if that interests you. Share this blog on social media and with your friends, and comment if you have any suggestions. Happy to help!

Original article source at: https://www.bacancytechnology.com/

#ruby #rails #application #heroku 


Best Tutorials on The Web to Migrate From Heroku To AWS

In this tutorial, I tackled two major goals:

  1. give my personal apps a more professional UX
  2. reduce my overall hosting cost by 50%

I have been using the free tier of Heroku to serve up demo apps and create tutorial sandboxes. It's a great service, easy to use and free, but it does come with a lengthy lag time on initial page load (about 7 seconds). That's a looooong time by anyone's standards. With a 7 second load time, according to akamai.com and kissmetrics, more than 25% of users will abandon your page well before your first div even shows up. Rather than simply upgrading to the paid tier of Heroku, I wanted to explore my options and learn some useful skills in the process.

What's more, I also have a hosted blog on Ghost. It's an excellent platform, but it's a bit pricey. Fortunately, they offer their software open source and provide a great tutorial on getting it up and running with Node and MySQL. You simply need somewhere to host it.

By parting ways with my hosted blog and serving up several resources from one server, I can provide a better UX for my personal apps and save a few bucks at the same time. This post organizes some of the best tutorials on the web to get this done quickly and securely.

This requires several different technologies working together to accomplish the goal:

  • EC2: provide cheap, reliable cloud computing power
  • Ubuntu: the operating system that handles running our programs
  • Docker: an isolation layer to provide a consistent execution environment
  • Nginx: handle requests in a robust and secure way
  • Certbot: serve up SSL/HTTPS secured web applications, and in turn, improve SEO (search engine optimization)
  • Ghost: provide a simple blog with GUI and persistence
  • React: allow for fast, composable web applications

Objectives

  • Host personal projects, portfolio site, blog -> cheaply and without loading lag time
  • Get acquainted with Nginx
  • Serve HTTPS encrypted sites
  • Dockerize React

Technologies Used

  • Amazon EC2
  • Ubuntu
  • Nginx
  • React
  • Let's Encrypt and Certbot (for SSL)
  • Docker
  • Ghost Blog Platform

Takeaways

After completing this tutorial, you will be able to:

  • Set up an EC2 instance
  • Set up Nginx
  • Configure your DNS with sub-domains
  • Set up the Ghost blog platform on an EC2 instance
  • Dockerize a static React app
  • Serve a static site
  • Configure SSL with Let's Encrypt and Certbot

The Finances

Current Hosted Solutions (No Lag Time)

  • Blog: Ghost Pro, $19/month (https://ghost.org/pricing)
  • Personal Apps: Heroku Hobby, $7/month per app (https://www.heroku.com/pricing)

Self Hosted Options

  • Blog and Apps: AWS EC2 T2 Micro (1GB memory), ~$10/month (https://aws.amazon.com/ec2/pricing/on-demand)
  • Blog and Apps: Linode (1GB memory), $5/month (https://www.linode.com/pricing/)
  • Blog and Apps: Digital Ocean (1GB memory), $5/month (https://www.digitalocean.com/pricing)

So with a hosted solution, for one blog and one app, I would be paying $26 per month, and that would go up $7/month with each new app. Per year, that's $312 + $84 per additional app. With a little bit of leg work outlined in this post, I am hosting multiple apps and a blog for less than $10/month.

I decided to go with the AWS solution. While it is more expensive, it is a super popular enterprise technology that I want to become more familiar with.

Thanks

A BIG THANKS to all the folks who authored any of the referenced material. Much of this post consists of links to and snippets of resources that proved to work well, along with the slight modifications needed to suit my needs.

Thank you, as well, for reading. Let's get to it!

EC2 setup

Here is how to create a new EC2 instance.

Resource: https://www.nginx.com/blog/setting-up-nginx

All you really need is the above tutorial to be on your way with setting up an EC2 instance and installing Nginx. I stopped after the EC2 creation since Nginx gets installed during the Ghost blog platform setup.

Elastic IP

Resource: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html

Further down the road, you are going to point your DNS (domain name system) at your EC2 instance's public IP address. That means you don't want it to change for any reason (for example, stopping and starting the instance). There are two ways to accomplish this:

  1. activate the default VPC (virtual private cloud) in the AWS account
  2. assign an Elastic IP address

Both options provide a free static IP address. In this tutorial, I went with the Elastic IP to accomplish this goal as it was really straightforward to add to my server after having already set it up.

Follow the steps in the above resource to create an elastic IP address and associate it with your EC2 instance.

SSH Key

Resource: https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-16-04

I followed this tutorial to the 'T'...worked like a charm. You'll set up your own super user with its own SSH key and create a firewall restricting incoming traffic to only allow SSH.

In a minute you'll open up both HTTP and HTTPS for requests.

DNS Configuration

I use Name.com for my DNS hosting because they have a decent UI and are local to Denver (where I reside). I already own petej.org and have been pointing it to a github pages hosted static site. I decided to set up a sub-domain for the blog -- blog.petej.org -- using A records to point to my EC2 instance's public IP address. I created two A records, one to handle the www prefix and another to handle the bare URL:

[Screenshot: the two A records configured in the Name.com DNS dashboard]

Now via the command line, use the dig utility to check to see if the new A record is working. This can be done from your local machine or the EC2 instance:

$ dig A blog.petej.org

; <<>> DiG 9.9.7-P3 <<>> A blog.petej.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44050
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;blog.petej.org.            IN  A

;; ANSWER SECTION:
blog.petej.org.     300 IN  A   35.153.44.46

;; Query time: 76 msec
;; SERVER: 75.75.75.75#53(75.75.75.75)
;; WHEN: Sat Jan 27 10:13:50 MST 2018
;; MSG SIZE  rcvd: 59

Note: The A records take effect nearly instantaneously, but can take up to an hour to resolve any caching from a previous use of this URL. So if you already had your domain name set up and working, this may take a little while.

Nice: domain --> √. Now you need to get your EC2 instance serving up some content!

Ghost Blog Platform

Resource: https://docs.ghost.org/install/ubuntu/

Another great tutorial. I followed it every step of the way and it was golden. There are some steps that we have already covered above, such as the best practices of setting up an Ubuntu instance, so you can skip those. Be sure to start from the Update Packages section (under Server Setup).

Note: Follow this setup exactly in order. My first time around I neglected to set a user for the MySQL database and ended up having to remove Ghost from the machine, reinstall, and start from the beginning.

After stepping through the Ghost install process, you should now have a blog up and running at your domain name - check it out in the browser!

Midway recap

What have you accomplished?

  • Ubuntu server up and running
  • SSH access into our server
  • Ghost platform installed
  • Nginx handling incoming traffic
  • Self hosted blog, up!

So what's next?

You are now going to:

  1. Install git and set up SSH access to your GitHub account
  2. Dockerize a static React app
  3. Set up Docker on the EC2 instance
  4. Configure the Nginx reverse proxy layer to route traffic to your React app
  5. Associate SSL certificates with your blog and react app so they can be served over HTTPS

Onward...

Gotta have git

Install git on the EC2 instance:

$ sudo apt-get install git

Create a new SSH key specifically for GitHub access: https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent

Because you set up a user for the Ubuntu server earlier, the /root directory and your ~ directory (user's home directory) are different. To account for that, on the ssh-add step do this instead:

$ cp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub ~/.ssh/
$ cd ~/.ssh
$ ssh-add
$ cat ~/.ssh/id_rsa.pub

Copy the output (the public key, never the private one) and add it to GitHub as a new SSH key, as detailed in the link below.

Start with step 2 --> https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account

You are all set up to git. Clone and then push a commit to a repo to make sure everything is wired up correctly.

Static React App

Resource: https://medium.com/ai2-blog/dockerizing-a-react-application-3563688a2378

Once you have your React app running locally with Docker, push the image up to Docker Hub:

You will need a Docker Hub account --> https://hub.docker.com

$ docker login
Username:
Password:
$ docker tag <image-name> <username>/<image-name>:<tag-name>
$ docker push <username>/<image-name>

This will take a while. About 5 min. Coffee break...

And we're back. Go ahead and log in to Docker Hub and make sure that your image has been uploaded.

Now back to your EC2 instance. SSH into it.

Install docker:

$ sudo apt install docker.io

Pull down the Docker image locally that you recently pushed up:

$ sudo docker pull <username>/<image-name>

Get the image id and use it to fire up the app:

$ sudo docker images
# Copy the image ID
$ sudo docker run -d -it -p 5000:5000 <image-id>

Now that you have the React app running, let's expose it to the world by setting up the Nginx config.

Nginx setup for React app

Resource: https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-16-04

Note: Instead of using /etc/nginx/sites-available/default like the tutorial suggests, I made one specific for the URL (better practice and more flexible going forward) --> circle-grid.petej.org.conf file, leaving the default file completely alone.

We also need to set up a symlink:

$ sudo ln -s /etc/nginx/sites-available/circle-grid.petej.org.conf /etc/nginx/sites-enabled/

Note: Why the symlink? As you can see if you take a look in /etc/nginx/nginx.conf, only the files in /etc/nginx/sites-enabled are taken into account. The symlink makes the file from sites-available appear in sites-enabled, so Nginx discovers it. If you've used Apache before, you will be familiar with this pattern. You can remove a symlink just like you would remove a file: rm ./path/to/symlink.

More about 'symlinks': http://manpages.ubuntu.com/manpages/xenial/en/man7/symlink.7.html
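The tutorial leaves the contents of the site config up to you; as a minimal sketch, assuming the Dockerized React app from the previous step is listening on port 5000 (the domain is from this example setup), circle-grid.petej.org.conf might look like:

```nginx
# /etc/nginx/sites-available/circle-grid.petej.org.conf
server {
    listen 80;
    server_name circle-grid.petej.org;

    location / {
        # Forward requests to the Dockerized React app
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

After creating the file and the symlink, run sudo nginx -t to validate the config, then reload Nginx.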

Let's Encrypt with Certbot

Resource: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04

Now to be sure that Certbot configured a cron job to auto renew your certificates run this command:

$ ls /etc/cron.d/

If there is a certbot file in there, you are good to go.

If not, follow these steps:

Test the renewal process manually:

$ sudo certbot renew --dry-run

If that is successful, then:

$ nano /etc/cron.d/certbot

Add this line to the file:

0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew

Save it, all done.

You have now configured a task to run every 12 hours that will renew any certs that are within 30 days of expiration.

Conclusion

You should now be able to:

  • Set up an EC2 instance
  • Set up Nginx
  • Configure your DNS with sub-domains
  • Set up a Ghost blog platform
  • Dockerize a React app
  • Serve a static React app
  • Configure SSL --> Let's Encrypt and Certbot

I hope this was a helpful collection of links and tutorials to get you off the ground with a personal app server. Feel free to contact me (pete dot topleft at gmail dot com) with any questions or comments.

Thanks for reading.

Original article source at: https://testdriven.io/

#heroku #aws 


Production Django Deployments on Heroku

We use Heroku to host the TestDriven.io learning platform so that we can focus on application development rather than configuring web servers, installing Linux packages, setting up load balancers, and everything else that goes along with infrastructure management on a traditional server.

This article aims to simplify the process of deploying, maintaining, and scaling a production-grade Django app on Heroku.

We'll also review some tips and tricks for simplifying the deployment process. At the end, you'll find a production checklist for deploying a new app to production.

Heroku

Why Heroku? Like Django, Heroku embraces the "batteries included" philosophy. It's an opinionated environment, but it's also an environment that you don't have to manage -- so you can focus on application development rather than the environment supporting it.

If you use your own infrastructure or an Infrastructure as a Service (IaaS) solution -- like DigitalOcean Droplets, Amazon EC2, Google Compute Engine, to name a few -- you must either hire a sys admin/devops engineer or take on that role yourself. The former costs money while the latter slows down your velocity. Heroku will probably end up costing you more in hosting than an IaaS solution, but you will save money since you don't need to hire someone to administer the infrastructure and you can move faster on the application, which is what matters most at the end of the day.

Tips:

  1. Make sure you're using Heroku with the latest Heroku Stack.
  2. Use either uWSGI or Gunicorn as your production WSGI server. Either is fine. If you don't know why you'd prefer one WSGI server over another, it doesn't really matter. It's not difficult to switch later on either.
  3. Run long-running or CPU-intensive processes, like email delivery or PDF report generation, outside of the web application asynchronously with either Celery or RQ along with the Heroku Redis add-on. For reference, we use Django-RQ.
  4. Run at least two web and background processes for redundancy.
  5. Use SSL.
  6. Follow the Twelve-Factor App methodology.
  7. Add caching.
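Tips 2 through 4 usually come together in the Procfile. A minimal sketch, assuming Gunicorn and Django-RQ (myproject is a placeholder project name):

```
web: gunicorn myproject.wsgi --log-file -
worker: python manage.py rqworker default
```

With that in place, the redundancy from tip 4 comes from scaling each process type: heroku ps:scale web=2 worker=2.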

Database

Tips:

  1. Use a Heroku standard (or higher) tier Postgres database. Review the disk space, memory, and concurrent connection limits for each tier as well as the Concurrency and Database Connections in Django article.
  2. Schedule daily backups of the production database via Heroku PGBackups.
  3. Keep your migrations clean and manageable by squashing or resetting them from time to time.

Continuous Integration and Delivery

The Heroku runtime is both stateless and immutable, which helps enable continuous delivery. On each application deploy, a new virtual machine is constructed, configured, and moved into production.

Because of this, you do not need to worry about:

  1. Using a process manager to stand up your services as Heroku handles this for you via a Dyno Manager.
  2. Configuring a deployment mechanism for updating and restarting the app.

Heroku works with a number of Continuous Integration (CI) services, like Circle and Travis, and they also offer their own CI solution -- Heroku CI.

Tips:

  1. Set up automatic deployments. Manual deployments are error prone due to human error.
  2. Run the Django deployment checklist (manage.py check --deploy) in your production CI build.
  3. At TestDriven.io, we use a form of GitOps where the state of the app is always kept in git and changes to the staging and production environments only happen in CI. Consider using this approach to help speed up development and introduce a stable rollback system.
  4. Deploy regularly, at a scheduled time when developers are available in case something goes wrong.
  5. Use release tags so you know exactly which version of the code is running in production -- i.e., git tag -a "$ENVIRONMENT/${VERSION}".

Static and Media Files

Tips:

  1. Use WhiteNoise for static files and then throw a CDN, like Cloudflare or CloudFront, in front of it.
  2. For user-uploaded media files, use S3 and django-storages.

Environments

Tips:

For staging, use a different Heroku app. Make sure to turn maintenance mode on when it's not in use so that Google's crawlers don't inadvertently come across it.

Testing

Write tests. Tests are a safeguard, so you don't accidentally change the functionality of your application. It's much better to catch a bug locally from your test suite than by a customer in production.

Tips:

Ignore the traditional testing pyramid. Spend half your time writing Django unit tests (with both pytest and Hypothesis). Spend the other half writing browser-based integration and end-to-end tests with Cypress. Compared to Selenium, Cypress tests are much easier to write and maintain. We recommend incorporating Cypress into your everyday TDD workflow. Review Modern Front-End Testing with Cypress for more info on this.

Monitoring and Logging

Monitoring and logging are a crucial part of your app's reliability, making it easier to:

  1. Discover errors at an early stage.
  2. Understand how your app works.
  3. Analyze performance.
  4. Determine if your app is running correctly.

Your logs should always have a timestamp and a log level. They should also be human readable and easy to parse.

On the monitoring side of things, set up alerts to help reduce and preempt downtimes. Set up notifications so you can fix issues and address bottlenecks before your customers start to complain.

As you have seen, Heroku provides a number of services via the add-on system. This system is one of the powerful tools that you get out of the box from Heroku. You have hundreds of services at your disposal that take minutes to configure, many of which are useful for logging, monitoring, and error tracking.

Tips:

  1. Heroku retains only the most recent 1500 lines of consolidated logs, which will just be a couple of seconds of logs. So, you'll need to send logs to a remote logging service, like Logentries, to aggregate all of your logs.
  2. Use Scout for application performance monitoring in order to track down performance issues.
  3. Use Sentry for exception monitoring to get notifications when errors occur in your application.
  4. You can monitor the basics like memory usage and CPU load directly from Heroku's Application Metrics dashboard.
  5. Use Uptime Robot, which does not have a Heroku add-on, to ensure your site is up.

Security

When it comes to security, people are generally the weakest link. Your development team should be aware of some of the more common security vulnerabilities. Security Training for Engineers and Heroku's Security guide are great places to start along with the following OWASP cheat sheets:

  1. Cross-Site Request Forgery (CSRF) Prevention
  2. XSS (Cross Site Scripting) Prevention
  3. DOM based XSS Prevention
  4. Content Security Policy

Tips:

  1. Use Snyk to keep your dependencies up-to-date.
  2. Introduce a throttling mechanism, like Django Ratelimit, to limit the impact of DDoS attacks.
  3. Keep your application's configuration separate from your code to prevent sensitive credentials from getting checked into source control.
  4. Monitor and log suspicious behavior, such as multiple failed login attempts from a particular source and unusual spikes in traffic. Check out the Expedited WAF add-on for real-time security monitoring.
  5. Check your Python code for common security issues with Bandit.
  6. Once deployed, run your site through the automated security checkup at Pony Checkup.
  7. Validate upload file content type and size.

Conclusion

Hopefully this article provided some useful information that will help simplify the process of deploying and maintaining a production Django app on Heroku.

Remember: Web development is complex because of all the moving pieces. You can counter that by:

  1. Breaking problems into small, easily-digestible subproblems. Ideally, you can then translate these subproblems to problems that have already been solved.
  2. Removing pieces altogether by using Django and Heroku -- both of which make it easier to develop and deploy secure, scalable, and maintainable web apps since they embrace stability and a "batteries included" philosophy.

Curious about what the full architecture looks like with Heroku?

[Diagram: the full architecture of a Django app on Heroku]

Once you have Celery and Gunicorn configured, you can focus the majority, if not all, of your time on developing your application -- everything else is an add-on.

Recommended resources:

  1. How to Deploy Software
  2. Thoughts on Web Application Deployment

Production Checklist

Deploying a new Django app to Heroku? Review the following checklist for help. Make sure you document the deployment workflow throughout the entire process.

Before deployment

Frontend:

  1. Spell check Django templates.
  2. Set favicon.
  3. Customize the default error views.
  4. Add a robots.txt file.
  5. Create a sitemap.xml file.
  6. Compress and optimize all images.
  7. Set up Google Analytics.
  8. Configure SSL.
  9. Configure a CDN provider, like Cloudflare, for frontend assets.

Django:

  1. Anonymize the production Django admin URL.
  2. Optionally add the django-cors-headers app to add Cross-Origin Resource Sharing (CORS) headers to responses.
  3. Consider using ATOMIC_REQUESTS.
  4. Configure the following Django settings for production:
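The original checklist's list of settings is not reproduced here. As a hedged sketch (not the article's own list), the production-critical Django settings most Heroku deployments need to review look something like this, with secrets and hosts read from the environment per the Twelve-Factor advice above:

```python
# settings.py (production), a minimal sketch; your project will differ
import os

DEBUG = False

# Never hardcode secrets; read them from the environment
SECRET_KEY = os.environ.get("SECRET_KEY", "")

# e.g. ALLOWED_HOSTS="example.com,www.example.com"
ALLOWED_HOSTS = [h for h in os.environ.get("ALLOWED_HOSTS", "").split(",") if h]

# Heroku terminates TLS at its router and forwards this header
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SECURE_SSL_REDIRECT = True

# Send cookies over HTTPS only
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
```

Running manage.py check --deploy (tip 2 under Continuous Integration) will flag most omissions from this family of settings.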

Continuous Integration:

  1. Set up a CI service.
  2. Run python manage.py check --deploy against the production settings.
  3. Configure any other linters and/or code analysis tools to run.
  4. Test the CI process.
  5. Configure automated deployments.

Heroku:

  1. Ensure the latest Heroku stack and Python version are being used.
  2. Configure Postgres and Redis add-ons.
  3. Set up database backups.
  4. Configure remaining Heroku add-ons -- i.e., Logentries, Scout, Sentry, and SendGrid.
  5. Set environment variables.
  6. Set up at least two web and worker processes for redundancy.

After deployment

Frontend:

  1. Run the Mozilla Observatory, Google PageSpeed, Google Mobile-Friendly, webhint, and Netsparker Security Headers scans.
  2. Use the WAVE tool to test if your page meets the accessibility standards.
  3. Review the Front-End Checklist.
  4. Run a SSL Server Test.
  5. Run automated tests if you have them or manually test your app from the browser.
  6. Verify 301 redirects are configured properly.
  7. Set up Google Tag Manager.
  8. Configure Uptime Robot.

Cheers!

Original article source at: https://testdriven.io/

#django #heroku 


Deploying & Hosting A Machine Learning Model with FastAPI and Heroku

Assume that you're a data scientist. Following a typical machine learning workflow, you'll define the problem statement along with objectives based on business needs. You'll then start finding and cleaning data followed by analyzing the collected data and building and training your model. Once trained, you'll evaluate the results. This process of finding and cleansing data, training the model, and evaluating the results will continue until you're satisfied with the results. You'll then refactor the code and package it up in a module, along with its dependencies, in preparation for testing and deployment.

What happens next? Do you hand the model off to another team to test and deploy the model? Or do you have to handle this yourself? Either way, it's important to understand what happens when a model gets deployed. You may have to deploy the model yourself one day. Or maybe you have a side project that you'd just like to stand up in production and make available to end users.

In this tutorial, we'll look at how to deploy a machine learning model, for predicting stock prices, into production on Heroku as a RESTful API using FastAPI.

Objectives

By the end of this post you should be able to:

  1. Develop a RESTful API with Python and FastAPI
  2. Build a basic machine learning model to predict stock prices
  3. Deploy a FastAPI app to Heroku
  4. Use the Heroku Container Registry for deploying Docker to Heroku

FastAPI

FastAPI is a modern, high-performance, batteries-included Python web framework that's perfect for building RESTful APIs. It can handle both synchronous and asynchronous requests and has built-in support for data validation, JSON serialization, authentication and authorization, and OpenAPI.

Highlights:

  1. Heavily inspired by Flask, it has a lightweight microframework feel with support for Flask-like route decorators.
  2. It takes advantage of Python type hints for parameter declaration which enables data validation (via pydantic) and OpenAPI/Swagger documentation.
  3. Built on top of Starlette, it supports the development of asynchronous APIs.
  4. It's fast. Since async is much more efficient than the traditional synchronous threading model, it can compete with Node and Go with regards to performance.

Review the Features guide from the official docs for more info. It's also encouraged to review Alternatives, Inspiration, and Comparisons, which details how FastAPI compares to other web frameworks and technologies, for context.

Project Setup

Create a project folder called "fastapi-ml":

$ mkdir fastapi-ml
$ cd fastapi-ml

Then, create and activate a new virtual environment:

$ python3.8 -m venv env
$ source env/bin/activate
(env)$

Add two new files: requirements.txt and main.py.

Unlike Django or Flask, FastAPI does not have a built-in development server. So, we'll use Uvicorn, an ASGI server, to serve up FastAPI.

New to ASGI? Read through the excellent Introduction to ASGI: Emergence of an Async Python Web Ecosystem blog post.

Add FastAPI and Uvicorn to the requirements file:

fastapi==0.68.0
uvicorn==0.14.0

Install the dependencies:

(env)$ pip install -r requirements.txt

Then, within main.py, create a new instance of FastAPI and set up a quick test route:

from fastapi import FastAPI


app = FastAPI()


@app.get("/ping")
def pong():
    return {"ping": "pong!"}

Start the app:

(env)$ uvicorn main:app --reload --workers 1 --host 0.0.0.0 --port 8008

So, we defined the following settings for Uvicorn:

  1. --reload enables auto-reload so the server will restart after changes are made to the code base.
  2. --workers 1 provides a single worker process.
  3. --host 0.0.0.0 defines the address to host the server on.
  4. --port 8008 defines the port to host the server on.

main:app tells Uvicorn where it can find the FastAPI ASGI application: within the 'main.py' file, you'll find the ASGI app, app = FastAPI().

Navigate to http://localhost:8008/ping. You should see:

{
  "ping": "pong!"
}

ML Model

The model that we'll deploy uses Prophet to predict stock market prices.

Add the following functions to train the model and generate a prediction to a new file called model.py:

import datetime
from pathlib import Path

import joblib
import pandas as pd
import yfinance as yf
from fbprophet import Prophet

BASE_DIR = Path(__file__).resolve(strict=True).parent
TODAY = datetime.date.today()


def train(ticker="MSFT"):
    # data = yf.download("^GSPC", "2008-01-01", TODAY.strftime("%Y-%m-%d"))
    data = yf.download(ticker, "2020-01-01", TODAY.strftime("%Y-%m-%d"))
    data["Adj Close"].plot(title=f"{ticker} Stock Adjusted Closing Price")

    df_forecast = data.copy()
    df_forecast.reset_index(inplace=True)
    df_forecast["ds"] = df_forecast["Date"]
    df_forecast["y"] = df_forecast["Adj Close"]
    df_forecast = df_forecast[["ds", "y"]]

    model = Prophet()
    model.fit(df_forecast)

    joblib.dump(model, Path(BASE_DIR).joinpath(f"{ticker}.joblib"))


def predict(ticker="MSFT", days=7):
    model_file = Path(BASE_DIR).joinpath(f"{ticker}.joblib")
    if not model_file.exists():
        return False

    model = joblib.load(model_file)

    future = TODAY + datetime.timedelta(days=days)

    dates = pd.date_range(start="2020-01-01", end=future.strftime("%m/%d/%Y"))
    df = pd.DataFrame({"ds": dates})

    forecast = model.predict(df)

    model.plot(forecast).savefig(f"{ticker}_plot.png")
    model.plot_components(forecast).savefig(f"{ticker}_plot_components.png")

    return forecast.tail(days).to_dict("records")


def convert(prediction_list):
    output = {}
    for data in prediction_list:
        date = data["ds"].strftime("%m/%d/%Y")
        output[date] = data["trend"]
    return output

Here, we defined three functions:

  1. train downloads historical stock data with yfinance, creates a new Prophet model, fits the model to the stock data, and then serializes and saves the model as a Joblib file.
  2. predict loads and deserializes the saved model, generates a new forecast, creates images of the forecast plot and forecast components, and returns the days included in the forecast as a list of dicts.
  3. convert takes the list of dicts from predict and outputs a dict of dates and forecasted values (e.g., {"07/02/2020": 200}).

This model was developed by Andrew Clark.
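To make the shape of convert's input and output concrete, here's the function run on a hand-built prediction list (the "ds" and "trend" keys mirror the columns in the rows Prophet's forecast returns):

```python
import datetime


def convert(prediction_list):
    # Map each forecast row's date to its trend value
    output = {}
    for data in prediction_list:
        date = data["ds"].strftime("%m/%d/%Y")
        output[date] = data["trend"]
    return output


# Hand-built sample rows standing in for forecast.tail(days).to_dict("records")
sample = [
    {"ds": datetime.datetime(2021, 8, 12), "trend": 282.99},
    {"ds": datetime.datetime(2021, 8, 13), "trend": 283.31},
]
print(convert(sample))  # {'08/12/2021': 282.99, '08/13/2021': 283.31}
```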

Update the requirements file:

# pystan must be installed before prophet
# you may need to pip install it on its own
# before installing the remaining requirements
# pip install pystan==2.19.1.1

pystan==2.19.1.1

fastapi==0.68.0
uvicorn==0.14.0

fbprophet==0.7.1
joblib==1.0.1
pandas==1.3.1
plotly==5.1.0
yfinance==0.1.63

Install the new dependencies:

(env)$ pip install -r requirements.txt

If you have problems installing the dependencies on your machine, you may want to use Docker instead. For instructions on how to run the application with Docker, review the README on the fastapi-ml repo on GitHub.

To test, open a new Python shell and run the following commands:

(env)$ python

>>> from model import train, predict, convert
>>> train()
>>> prediction_list = predict()
>>> convert(prediction_list)

You should see something similar to:

{
    '08/12/2021': 282.99012951691776,
    '08/13/2021': 283.31354121099446,
    '08/14/2021': 283.63695290507127,
    '08/15/2021': 283.960364599148,
    '08/16/2021': 284.2837762932248,
    '08/17/2021': 284.6071879873016,
    '08/18/2021': 284.93059968137834
}

These are the predicted prices for the next seven days for Microsoft Corporation (MSFT). Take note of the saved MSFT.joblib model along with the two images:

plot

components

Go ahead and train a few more models to work with. For example:

>>> train("GOOG")
>>> train("AAPL")
>>> train("^GSPC")

Exit the shell.

With that, let's wire up our API.

Routes

Add a /predict endpoint by updating main.py like so:

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

from model import convert, predict

app = FastAPI()


# pydantic models


class StockIn(BaseModel):
    ticker: str


class StockOut(StockIn):
    forecast: dict


# routes


@app.get("/ping")
async def pong():
    return {"ping": "pong!"}


@app.post("/predict", response_model=StockOut, status_code=200)
def get_prediction(payload: StockIn):
    ticker = payload.ticker

    prediction_list = predict(ticker)

    if not prediction_list:
        raise HTTPException(status_code=400, detail="Model not found.")

    response_object = {"ticker": ticker, "forecast": convert(prediction_list)}
    return response_object

So, in the new get_prediction view function, we passed in a ticker to our model's predict function and then used the convert function to create the output for the response object. We also took advantage of a pydantic schema to convert the JSON payload to a StockIn object. This provides automatic type validation. The response object uses the StockOut schema to convert the Python dict -- {"ticker": ticker, "forecast": convert(prediction_list)} -- to JSON, which, again, is validated.
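Conceptually, the request-side validation that pydantic handles for us amounts to something like this stdlib-only sketch (a hypothetical stand-in -- in the app, BaseModel does this automatically and FastAPI returns a 422 response on failure):

```python
import json


def parse_stock_in(raw_body: bytes) -> dict:
    # Hand-rolled approximation of what StockIn validation does
    # (hypothetical; FastAPI/pydantic handle this automatically)
    payload = json.loads(raw_body)
    ticker = payload.get("ticker")
    if not isinstance(ticker, str):
        # pydantic would turn this into a 422 Unprocessable Entity response
        raise ValueError("ticker must be a string")
    return {"ticker": ticker}


print(parse_stock_in(b'{"ticker": "MSFT"}'))  # {'ticker': 'MSFT'}
```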

For the web app, let's just output the forecast in JSON. Comment out the following lines in predict:

# model.plot(forecast).savefig(f"{ticker}_plot.png")
# model.plot_components(forecast).savefig(f"{ticker}_plot_components.png")

Full function:

def predict(ticker="MSFT", days=7):
    model_file = Path(BASE_DIR).joinpath(f"{ticker}.joblib")
    if not model_file.exists():
        return False

    model = joblib.load(model_file)

    future = TODAY + datetime.timedelta(days=days)

    dates = pd.date_range(start="2020-01-01", end=future.strftime("%m/%d/%Y"))
    df = pd.DataFrame({"ds": dates})

    forecast = model.predict(df)

    # model.plot(forecast).savefig(f"{ticker}_plot.png")
    # model.plot_components(forecast).savefig(f"{ticker}_plot_components.png")

    return forecast.tail(days).to_dict("records")

Run the app:

(env)$ uvicorn main:app --reload --workers 1 --host 0.0.0.0 --port 8008

Then, in a new terminal window, use curl to test the endpoint:

$ curl \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{"ticker":"MSFT"}' \
  http://localhost:8008/predict

You should see something like:

{
  "ticker":"MSFT",
  "forecast":{
    "08/12/2021": 282.99012951691776,
    "08/13/2021": 283.31354121099446,
    "08/14/2021": 283.63695290507127,
    "08/15/2021": 283.960364599148,
    "08/16/2021": 284.2837762932248,
    "08/17/2021": 284.6071879873016,
    "08/18/2021": 284.93059968137834
  }
}

What happens if the ticker model doesn't exist?

$ curl \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{"ticker":"NONE"}' \
  http://localhost:8008/predict

{
  "detail": "Model not found."
}

Heroku Deployment

Heroku is a Platform as a Service (PaaS) that provides hosting for web applications. It offers an abstracted environment where you don't have to manage the underlying infrastructure, making it easy to deploy and scale web applications. With just a few clicks you can have your app up and running, ready to receive traffic.

Sign up for a Heroku account (if you don’t already have one), and then install the Heroku CLI (if you haven't already done so).

Next, log in to your Heroku account via the CLI:

$ heroku login

You'll be prompted to press any key to open your web browser to complete login.

Create a new app on Heroku:

$ heroku create

You should see something similar to:

Creating app... done, ⬢ tranquil-cliffs-74287
https://tranquil-cliffs-74287.herokuapp.com/ | https://git.heroku.com/tranquil-cliffs-74287.git

Next, we'll use Heroku's Container Registry to deploy the application with Docker. Put simply, with the Container Registry, you can deploy pre-built Docker images to Heroku.

Why Docker? We want to minimize the differences between the production and development environments. This is especially important with this project, since it relies on a number of data science dependencies that have very specific system requirements.

Log in to the Heroku Container Registry to indicate to Heroku that we want to use the Container Runtime:

$ heroku container:login

Add a Dockerfile file to the project root:

FROM python:3.8

WORKDIR /app

RUN apt-get -y update  && apt-get install -y \
  python3-dev \
  apt-utils \
  python-dev \
  build-essential \
&& rm -rf /var/lib/apt/lists/*

RUN pip install --upgrade setuptools
RUN pip install \
    cython==0.29.24 \
    numpy==1.21.1 \
    pandas==1.3.1 \
    pystan==2.19.1.1

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

CMD gunicorn -w 3 -k uvicorn.workers.UvicornWorker main:app --bind 0.0.0.0:$PORT

Here, after pulling the Python 3.8 base image, we installed the appropriate dependencies, copied over the app, and ran Gunicorn, a production-grade WSGI application server, to manage Uvicorn with three worker processes. This config takes advantage of both concurrency (via Uvicorn) and parallelism (via Gunicorn workers).
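If you'd rather keep these options out of the Dockerfile's CMD, Gunicorn can also read them from a Python config file (a hypothetical gunicorn_conf.py -- the tutorial itself passes flags on the command line):

```python
# gunicorn_conf.py -- hypothetical alternative to passing flags in CMD
import os

# Heroku injects PORT at runtime; default to 8008 for local runs
bind = f"0.0.0.0:{os.environ.get('PORT', '8008')}"

# Three Gunicorn worker processes (parallelism), each an async
# Uvicorn worker (concurrency)
workers = 3
worker_class = "uvicorn.workers.UvicornWorker"
```

With this file, the CMD would shrink to gunicorn -c gunicorn_conf.py main:app.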

Add Gunicorn to the requirements.txt file:

# pystan must be installed before prophet
# you may need to pip install it on its own
# before installing the remaining requirements
# pip install pystan==2.19.1.1

pystan==2.19.1.1

fastapi==0.68.0
gunicorn==20.1.0
uvicorn==0.14.0

fbprophet==0.7.1
joblib==1.0.1
pandas==1.3.1
plotly==5.1.0
yfinance==0.1.63

Add a .dockerignore file as well:

__pycache__
env

Build the Docker image and tag it with the following format:

registry.heroku.com/<app>/<process-type>

Make sure to replace <app> with the name of the Heroku app that you just created and <process-type> with web since this will be for a web process.

For example:

$ docker build -t registry.heroku.com/tranquil-cliffs-74287/web .

It will take several minutes to install fbprophet. Be patient. You should see it hang here for some time:

Running setup.py install for fbprophet: started

Once done, you can run the image like so:

$ docker run --name fastapi-ml -e PORT=8008 -p 8008:8008 -d registry.heroku.com/tranquil-cliffs-74287/web:latest

Ensure http://localhost:8008/ping works as expected. Once done, stop and remove the container:

$ docker stop fastapi-ml
$ docker rm fastapi-ml

Push the image to the registry:

$ docker push registry.heroku.com/tranquil-cliffs-74287/web

Release the image:

$ heroku container:release -a tranquil-cliffs-74287 web

This will run the container. You should now be able to view your app. Make sure to test the /predict endpoint:

$ curl \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{"ticker":"MSFT"}' \
  https://<YOUR_HEROKU_APP_NAME>.herokuapp.com/predict

Finally, check out the interactive API documentation that FastAPI automatically generates at https://<YOUR_HEROKU_APP_NAME>.herokuapp.com/docs:

swagger ui

Conclusion

This tutorial looked at how to deploy a machine learning model for predicting stock prices into production on Heroku as a RESTful API using FastAPI.

What's next?

  1. Set up a database to save prediction results
  2. Create a production Dockerfile that uses multistage Docker builds to reduce the size of the production image
  3. Add logging and monitoring
  4. Convert your view functions and the model prediction function into asynchronous functions
  5. Run the prediction as a background task to prevent blocking
  6. Add tests
  7. Store trained models to AWS S3, outside of Heroku's ephemeral filesystem

Check out the following resources for help with the above pieces:

  1. Developing and Testing an Asynchronous API with FastAPI and Pytest
  2. Test-Driven Development with FastAPI and Docker

If you're deploying a non-trivial model, I recommend adding model versioning and support for counterfactual analysis along with model monitoring (model and feature drift, bias detection). Check out the Monitaur platform for help in these areas.

You can find the code in the fastapi-ml repo.

Original article source at: https://testdriven.io/



How to Deploy Django to Heroku With Docker

This article looks at how to deploy a Django app to Heroku with Docker via the Heroku Container Runtime.

Objectives

By the end of this tutorial, you will be able to:

  1. Explain why you may want to use Heroku's Container Runtime to run an app
  2. Dockerize a Django app
  3. Deploy and run a Django app in a Docker container on Heroku
  4. Configure GitLab CI to deploy Docker images to Heroku
  5. Manage static assets with WhiteNoise
  6. Configure Postgres to run on Heroku
  7. Create a production Dockerfile that uses multistage Docker builds
  8. Use the Heroku Container Registry and Build Manifest for deploying Docker to Heroku

Heroku Container Runtime

Along with the traditional Git plus slug compiler deployments (git push heroku master), Heroku also supports Docker-based deployments, with the Heroku Container Runtime.

A container runtime is a program that manages and runs containers. If you'd like to dive deeper into container runtimes, check out A history of low-level Linux container runtimes.

Docker-based Deployments

Docker-based deployments have many advantages over the traditional approach:

  1. No slug limits: Heroku allows a maximum slug size of 500MB for the traditional Git-based deployments. Docker-based deployments, on the other hand, do not have this limit.
  2. Full control over the OS: Rather than being constrained by the packages installed by the Heroku buildpacks, you have full control over the operating system and can install any package you'd like with Docker.
  3. Stronger dev/prod parity: Docker-based builds have stronger parity between development and production since the underlying environments are the same.
  4. Less vendor lock-in: Finally, Docker makes it much easier to switch to a different cloud hosting provider such as AWS or GCP.

In general, Docker-based deployments give you greater flexibility and control over the deployment environment. You can deploy the apps you want within the environment that you want. That said, you're now responsible for security updates. With the traditional Git-based deployments, Heroku is responsible for this. They apply relevant security updates to their Stacks and migrate your app to the new Stacks as necessary. Keep this in mind.

There are currently two ways to deploy apps with Docker to Heroku:

  1. Container Registry: deploy pre-built Docker images to Heroku
  2. Build Manifest: given a Dockerfile, Heroku builds and deploys the Docker image

The major difference between these two is that with the latter approach -- e.g., via the Build Manifest -- you have access to the Pipelines, Review, and Release features. So, if you're converting an app from a Git-based deployment to Docker and are using any of those features then you should use the Build Manifest approach.

Rest assured, we'll look at both approaches in this article.

In either case you will still have access to the Heroku CLI, all of the powerful addons, and the dashboard. All of these features work with the Container Runtime, in other words.

Deployment Type            | Deployment Mechanism | Security Updates (who handles) | Access to Pipelines, Review, Release | Access to CLI, Addons, and Dashboard | Slug size limits
---------------------------|----------------------|--------------------------------|--------------------------------------|--------------------------------------|-----------------
Git + Slug Compiler        | Git Push             | Heroku                         | Yes                                  | Yes                                  | Yes
Docker + Container Runtime | Docker Push          | You                            | No                                   | Yes                                  | No
Docker + Build Manifest    | Git Push             | You                            | Yes                                  | Yes                                  | No

Keep in mind Docker-based deployments are limited to the same constraints that Git-based deployments are. For example, persistent volumes are not supported since the file system is ephemeral and web processes only support HTTP(S) requests. For more on this, review Dockerfile commands and runtime.

Docker vs Heroku Concepts

Docker     | Heroku
-----------|----------
Dockerfile | Buildpack
Image      | Slug
Container  | Dyno

Project Setup

Make a project directory, create and activate a new virtual environment, and install Django:

$ mkdir django-heroku-docker
$ cd django-heroku-docker

$ python3.10 -m venv env
$ source env/bin/activate

(env)$ pip install django==3.2.9

Feel free to swap out virtualenv and Pip for Poetry or Pipenv. For more, review Modern Python Environments.

Next, create a new Django project, apply the migrations, and run the server:

(env)$ django-admin startproject hello_django .
(env)$ python manage.py migrate
(env)$ python manage.py runserver

Navigate to http://localhost:8000/ to view the Django welcome screen. Kill the server and exit from the virtual environment once done.

Docker

Add a Dockerfile to the project root:

# pull official base image
FROM python:3.10-alpine

# set work directory
WORKDIR /app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DEBUG 0

# install psycopg2
RUN apk update \
    && apk add --virtual build-essential gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && pip install psycopg2

# install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt

# copy project
COPY . .

# add and run as non-root user
RUN adduser -D myuser
USER myuser

# run gunicorn
CMD gunicorn hello_django.wsgi:application --bind 0.0.0.0:$PORT

Here, we started with an Alpine-based Docker image for Python 3.10. We then set a working directory along with two environment variables:

  1. PYTHONDONTWRITEBYTECODE: Prevents Python from writing .pyc files to disk
  2. PYTHONUNBUFFERED: Prevents Python from buffering stdout and stderr

Next, we installed system-level dependencies and Python packages, copied over the project files, created and switched to a non-root user (which is recommended by Heroku), and used CMD to run Gunicorn when a container spins up at runtime. Take note of the $PORT variable. Essentially, any web server that runs on the Container Runtime must listen for HTTP traffic at the $PORT environment variable, which is set by Heroku at runtime.

Create a requirements.txt file:

Django==3.2.9
gunicorn==20.1.0

Then add a .dockerignore file:

__pycache__
*.pyc
env/
db.sqlite3

Update the SECRET_KEY, DEBUG, and ALLOWED_HOSTS variables in settings.py:

SECRET_KEY = os.environ.get('SECRET_KEY', default='foo')

DEBUG = int(os.environ.get('DEBUG', default=0))

ALLOWED_HOSTS = ['localhost', '127.0.0.1']

Don't forget the import:

import os

To test locally, build the image and run the container, making sure to pass in the appropriate environment variables:

$ docker build -t web:latest .
$ docker run -d --name django-heroku -e "PORT=8765" -e "DEBUG=1" -p 8007:8765 web:latest

Ensure the app is running at http://localhost:8007/ in your browser. Stop then remove the running container once done:

$ docker stop django-heroku
$ docker rm django-heroku

Add a .gitignore:

__pycache__
*.pyc
env/
db.sqlite3

Next, let's create a quick Django view to easily test the app when debug mode is off.

Add a views.py file to the "hello_django" directory:

from django.http import JsonResponse


def ping(request):
    data = {'ping': 'pong!'}
    return JsonResponse(data)

Next, update urls.py:

from django.contrib import admin
from django.urls import path

from .views import ping


urlpatterns = [
    path('admin/', admin.site.urls),
    path('ping/', ping, name="ping"),
]

Test this again with debug mode off:

$ docker build -t web:latest .
$ docker run -d --name django-heroku -e "PORT=8765" -e "DEBUG=0" -p 8007:8765 web:latest

Verify http://localhost:8007/ping/ works as expected:

{
  "ping": "pong!"
}

Stop then remove the running container once done:

$ docker stop django-heroku
$ docker rm django-heroku

WhiteNoise

If you'd like to use WhiteNoise to manage your static assets, first add the package to the requirements.txt file:

Django==3.2.9
gunicorn==20.1.0
whitenoise==5.3.0

Update the middleware in settings.py like so:

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',  # new
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

Then configure the handling of your staticfiles with STATIC_ROOT:

STATIC_ROOT = BASE_DIR / 'staticfiles'

Finally, add compression and caching support:

STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

Add the collectstatic command to the Dockerfile:

# pull official base image
FROM python:3.10-alpine

# set work directory
WORKDIR /app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DEBUG 0

# install psycopg2
RUN apk update \
    && apk add --virtual build-essential gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && pip install psycopg2

# install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt

# copy project
COPY . .

# collect static files
RUN python manage.py collectstatic --noinput

# add and run as non-root user
RUN adduser -D myuser
USER myuser

# run gunicorn
CMD gunicorn hello_django.wsgi:application --bind 0.0.0.0:$PORT

To test, build the new image and spin up a new container:

$ docker build -t web:latest .
$ docker run -d --name django-heroku -e "PORT=8765" -e "DEBUG=1" -p 8007:8765 web:latest

You should be able to view the static files when you run:

$ docker exec django-heroku ls /app/staticfiles
$ docker exec django-heroku ls /app/staticfiles/admin

Stop then remove the running container again:

$ docker stop django-heroku
$ docker rm django-heroku

Postgres

To get Postgres up and running, we'll use the dj-database-url package to generate the proper database configuration dictionary for the Django settings based on a DATABASE_URL environment variable.

Add the dependency to the requirements file:

Django==3.2.9
dj-database-url==0.5.0
gunicorn==20.1.0
whitenoise==5.3.0

Then, make the following changes to the settings to update the database configuration if the DATABASE_URL is present:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
    }
}

DATABASE_URL = os.environ.get('DATABASE_URL')
db_from_env = dj_database_url.config(default=DATABASE_URL, conn_max_age=500, ssl_require=True)
DATABASES['default'].update(db_from_env)

So, if the DATABASE_URL is not present, SQLite will still be used.
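To see roughly what dj_database_url derives from the URL, here's a stdlib-only sketch (illustrative only -- the real package also handles query options, SSL settings, and other engines, so use it in the app):

```python
from urllib.parse import urlparse


def parse_database_url(url: str) -> dict:
    # Rough approximation of dj_database_url.config() for a Postgres URL
    parsed = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parsed.path.lstrip("/"),
        "USER": parsed.username,
        "PASSWORD": parsed.password,
        "HOST": parsed.hostname,
        "PORT": parsed.port,
    }


config = parse_database_url("postgres://runner:secret@localhost:5432/hello_django")
print(config["NAME"], config["HOST"], config["PORT"])  # hello_django localhost 5432
```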

Add the import to the top as well:

import dj_database_url

We'll test this out in a bit after we spin up a Postgres database on Heroku.

Heroku Setup

Sign up for a Heroku account (if you don't already have one), and then install the Heroku CLI (if you haven't already done so).

Create a new app:

$ heroku create
Creating app... done, ⬢ limitless-atoll-51647
https://limitless-atoll-51647.herokuapp.com/ | https://git.heroku.com/limitless-atoll-51647.git

Add the SECRET_KEY environment variable:

$ heroku config:set SECRET_KEY=SOME_SECRET_VALUE -a limitless-atoll-51647

Change SOME_SECRET_VALUE to a randomly generated string that's at least 50 characters.

Add the above Heroku URL to the list of ALLOWED_HOSTS in hello_django/settings.py like so:

ALLOWED_HOSTS = ['localhost', '127.0.0.1', 'limitless-atoll-51647.herokuapp.com']

Make sure to replace limitless-atoll-51647 in each of the above commands with the name of your app.

Heroku Docker Deployment

At this point, we're ready to start deploying Docker images to Heroku. Did you decide which approach you'd like to take?

  1. Container Registry: deploy pre-built Docker images to Heroku
  2. Build Manifest: given a Dockerfile, Heroku builds and deploys the Docker image

Unsure? Try them both!

Approach #1: Container Registry

Skip this section if you're using the Build Manifest approach.

Again, with this approach, you can deploy pre-built Docker images to Heroku.

Log in to the Heroku Container Registry to indicate to Heroku that we want to use the Container Runtime:

$ heroku container:login

Re-build the Docker image and tag it with the following format:

registry.heroku.com/<app>/<process-type>

Make sure to replace <app> with the name of the Heroku app that you just created and <process-type> with web since this will be for a web process.

For example:

$ docker build -t registry.heroku.com/limitless-atoll-51647/web .

Push the image to the registry:

$ docker push registry.heroku.com/limitless-atoll-51647/web

Release the image:

$ heroku container:release -a limitless-atoll-51647 web

This will run the container. You should be able to view the app at https://APP_NAME.herokuapp.com. It should return a 404.

Try running heroku open -a limitless-atoll-51647 to open the app in your default browser.

Verify https://APP_NAME.herokuapp.com/ping works as well:

{
  "ping": "pong!"
}

You should also be able to view the static files:

$ heroku run ls /app/staticfiles -a limitless-atoll-51647
$ heroku run ls /app/staticfiles/admin -a limitless-atoll-51647

Make sure to replace limitless-atoll-51647 in each of the above commands with the name of your app.

Jump down to the "Postgres Test" section once done.

Approach #2: Build Manifest

Skip this section if you're using the Container Registry approach.

Again, with the Build Manifest approach, you can have Heroku build and deploy Docker images based on a heroku.yml manifest file.

Set the Stack of your app to container:

$ heroku stack:set container -a limitless-atoll-51647

Add a heroku.yml file to the project root:

build:
  docker:
    web: Dockerfile

Here, we're just telling Heroku which Dockerfile to use for building the image.

Along with build, you can also define the following stages:

  • setup is used to define Heroku addons and configuration variables to create during app provisioning.
  • release is used to define tasks that you'd like to execute during a release.
  • run is used to define which commands to run for the web and worker processes.

Be sure to review the Heroku documentation to learn more about these four stages.

It's worth noting that the gunicorn hello_django.wsgi:application --bind 0.0.0.0:$PORT command could be removed from the Dockerfile and added to the heroku.yml file under the run stage:

build:
  docker:
    web: Dockerfile
run:
  web: gunicorn hello_django.wsgi:application --bind 0.0.0.0:$PORT

Also, be sure to place the 'collectstatic' command inside your Dockerfile. Don't move it to the release stage. For more on this, review this Stack Overflow question.

Next, install the heroku-manifest plugin from the beta CLI channel:

$ heroku update beta
$ heroku plugins:install @heroku-cli/plugin-manifest

With that, initialize a Git repo and create a commit.

Then, add the Heroku remote:

$ heroku git:remote -a limitless-atoll-51647

Push the code up to Heroku to build the image and run the container:

$ git push heroku master

You should be able to view the app at https://APP_NAME.herokuapp.com. It should return a 404.

Try running heroku open -a limitless-atoll-51647 to open the app in your default browser.

Verify https://APP_NAME.herokuapp.com/ping works as well:

{
  "ping": "pong!"
}

You should also be able to view the static files:

$ heroku run ls /app/staticfiles -a limitless-atoll-51647
$ heroku run ls /app/staticfiles/admin -a limitless-atoll-51647

Make sure to replace limitless-atoll-51647 in each of the above commands with the name of your app.

Postgres Test

Create the database:

$ heroku addons:create heroku-postgresql:hobby-dev -a limitless-atoll-51647

This command automatically sets the DATABASE_URL environment variable for the container.

Once the database is up, run the migrations:

$ heroku run python manage.py makemigrations -a limitless-atoll-51647
$ heroku run python manage.py migrate -a limitless-atoll-51647

Then, jump into psql to view the newly created tables:

$ heroku pg:psql -a limitless-atoll-51647

# \dt
                      List of relations
 Schema |            Name            | Type  |     Owner
--------+----------------------------+-------+----------------
 public | auth_group                 | table | siodzhzzcvnwwp
 public | auth_group_permissions     | table | siodzhzzcvnwwp
 public | auth_permission            | table | siodzhzzcvnwwp
 public | auth_user                  | table | siodzhzzcvnwwp
 public | auth_user_groups           | table | siodzhzzcvnwwp
 public | auth_user_user_permissions | table | siodzhzzcvnwwp
 public | django_admin_log           | table | siodzhzzcvnwwp
 public | django_content_type        | table | siodzhzzcvnwwp
 public | django_migrations          | table | siodzhzzcvnwwp
 public | django_session             | table | siodzhzzcvnwwp
(10 rows)

# \q

Again, make sure to replace limitless-atoll-51647 in each of the above commands with the name of your Heroku app.

GitLab CI

Sign up for a GitLab account (if necessary), and then create a new project (again, if necessary).

Retrieve your Heroku auth token:

$ heroku auth:token

Then, save the token as a new variable called HEROKU_AUTH_TOKEN within your project's CI/CD settings: Settings > CI / CD > Variables.

gitlab config

Next, we need to add a GitLab CI/CD config file called .gitlab-ci.yml to the project root. The contents of this file will vary based on the approach used.

Approach #1: Container Registry

Skip this section if you're using the Build Manifest approach.

.gitlab-ci.yml:

image: docker:stable
services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2
  HEROKU_APP_NAME: <APP_NAME>
  HEROKU_REGISTRY_IMAGE: registry.heroku.com/${HEROKU_APP_NAME}/web

stages:
  - build_and_deploy

build_and_deploy:
  stage: build_and_deploy
  script:
    - apk add --no-cache curl
    - docker login -u _ -p $HEROKU_AUTH_TOKEN registry.heroku.com
    - docker pull $HEROKU_REGISTRY_IMAGE || true
    - docker build
      --cache-from $HEROKU_REGISTRY_IMAGE
      --tag $HEROKU_REGISTRY_IMAGE
      --file ./Dockerfile
      "."
    - docker push $HEROKU_REGISTRY_IMAGE
    - chmod +x ./release.sh
    - ./release.sh

release.sh:

#!/bin/sh


IMAGE_ID=$(docker inspect ${HEROKU_REGISTRY_IMAGE} --format="{{.Id}}")
PAYLOAD='{"updates": [{"type": "web", "docker_image": "'"$IMAGE_ID"'"}]}'

curl -n -X PATCH https://api.heroku.com/apps/$HEROKU_APP_NAME/formation \
  -d "${PAYLOAD}" \
  -H "Content-Type: application/json" \
  -H "Accept: application/vnd.heroku+json; version=3.docker-releases" \
  -H "Authorization: Bearer ${HEROKU_AUTH_TOKEN}"

Here, we defined a single build_and_deploy stage where we:

  1. Install cURL
  2. Log in to the Heroku Container Registry
  3. Pull the previously pushed image (if it exists)
  4. Build and tag the new image
  5. Push the image up to the registry
  6. Create a new release via the Heroku API using the image ID within the release.sh script

Make sure to replace <APP_NAME> with your Heroku app's name.
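For clarity, here's the same formation PATCH request that release.sh makes, assembled with Python's stdlib (the app name, image ID, and token below are placeholders):

```python
import json
import urllib.request


def build_release_request(app_name: str, image_id: str, token: str) -> urllib.request.Request:
    # Same payload and headers that release.sh passes to curl
    payload = {"updates": [{"type": "web", "docker_image": image_id}]}
    return urllib.request.Request(
        f"https://api.heroku.com/apps/{app_name}/formation",
        data=json.dumps(payload).encode(),
        method="PATCH",
        headers={
            "Content-Type": "application/json",
            "Accept": "application/vnd.heroku+json; version=3.docker-releases",
            "Authorization": f"Bearer {token}",
        },
    )


req = build_release_request("example-app", "sha256:0123abcd", "example-token")
print(req.get_method(), req.full_url)  # PATCH https://api.heroku.com/apps/example-app/formation
```

Sending it with urllib.request.urlopen(req) would trigger the release, just as the curl call does.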

With that, initialize a Git repo, commit, add the GitLab remote, and push your code up to GitLab to trigger a new pipeline. This will run the build_and_deploy stage as a single job. Once complete, a new release should automatically be created on Heroku.

Approach #2: Build Manifest

Skip this section if you're using the Container Registry approach.

.gitlab-ci.yml:

variables:
  HEROKU_APP_NAME: <APP_NAME>

stages:
  - deploy

deploy:
  stage: deploy
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=$HEROKU_APP_NAME --api-key=$HEROKU_AUTH_TOKEN

Here, we defined a single deploy stage where we:

  1. Install Ruby along with a gem called dpl
  2. Deploy the code to Heroku with dpl

Make sure to replace <APP_NAME> with your Heroku app's name.

Commit, add the GitLab remote, and push your code up to GitLab to trigger a new pipeline. This will run the deploy stage as a single job. Once complete, the code should be deployed to Heroku.

Advanced CI

Rather than just building the Docker image and creating a release on GitLab CI, let's also run the Django tests, Flake8, Black, and isort.

Again, this will vary depending on the approach you used.

Approach #1: Container Registry

Skip this section if you're using the Build Manifest approach.

Update .gitlab-ci.yml like so:

stages:
  - build
  - test
  - deploy

variables:
  IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:latest || true
    - docker build
      --cache-from $IMAGE:latest
      --tag $IMAGE:latest
      --file ./Dockerfile
      "."
    - docker push $IMAGE:latest

test:
  stage: test
  image: $IMAGE:latest
  services:
    - postgres:latest
  variables:
    POSTGRES_DB: test
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: ""
    POSTGRES_HOST_AUTH_METHOD: trust
    DATABASE_URL: postgresql://runner@postgres:5432/test
  script:
    - python manage.py test
    - flake8 hello_django --max-line-length=100
    - black hello_django --check
    - isort hello_django --check --profile black

deploy:
  stage: deploy
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
    HEROKU_APP_NAME: <APP_NAME>
    HEROKU_REGISTRY_IMAGE: registry.heroku.com/${HEROKU_APP_NAME}/web
  script:
    - apk add --no-cache curl
    - docker login -u _ -p $HEROKU_AUTH_TOKEN registry.heroku.com
    - docker pull $HEROKU_REGISTRY_IMAGE || true
    - docker build
      --cache-from $HEROKU_REGISTRY_IMAGE
      --tag $HEROKU_REGISTRY_IMAGE
      --file ./Dockerfile
      "."
    - docker push $HEROKU_REGISTRY_IMAGE
    - chmod +x ./release.sh
    - ./release.sh

Make sure to replace <APP_NAME> with your Heroku app's name.

So, we now have three stages: build, test, and deploy.

In the build stage, we:

  1. Log in to the GitLab Container Registry
  2. Pull the previously pushed image (if it exists)
  3. Build and tag the new image
  4. Push the image up to the GitLab Container Registry

Then, in the test stage we configure Postgres, set the DATABASE_URL environment variable, and then run the Django tests, Flake8, Black, and isort using the image that was built in the previous stage.
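That DATABASE_URL packs the Postgres service credentials into a single string; inside the app, dj-database-url unpacks it into Django's DATABASES setting. A stdlib-only sketch of roughly what it extracts from that exact value:

```python
from urllib.parse import urlparse

# The same value the test job sets in DATABASE_URL.
url = urlparse("postgresql://runner@postgres:5432/test")

config = {
    "ENGINE": "django.db.backends.postgresql",  # implied by the postgresql:// scheme
    "NAME": url.path.lstrip("/"),  # "test"
    "USER": url.username,          # "runner"
    "HOST": url.hostname,          # "postgres" -- the GitLab CI service hostname
    "PORT": url.port,              # 5432
}
print(config)
```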

In the deploy stage, we:

  1. Install cURL
  2. Log in to the Heroku Container Registry
  3. Pull the previously pushed image (if it exists)
  4. Build and tag the new image
  5. Push the image up to the registry
  6. Create a new release via the Heroku API using the image ID within the release.sh script

Add the new dependencies to the requirements file:

# prod
Django==3.2.9
dj-database-url==0.5.0
gunicorn==20.1.0
whitenoise==5.3.0

# dev and test
black==21.11b1
flake8==4.0.1
isort==5.10.1

Before pushing up to GitLab, run the Django tests locally:

$ source env/bin/activate
(env)$ pip install -r requirements.txt
(env)$ python manage.py test

System check identified no issues (0 silenced).

----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK

Ensure Flake8 passes, and then update the source code based on the Black and isort recommendations:

(env)$ flake8 hello_django --max-line-length=100
(env)$ black hello_django
(env)$ isort hello_django --profile black

Commit and push your code yet again. Ensure all stages pass.

Approach #2: Build Manifest

Skip this section if you're using the Container Registry approach.

Update .gitlab-ci.yml like so:

stages:
  - build
  - test
  - deploy

variables:
  IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:latest || true
    - docker build
      --cache-from $IMAGE:latest
      --tag $IMAGE:latest
      --file ./Dockerfile
      "."
    - docker push $IMAGE:latest

test:
  stage: test
  image: $IMAGE:latest
  services:
    - postgres:latest
  variables:
    POSTGRES_DB: test
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: ""
    POSTGRES_HOST_AUTH_METHOD: trust
    DATABASE_URL: postgresql://runner@postgres:5432/test
  script:
    - python manage.py test
    - flake8 hello_django --max-line-length=100
    - black hello_django --check
    - isort hello_django --check --profile black

deploy:
  stage: deploy
  variables:
    HEROKU_APP_NAME: <APP_NAME>
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=$HEROKU_APP_NAME --api-key=$HEROKU_AUTH_TOKEN

Make sure to replace <APP_NAME> with your Heroku app's name.

So, we now have three stages: build, test, and deploy.

In the build stage, we:

  1. Log in to the GitLab Container Registry
  2. Pull the previously pushed image (if it exists)
  3. Build and tag the new image
  4. Push the image up to the GitLab Container Registry

Then, in the test stage we configure Postgres, set the DATABASE_URL environment variable, and then run the Django tests, Flake8, Black, and isort using the image that was built in the previous stage.

In the deploy stage, we:

  1. Install Ruby along with a gem called dpl
  2. Deploy the code to Heroku with dpl

Add the new dependencies to the requirements file:

# prod
Django==3.2.9
dj-database-url==0.5.0
gunicorn==20.1.0
whitenoise==5.3.0

# dev and test
black==21.11b1
flake8==4.0.1
isort==5.10.1

Before pushing up to GitLab, run the Django tests locally:

$ source env/bin/activate
(env)$ pip install -r requirements.txt
(env)$ python manage.py test

System check identified no issues (0 silenced).

----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK

Ensure Flake8 passes, and then update the source code based on the Black and isort recommendations:

(env)$ flake8 hello_django --max-line-length=100
(env)$ black hello_django
(env)$ isort hello_django --profile black

Commit and push your code yet again. Ensure all stages pass.

Multi-stage Docker Build

Finally, update the Dockerfile like so to use a multi-stage build in order to reduce the final image size:

FROM python:3.10-alpine AS build-python
RUN apk update && apk add --virtual build-essential gcc python3-dev musl-dev postgresql-dev
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY ./requirements.txt .
RUN pip install -r requirements.txt

FROM python:3.10-alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DEBUG 0
ENV PATH="/opt/venv/bin:$PATH"
COPY --from=build-python /opt/venv /opt/venv
RUN apk update && apk add --virtual build-deps gcc python3-dev musl-dev postgresql-dev
RUN pip install psycopg2-binary
WORKDIR /app
COPY . .
RUN python manage.py collectstatic --noinput
RUN adduser -D myuser
USER myuser
CMD gunicorn hello_django.wsgi:application --bind 0.0.0.0:$PORT

Next, we need to update the GitLab config to take advantage of Docker layer caching.

Approach #1: Container Registry

Skip this section if you're using the Build Manifest approach.

.gitlab-ci.yml:

stages:
  - build
  - test
  - deploy

variables:
  IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}
  HEROKU_APP_NAME: <APP_NAME>
  HEROKU_REGISTRY_IMAGE: registry.heroku.com/${HEROKU_APP_NAME}/web

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:build-python || true
    - docker pull $IMAGE:production || true
    - docker build
      --target build-python
      --cache-from $IMAGE:build-python
      --tag $IMAGE:build-python
      --file ./Dockerfile
      "."
    - docker build
      --cache-from $IMAGE:production
      --tag $IMAGE:production
      --tag $HEROKU_REGISTRY_IMAGE
      --file ./Dockerfile
      "."
    - docker push $IMAGE:build-python
    - docker push $IMAGE:production

test:
  stage: test
  image: $IMAGE:production
  services:
    - postgres:latest
  variables:
    POSTGRES_DB: test
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: ""
    POSTGRES_HOST_AUTH_METHOD: trust
    DATABASE_URL: postgresql://runner@postgres:5432/test
  script:
    - python manage.py test
    - flake8 hello_django --max-line-length=100
    - black hello_django --check
    - isort hello_django --check --profile black

deploy:
  stage: deploy
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - apk add --no-cache curl
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:build-python || true
    - docker pull $IMAGE:production || true
    - docker build
      --target build-python
      --cache-from $IMAGE:build-python
      --tag $IMAGE:build-python
      --file ./Dockerfile
      "."
    - docker build
      --cache-from $IMAGE:production
      --tag $IMAGE:production
      --tag $HEROKU_REGISTRY_IMAGE
      --file ./Dockerfile
      "."
    - docker push $IMAGE:build-python
    - docker push $IMAGE:production
    - docker login -u _ -p $HEROKU_AUTH_TOKEN registry.heroku.com
    - docker push $HEROKU_REGISTRY_IMAGE
    - chmod +x ./release.sh
    - ./release.sh

Make sure to replace <APP_NAME> with your Heroku app's name.

Review the changes on your own. Then, test it out one last time.

For more on this caching pattern, review the "Multi-stage" section from the Faster CI Builds with Docker Cache article.

Approach #2: Build Manifest

Skip this section if you're using the Container Registry approach.

.gitlab-ci.yml:

stages:
  - build
  - test
  - deploy

variables:
  IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}
  HEROKU_APP_NAME: <APP_NAME>

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:build-python || true
    - docker pull $IMAGE:production || true
    - docker build
      --target build-python
      --cache-from $IMAGE:build-python
      --tag $IMAGE:build-python
      --file ./Dockerfile
      "."
    - docker build
      --cache-from $IMAGE:production
      --tag $IMAGE:production
      --file ./Dockerfile
      "."
    - docker push $IMAGE:build-python
    - docker push $IMAGE:production

test:
  stage: test
  image: $IMAGE:production
  services:
    - postgres:latest
  variables:
    POSTGRES_DB: test
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: ""
    POSTGRES_HOST_AUTH_METHOD: trust
    DATABASE_URL: postgresql://runner@postgres:5432/test
  script:
    - python manage.py test
    - flake8 hello_django --max-line-length=100
    - black hello_django --check
    - isort hello_django --check --profile black

deploy:
  stage: deploy
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=$HEROKU_APP_NAME --api-key=$HEROKU_AUTH_TOKEN

Make sure to replace <APP_NAME> with your Heroku app's name.

Review the changes on your own. Then, test it out one last time.

For more on this caching pattern, review the "Multi-stage" section from the Faster CI Builds with Docker Cache article.

Conclusion

In this article, we walked through two approaches for deploying a Django app to Heroku with Docker -- the Container Registry and Build Manifest.

So, when should you think about using the Heroku Container Runtime over the traditional Git and slug compiler for deployments?

When you need more control over the production deployment environment.

Examples:

  1. Your application and dependencies exceed the 500MB maximum slug limit.
  2. Your application requires packages not installed by the regular Heroku buildpacks.
  3. You want greater assurance that your application will behave the same in development as it does in production.
  4. You really, really enjoy working with Docker.

--

You can find the code in the following repositories on GitLab:

  1. Container Registry Approach - django-heroku-docker
  2. Build Manifest Approach - django-heroku-docker-build-manifest

Best!

Original article source at: https://testdriven.io/


How to Deploy Django to Heroku with Docker
Gordon Matlala

Heroku Alternatives for Python-based Applications

Heroku changed how developers build and deploy software, making building, deploying, and scaling applications easier and faster. They set various standards and methodologies -- namely, the Twelve-Factor App -- for how cloud services should be managed, which are still highly relevant today for microservices-based and cloud-native applications. Unfortunately, starting November 28, 2022, Heroku will discontinue its free tier. This means you'll no longer be able to leverage free dynos, Postgres databases, and Redis instances.

For more on Heroku's discontinuation of its free product tiers, check out Heroku's Next Chapter and Deprecation of Heroku Free Resources.

In this article, you'll learn what the best Heroku alternatives (and their pros and cons) are.

What is Heroku?

Heroku, which was founded in 2007, is a cloud Platform as a Service (PaaS) that provides hosting for web applications. They offer abstracted environments where you don't have to manage the underlying infrastructure, making it easy to manage, deploy, and scale web applications. With just a few clicks you can have your app up and running, ready to receive traffic.

Before Heroku hit the scene, the process of running a web application was quite challenging, mostly reserved for seasoned SysOps professionals rather than developers. Heroku provides an opinionated layer, abstracting away much of the configuration required for a web server. The majority of web applications could (and still can) leverage such an environment, so smaller companies and teams can focus on application development rather than configuring web servers, installing Linux packages, setting up load balancers, and everything else that goes along with infrastructure management on a traditional server.

Heroku's Pros and Cons

Despite Heroku's popularity, it has received quite a lot of criticism throughout the years.

If you're already familiar with Heroku, feel free to skip this section.

Pros

Ease of Use

Heroku is arguably the most user-friendly PaaS platform. Rather than spending days setting up and configuring web servers and the underlying infrastructure, you simply define the commands required to run your web application and Heroku does the rest for you. You can literally have your app up and running in minutes!

Plus, Heroku leverages git for versioning and deploying apps, which makes it easy to deploy and roll back.

Finally, unlike most PaaS platforms, Heroku provides excellent error logs, making debugging relatively easy.

Popularity

For the first five years of its existence, Heroku had few competitors. Their user/developer experience was just so far ahead of everyone else that it took a while for companies to adapt. This, coupled with their vast free tier, meant that the majority of developer-focused tutorials used Heroku for their deployment platform. Even to this day, the vast majority of web development tutorials, books, and courses still leverage Heroku for deployment.

Heroku also has first-class support for some of the most popular languages and runtimes (via buildpacks) like Python, Ruby, Node.js, PHP, Go, Java, Scala, and Clojure. While those are the officially supported languages, you can still bring your own language or custom runtime to the Heroku platform.

Integrations and Add-ons

Often overlooked, Heroku provides access to hundreds of add-on tools and services -- everything from data storage and caching to monitoring and analytics to data and video processing. With a click of a button, you can extend your app by provisioning a third-party cloud service all without having to manually install or configure it.

Scaling

Heroku allows developers to easily scale their apps both vertically and horizontally. Scaling can be achieved via Heroku's dashboard or CLI. Additionally, if you're running more performant dynos you can leverage the free auto-scaling feature, which increases the number of web dynos depending on the current traffic.

Cons

Cost

Heroku is rather expensive compared to other PaaS on the market. While their starting plan is $7 per dyno per month, as your app scales, you're quickly going to have to upgrade to better dynos, which cost quite a lot of money. Due to the price of more performant dynos, Heroku might not be appropriate for large, high-traffic apps.

Compared to AWS EC2, Heroku is roughly five times more expensive.

Keep in mind, though, that Heroku is a PaaS that does a lot of the heavy lifting for you, while EC2 is just a Linux instance that you have to manage yourself.

Lack of Control and Flexibility

Heroku doesn't offer enough control and lacks transparency. By using their service, you're going to be highly dependent on their tech stack and design decisions. Some of their limitations hinder scalability -- e.g., an application can only listen on a single port, functions have a max source code size of 500 MB, and there's no way to fine-tune your database. Heroku is also highly dependent on AWS, which means that if an AWS region is down, your service (hosted in that region) is also going to be down.

Similarly, Heroku is really designed for your run-of-the-mill RESTful APIs. If your app includes heavy computing or you need to tweak the infrastructure to meet your specific needs, Heroku may not be a good fit.

Lack of Regions

Heroku offers two types of runtimes:

  1. Common Runtime - for non-enterprise users
  2. Private Spaces Runtime - for enterprise users

The Common Runtime only supports two regions, US and EU, while the Private Spaces Runtime supports six.

This means that if you're not an enterprise user you'll only be able to host your app in the US (Virginia) or EU region (Dublin, Ireland).

$ heroku regions

ID         Location                 Runtime
─────────  ───────────────────────  ──────────────
eu         Europe                   Common Runtime
us         United States            Common Runtime
dublin     Dublin, Ireland          Private Spaces
frankfurt  Frankfurt, Germany       Private Spaces
oregon     Oregon, United States    Private Spaces
sydney     Sydney, Australia        Private Spaces
tokyo      Tokyo, Japan             Private Spaces
virginia   Virginia, United States  Private Spaces

Lack of New Features

In today's world, development trends change faster than ever. This forces hosting services to follow the trends to attract teams looking for cutting-edge technology. Many of Heroku's competitors, which we'll address here shortly, are advancing and adding new features like serverless, edge computing, etc. Heroku, on the other hand, has embraced stability over feature development. This doesn't mean they aren't adding new features; they are just adding new features much slower than some of their competitors.

If you want to see what's coming next to Heroku, take a look at their roadmap.

Lock In

Once you're running production code on Heroku it's difficult to migrate to a different hosting provider.

Keep in mind that if you move away from a PaaS, you'll have to handle all the things that Heroku handled yourself, so be prepared to make a SysAdmin or DevOps hire or two.

Heroku's Core Features

In this section, we'll look at Heroku's core features so you can understand what to look for as you look for alternatives.

Again, feel free to skip this section if you're already familiar with Heroku's features.

  • Heroku Runtime - Responsible for provisioning and orchestrating dynos, managing and monitoring the lifecycle of your dynos, providing proper network configuration, HTTP routing, log aggregation, and much more.
  • CI/CD system - Easy-to-use CI/CD, which takes care of building, testing, deploying, incremental app updates, and more.
  • git-based deployments - Manages app deployments with git.
  • Data persistence - Fully-managed data services, like Postgres, Redis, and Apache Kafka.
  • Scaling features - Easy-to-use tools that enable developers to scale horizontally and vertically on demand.
  • Logging and app metrics - Logging, monitoring, and application metrics.
  • Collaboration features - Easy collaboration with others. Collaborators can deploy changes to your apps, scale them, and access their data, among other operations.
  • Add-ons - Hundreds of add-on tools and services -- everything from data storage and caching to monitoring and analytics to data and video processing.

When looking for alternatives, you should prioritize the features. You're simply not going to find a 1:1 replacement for Heroku, so be sure to determine which features are "must-haves" vs "nice-to-haves".

For example:

Must-haves

  1. Solid UI
  2. Buildpacks
  3. git-based Deployments
  4. Battle-tested
  5. Simple scaling

Nice-to-haves

  1. Application and Infrastructure Monitoring
  2. Uses AWS
  3. Free tier
  4. Add-ons
  5. CI/CD System
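One way to make that must-have/nice-to-have split concrete is a quick scoring pass over your shortlist. The sketch below is purely illustrative -- the platform names, feature sets, and weights are all made up:

```python
# Hypothetical feature sets and weights -- adjust to your own priorities.
MUST_HAVES = {"buildpacks", "git_deploys", "simple_scaling"}
NICE_TO_HAVES = {"free_tier", "add_ons", "ci_cd"}

def score(features: set) -> int:
    # Must-haves count double so a missing one is hard to offset.
    return 2 * len(features & MUST_HAVES) + len(features & NICE_TO_HAVES)

candidates = {
    "platform-a": {"buildpacks", "git_deploys", "free_tier"},
    "platform-b": {"git_deploys", "simple_scaling", "add_ons", "ci_cd"},
}

ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print(ranked)  # ['platform-b', 'platform-a']
```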

Heroku Alternatives

Finally, in this section, we'll look at the best Heroku alternatives and what their pros and cons are.

DigitalOcean App Platform

App Platform is DigitalOcean's fully managed solution for deploying apps to the cloud. It has integrated CI/CD, which works well with both GitHub and GitLab. It natively supports popular languages and frameworks like Python, Node.js, Django, Go, and PHP. Alternatively, it allows you to deploy apps via Docker.

Other important features:

  • Horizontal and vertical scaling
  • Built-in alerts, monitoring, and insights
  • Zero downtime deployments and rollbacks

The platform's UI/UX is simple and straightforward, providing a similar feel to Heroku.

DigitalOcean App Platform starts at $5/month for 1 CPU and 512 MB of RAM. To learn more about their pricing take a look at the official pricing page.

Pros

  • Easy to use
  • One of the cheapest PaaS
  • Free plan that allows you to host up to 3 static sites
  • Decent regional support (8 regions)
  • SSL protection on hosted apps
  • DDoS mitigation

Cons

  • Relatively new PaaS (established in 2020)
  • Builds can often take up to 15 minutes
  • No support for recurring jobs (like cron)
  • Lack of documentation

Want to learn how to deploy a Django application to DigitalOcean's App Platform? Check out Running Django on DigitalOcean's App Platform.

Render

Render, which launched in 2019, is a great alternative to Heroku. It allows you to host static sites, web services, PostgreSQL databases, and Redis instances for absolutely free. Its extremely simple UI/UX and great git integration allow you to get an app running in minutes. It has native support for Python, Node.js, Ruby, Elixir, Go, and Rust. If none of these work for you, Render can also deploy via a Dockerfile.

Render's free auto-scaling feature will make sure that your app will always have the necessary resources at the right cost. Additionally, everything that's hosted on Render can also get a free TLS certificate.

Refer to their official documentation for more information about their free plans.

Pros

  • Great for beginners
  • Effortless to set up and deploy apps
  • Free tier
  • Budget-friendly compared to Heroku (~50% cheaper)
  • Automatic scaling based on real-time CPU and memory usage
  • Excellent customer support

Cons

  • Relatively new PaaS (established in 2019)
  • Limited regional support (only Oregon, Frankfurt, Ohio, and Singapore)
  • Free tier apps take pretty long to get up and running
  • No buildpacks (take a look at this question)
  • Lacks an add-ons ecosystem

Want to learn how to deploy a Flask application to Render? Check out Deploying a Flask App to Render.

Fly.io

Fly.io is a popular, flexible PaaS. Rather than reselling AWS or GCP services, they host your applications on top of physical dedicated servers that run all over the world. Because of that, they're able to offer cheaper hosting than other PaaS, like Heroku. Their main focus is to deploy apps as close to their customers as possible (you can pick between 22 regions). Fly.io supports three kinds of builders: Dockerfile, buildpacks, or pre-built Docker images.

They also offer scaling and auto-scaling features.

Fly.io takes a different approach to managing your resources compared to other PaaS. It doesn't come with a fancy management dashboard; instead, all the work is done via their CLI named flyctl.

Their free plan includes:

  • Up to 3 shared-cpu-1x 256 MB VMs
  • 3GB persistent volume storage (total)
  • 160GB outbound data transfer

That should be more than enough to run a few small apps to test their platform.

Pros

  • Free plan for small projects
  • Great regional support (22 regions at the time of writing)
  • Great documentation and fully documented API
  • Easy horizontal and vertical scaling

Cons

  • Can only be managed through a CLI (might not be appropriate for beginners)
  • No out-of-the-box GitHub or GitLab integration
  • Different pricing per region

Want to learn how to deploy a Django application on Fly.io? Check out Deploying a Django App to Fly.io.

Google App Engine

Google App Engine (GAE) is a fully managed, serverless platform for developing and hosting web applications at scale. It has a powerful built-in auto-scaling feature, which automatically allocates more/fewer resources based on demand. GAE natively supports applications written in Python, Node.js, Java, Ruby, C#, Go, and PHP. Alternatively, it provides support for other languages via custom runtimes or Dockerfiles.

It has powerful application diagnostics, which you can combine with Cloud Monitoring and Logging to monitor the health and the performance of your app.

Google offers $300 free credits for new customers, which can serve small apps for several years.

Pros

  • $300 free credit
  • Stable and tested, established in 2008
  • Powerful app diagnostics (can be combined with other GCP/Google services)
  • It can scale to zero, which means that you don't pay anything if no one uses your service
  • Great customer support

Cons

  • Pretty expensive
  • Steep learning curve if you're not familiar with GCP
  • Vendor lock-in due to Google's proprietary software
  • Their pricing could be more straightforward

Platform.sh

Platform.sh is a platform-as-a-service built especially for continuous deployment. It allows you to host web applications on the cloud while making your development and testing workflows more productive. It has direct integration with GitHub, which allows developers to instantly deploy from GitHub repositories. It supports modern development languages, like Python, Java, PHP, and Go, as well as a number of different frameworks.

Platform.sh does not offer a free plan. Their developer plan (which isn't suitable for production) starts at $10/month. Their production-ready plans start at $50 monthly.

Pros

  • Great CI/CD and integration with GitHub
  • Your GitHub branches (dev/stage/production) are reflected on Platform.sh
  • Easily scale with auto-scaling
  • Good documentation
  • Excellent customer support

Cons

  • No free tier
  • Gets more and more expensive as your site grows
  • Might not be appropriate for small businesses

AWS Elastic Beanstalk

AWS Elastic Beanstalk (EB) is an easy-to-use service for deploying and scaling web applications. It connects multiple AWS services, like compute instances (EC2), databases (RDS), load balancers (Application Load Balancer), and file storage systems (S3), to name a few. EB allows you to quickly deploy apps written in Python, Go, Java, .Net, Node.js, PHP, and Ruby. It also supports Docker.

Elastic Beanstalk makes app deployment easier by abstracting away the underlying architecture, while still allowing low-level configuration of instances and databases. It integrates well with git and allows you to make incremental deployments. It also supports load balancing and auto-scaling.

The great thing about Elastic Beanstalk is that there's no additional charge for it. You only pay for the resources that your application consumes (EC2 instances, RDS, etc.).

Pros

  • Budget-friendly
  • Great if you're already familiar with AWS
  • Highly customizable, provides a high level of control
  • Automatic scaling and multiple availability zones to maximize your app’s reliability
  • Excellent support

Cons

  • Not appropriate for small projects
  • Relatively difficult to set up and operate compared to other PaaS providers
  • No deployment failure notifications
  • Complicated documentation

Want to learn how to deploy an application to Elastic Beanstalk? Check out our tutorials:

Microsoft Azure App Service

Azure App Service allows you to quickly and easily create enterprise-ready web and mobile apps for any platform or device and deploy them on scalable and reliable cloud infrastructure. It natively supports Python, .NET, .NET Core, Node.js, Java, PHP, and containers. They have built-in CI/CD and zero downtime deployments.

Other important features:

  • Log collection and failed request tracing for tracking and troubleshooting
  • Authentication using Azure Active Directory
  • Monitoring and alerts

If you're a new customer you can get $200 free credit to test Azure.

Pros

  • Integrates well with Visual Studio
  • Azure Autoscale can help you optimize cost
  • Built-in SSL/TLS certificate
  • Easy debugging and analyzing via Azure Monitor
  • Stable, 99.95% uptime

Cons

  • Expensive
  • Not the most intuitive PaaS
  • Steep learning curve if you're not familiar with Azure
  • Complicated documentation

Dokku on a DigitalOcean Droplet

Dokku claims to be the smallest PaaS implementation you've ever seen. It allows you to build and manage the lifecycle of applications from building to scaling. It's basically a mini-Heroku you can self-host on your Linux machine. Dokku is powered by Docker and integrates well with git.

Dokku offers a premium plan called Dokku PRO, which comes with a user-friendly interface and other features. You can learn more about it on their official website.

Dokku's minimal system requirement is 1 GB of memory. This means that you can host it on a DigitalOcean Droplet for $6 per month.

Pros

  • Completely free and open-source
  • Easy to deploy applications
  • Rich command-line interface
  • Variety of plugins

Cons

  • Has to be self-hosted
  • Requires initial configuration
  • Documentation could be improved
  • Scaling is not easy

Want to learn how to deploy a Django application on Dokku? Check out Deploying a Django App to Dokku on a DigitalOcean Droplet.

PythonAnywhere

PythonAnywhere is an online integrated development environment (IDE) and a web hosting service (PaaS) based on the Python programming language. It has out-of-the-box deployment options for Django, web2py, Flask, and Bottle. Compared to other PaaS on the list, PythonAnywhere behaves more like a traditional web server. You have access to its file system and can SSH into the console to view logs and so on.

It offers a free plan, which is great for beginners or people who'd just like to test different Python frameworks. The free plan allows you to host one web app at your_username.pythonanywhere.com. You can also use the free plan to spin up a MySQL instance.

Other relatively cheap paid plans can be found on their pricing page.

Pros

  • Free hosting for one small project
  • Easy to use, basically no learning curve
  • Pre-configured for Django, web2py, Flask, and Bottle
  • One-click free SSL
  • Great customer support

Cons

  • No CI/CD support
  • Only supports Python apps
  • No ASGI support
  • No auto-scaling

Engine Yard

Engine Yard is a PaaS solution allowing developers to plan, build, deploy, and manage applications in the cloud. Engine Yard also provides services for deployment, managing AWS, supporting databases, and microservices container development. Its main focus is Ruby on Rails, but it also supports other languages like Python, PHP, and Node.js.

Engine Yard simplifies app management on the cloud by automating stack updates and security patches to the hosted environment. It’s also possible to scale resources for your apps via application metrics.

Pros

  • Can deploy to any AWS zone
  • Quick and easy deployment
  • Automates database administration
  • Designed to scale
  • Good customer support

Cons

  • Expensive
  • Free trial only for 14 days
  • Python is not the main focus; Ruby on Rails is

Vercel

Vercel is a cloud platform for static sites and serverless functions. It's mostly used for front-end projects, but it also supports Python, Node.js, Ruby, Go, and Docker. Vercel enables developers to host websites and web services that deploy instantly, scale automatically, and require little supervision -- all with no configuration. It also has a beautiful and intuitive UI.

Vercel offers a free plan, which includes:

  • 100 GB bandwidth
  • Built-in CI/CD
  • Automatic HTTPS/SSL
  • Previews for every git push

Pros

Cons

  • Doesn't support many frameworks
  • Python, Go, and Ruby can only be used as serverless functions
  • Does not offer much control

Netlify

Netlify is a cloud-based development platform for web developers and businesses. It allows developers to host static sites and serverless functions. It supports Python, Node.js, Go, PHP, Ruby, Rust, and Swift. It's undoubtedly one of the most used hosting platforms for front-end projects.

Netlify has an intuitive UI and is extremely easy to use because it doesn't require any configuration.

Its free plan includes:

  • 100 GB bandwidth
  • 300 build minutes per month
  • Live site previews
  • Instant rollbacks to any version
  • Deployment of static assets and dynamic serverless functions

Pros

Cons

  • Does not support many frameworks
  • Most of its natively supported languages can only be used as serverless functions
  • Limited control

Railway.app

Railway.app is a lesser-known infrastructure platform that allows you to provision infrastructure, develop with that infrastructure locally, and then deploy it to the cloud. It's made for every language no matter the project size.

Its features include:

  • Auto-scaling
  • Usage metrics
  • Automatic builds
  • Collaboration features

Pros

  • Good for developing and prototyping
  • Good integration with GitHub
  • Templates for pretty much every framework

Cons

  • A relatively new platform
  • Not as popular as the other PaaS solutions in this article
  • It's difficult to find anything Railway.app related due to the company name

Red Hat OpenShift

OpenShift is Red Hat's cloud computing PaaS offering. It's an application platform built on top of Kubernetes in the cloud where application developers and teams can build, test, deploy, and run their applications.

OpenShift has a seamless DevOps workflow, can scale both horizontally and vertically, and can auto-scale.

Pros

  • Stable, released in 2011
  • Strong integration with GitHub and Docker
  • Intuitive UI

Cons

  • Can be pretty expensive
  • Monitoring and troubleshooting could be improved
  • Slow customer support

Appliku

Appliku is a PaaS platform that uses your cloud servers to deploy your apps. You can link your DigitalOcean or AWS account and provision servers through Appliku's dashboard. While their primary focus is on Python-based apps, you can deploy apps built in any language by leveraging Docker. Appliku's pricing is based on the number of managed servers, so you can deploy as many apps as you need. They do offer a free tier.

Pros

  • Built specifically for Python/Django
  • Cost-efficient to run multiple apps
  • Use any cloud provider
  • CI/CD, GitHub and GitLab integration
  • Let's Encrypt integration
  • Easy access to server logs

Cons

  • Newer platform (2019)
  • Not as popular as some of the other platforms

Conclusion

Heroku is a mature, battle-tested, and stable platform. It does a lot of heavy lifting for you and will save you a lot of time and money, especially for small teams. Heroku allows you to focus on your product instead of fiddling with your server's configuration options and hiring a DevOps engineer or SysAdmin.

It may not be the cheapest option, but it's still one of the best PaaS offerings on the market. Because of that, if you're already using Heroku, you should have a strong reason to move away from it.

While there are a number of alternatives on the market, none of them match Heroku's developer experience. At the moment the most promising alternatives to Heroku are DigitalOcean App Platform and Render. The problem with these two platforms is that they are relatively new and not (yet) battle-tested. If you're just looking for a place to host your apps for free, go with Render.

Original article source at: https://testdriven.io/


Lawrence Lesch

Papercups: Open-source Live Customer Chat

Papercups

Papercups is an open-source live customer support web app written in Elixir. We offer a hosted version at app.papercups.io.

You can check out how our chat widget looks and play around with customizing it on our demo page. The chat widget component is also open sourced at github.com/papercups-io/chat-widget.

Watch how easy it is to get set up with our Slack integration 🚀 : slack-setup

One click Heroku deployment

The fastest way to get started is the one-click deploy on Heroku.

Philosophy

We wanted to make a self-hosted customer support tool, like Zendesk and Intercom, for companies that have privacy and security concerns about sending customer data to third-party services.

Features

  • Reply from email - use Papercups to answer support tickets via email
  • Reply from SMS - forward Twilio conversations and respond to SMS requests from Papercups
  • Custom chat widget - a customizable chat widget you can embed on your website to talk to your customers
  • React support - embed the chat widget as a React component, or a simple HTML snippet
  • React Native support - embed the chat widget in your React Native app
  • Flutter support - embed the chat widget in your Flutter app (courtesy of @aguilaair :heart:)
  • Slack integration - connect with Slack, so you can view and reply to messages directly from a Slack channel
  • Mattermost integration - connect with Mattermost, so you can view and reply to messages directly from Mattermost
  • Markdown and emoji support - use markdown and emoji to add character to your messages!
  • Invite your team - send invite links to your teammates to join your account
  • Conversation management - close, assign, and prioritize conversations
  • Built on Elixir - optimized for responsiveness, fault-tolerance, and support for realtime updates

Demo

We set up a simple page that demonstrates how Papercups works.

Try sending us a message to see what the chat experience is like!

Blog

Check out our blog for more updates and learnings

Documentation

Check out our docs at docs.papercups.io

Contributing

We ❤️ contributions big or small. See CONTRIBUTING.md for a guide on how to get started.

⚠️ Maintenance Mode

Papercups is in maintenance mode. This means there won't be any major new features in the near future. We will still accept pull requests and fix major bugs. Read more here

Download Details:

Author: Papercups-io
Source Code: https://github.com/papercups-io/papercups 
License: MIT license


Hermann Frami

Deploy Infinitely Scalable Serverless Apps, APIs, and Sites in Seconds

Up deploys infinitely scalable serverless apps, APIs, and static websites in seconds, so you can get back to working on what makes your product unique.

With Up there's no need to worry about managing or scaling machines, paying for idle servers, worrying about logging infrastructure or alerting. Just deploy your app with $ up and you're done!

Use the free OSS version, or subscribe to Up Pro for a small monthly fee for unlimited use within your company; there is no additional cost per team member or application. Deploy dozens or even hundreds of applications for pennies thanks to AWS Lambda's cost-effective nature.

About

Up focuses on deploying "vanilla" HTTP servers so there's nothing new to learn, just develop with your favorite existing frameworks such as Express, Koa, Django, Golang net/http or others.

Up currently supports Node.js, Golang, Python, Java, Crystal, Clojure, and static sites out of the box. Up is platform-agnostic, supporting AWS Lambda and API Gateway as the first targets. You can think of Up as a self-hosted, Heroku-style user experience for a fraction of the price, with the security, isolation, flexibility, and scalability of AWS.

Check out the documentation for more instructions and links, or try one of the examples, or chat with us in Slack.

OSS Features

Features of the free open-source edition.


Pro Features

Up Pro provides additional features for production-ready applications such as encrypted environment variables, error alerting, unlimited team members, unlimited applications, priority email support, and global deployments for $19.99/mo USD. Visit Subscribing to Up Pro to get started.


Quick Start

Install Up:

$ curl -sf https://up.apex.sh/install | sh

Create an app.js file:

require('http')
  .createServer((req, res) => {
    res.end('Hello World\n')
  })
  .listen(process.env.PORT)

Deploy the app:

$ up

Open it in the browser, or copy the url to your clipboard:

$ up url -o
$ up url -c

Download Details:

Author: Apex
Source Code: https://github.com/apex/up 
License: MIT license


Rust Language

How to Deploy a Rust Web Server to Heroku

Learn how to deploy a Rust web server using Axum, Tokio, and GitHub Actions to Heroku for your projects.

axum is an async web framework from the Tokio project. It is designed to be a very thin layer over hyper and is compatible with the Tower ecosystem, allowing the use of various middleware provided by tower-http and tower-web.

In this post, we will walk through how you can deploy a Rust web server using axum, Tokio, and GitHub Actions to Heroku for your projects.


Setting up a server using axum

axum provides a user-friendly interface to mount routes on a server and pass handler functions.

axum handles listening on TCP sockets for connections and multiplexing HTTP requests to the correct handler, and, as mentioned, it also allows the use of various middleware provided by the Tower ecosystem.

use std::{net::SocketAddr, str::FromStr};

use axum::{
    http::StatusCode,
    response::IntoResponse,
    routing::get,
    Router,
    Server,
};


// running the top level future using tokio main
#[tokio::main]
async fn main() {
    // start the server
    run_server().await;
}
async fn run_server() {
    // Router is provided by Axum which allows mounting various routes and handlers.
    let app = Router::new()
        // `route` takes `/` and a MethodRouter
        .route(
            "/",
            // `get` creates a MethodRouter for the `/` path from `hello_world`
            get(hello_world),
        );

    // create a socket address from the string address
    let addr = SocketAddr::from_str("0.0.0.0:8080").unwrap();
    // start the server on the address
    // Server is a re-export from the hyper::Server
    Server::bind(&addr)
    // start handling the request using this service
        .serve(app.into_make_service())
        // start polling the future
        .await
        .unwrap();
}

// basic handler that responds with a static string
// Handler function is an async function whose return type is anything that impl IntoResponse
async fn hello_world() -> impl IntoResponse {
    // returning a tuple with HTTP status and the body
    (StatusCode::OK, "hello world!")
}

Here, the Router struct provides a route method to add new routes and their respective handlers. In the above example, get is used to create a GET handler for the / route.

hello_world is a handler which returns a tuple with the HTTP status and body. This tuple has an implementation for the IntoResponse trait provided by axum.

The Server struct is a re-export of the hyper::Server. As axum attempts to be a very thin wrapper around hyper, you can expect it to provide performance comparable to hyper.

Handling POST requests

The post function is used to create a POST route on the provided path — as with the get function, post also takes a handler and returns a MethodRouter.

let app = Router::new()
    // `route` takes `/` and a MethodRouter
    .route(
        "/",
        // `post` creates a MethodRouter for the `/` path from `hello_name`
        post(hello_name),
    );

axum provides JSON serializing and deserializing right out of the box. The Json type implements both FromRequest and IntoResponse traits, allowing you to serialize responses and deserialize the request body.

// the input to our `hello_name` handler
// the Deserialize trait is required to deserialize the request bytes into the struct
#[derive(Deserialize)]
struct Request {
    name: String,
}

// the output of our `hello_name` handler
// the Serialize trait is required to serialize the struct into bytes
#[derive(Serialize)]
struct Response {
    greet: String,
}

The Request struct implements the Deserialize trait used by serde_json to deserialize the request body, while the Response struct implements the Serialize trait to serialize the response.

async fn hello_name(
    // this argument tells axum to parse the request body
    // as JSON into a `Request` type
    Json(payload): Json<Request>
) -> impl IntoResponse {
    // insert your application logic here
    let user = Response {
        greet: format!("hello {}", payload.name),
    };
    (StatusCode::CREATED, Json(user))
}

Json is a type provided by axum that internally implements the FromRequest trait and uses the serde and serde_json crates to deserialize the JSON body of the request into the Request struct.

Similar to the GET request handler, the POST handler can also return a tuple with the response status code and response body. Json also implements the IntoResponse trait, allowing it to convert the Response struct into a JSON response.

Extractors

Axum provides extractors as an abstraction to share state across your server and allow handlers to access shared data.

// creating common state
let app_state = Arc::new(Mutex::new(HashMap::<String, ()>::new()));

let app = Router::new()
    // `GET /` goes to `root`
    .route("/", get(root))
    // `POST /hello` goes to `hello_name`
    .route("/hello", post(hello_name))
    // add the shared state to the router
    .layer(Extension(app_state));

Extension wraps the shared state and is responsible for interacting with axum. In the above example, the shared state is wrapped in Arc and Mutex to synchronize the access to the inner state.

async fn hello_name(
    Json(payload): Json<Request>,
    // this will extract the shared state
    Extension(db): Extension<Arc<Mutex<HashMap<String, ()>>>>,
) -> impl IntoResponse {
    let user = Response {
        greet: format!("hello {}", payload.name),
    };

    // we can use the shared state
    let mut db = db.lock().unwrap();
    db.insert(payload.name.clone(), ());
    (StatusCode::CREATED, Json(user))
}

Extension also implements the FromRequest trait, which is called by axum to extract the shared state from the request and pass it to the handler functions.

GitHub Actions

GitHub Actions can be used to test, build, and deploy Rust applications. In this section, we will focus on deploying and testing Rust applications.

# name of the workflow
name: Rust

# run workflow when the condition is met
on:
# run when code is pushed on the `main` branch
  push:
    branches: [ "main" ]
# run when a pull request to the `main` branch
  pull_request:
    branches: [ "main" ]

# env variables
env:
  CARGO_TERM_COLOR: always

# jobs
jobs:
# job name
  build:
  # OS to run the job on (macOS and Windows are also supported)
    runs-on: ubuntu-latest
# steps for the job
    steps:
    # check out the repository code
    - uses: actions/checkout@v3
    # run the build
    - name: Build
    # using cargo to build
      run: cargo build --release

    # for deployment
    - name: make dir
    # create a directory
      run: mkdir app
    # put the app in it
    - name: copy
      run: mv ./target/release/axum-deom ./app/axum


    # heroku deployment
    - uses: akhileshns/heroku-deploy@v3.12.12
      with:
      # key from repository secrets
        heroku_api_key: ${{secrets.HEROKU_API_KEY}}
        # name of the Heroku app
        heroku_app_name: "axum-demo-try2"
        # email from which the app is uploaded
        heroku_email: "anshulgoel151999@gmail.com"

        # app directory
        appdir: "./app"

        # start command
        procfile: "web: ./axum"
        # buildpack is like environment used to run the app
        buildpack: "https://github.com/ph3nx/heroku-binary-buildpack.git"

GitHub Actions provides support for stable versions of Rust by default: Cargo and rustc are preinstalled on all supported operating systems. This workflow runs when code is pushed to the main branch or when a pull request against the main branch is created.

on:
# run when code is pushed on the `main` branch
  push:
    branches: [ "main" ]
# run when a pull request to the `main` branch
  pull_request:
    branches: [ "main" ]

The workflow first checks out the code, then runs the tests with cargo test, and finally builds the code with cargo build.

The release build creates a binary in the target folder, and the workflow then copies the binary from the target folder to the ./app folder for further use in the Heroku deployment step, which we will now proceed to.
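Note that the workflow shown above only builds the code; if you want the test run the text describes, a test step would need to be added before the build. A minimal sketch (the step name is illustrative):

```yaml
    # run the test suite before building the release binary
    - name: Test
      run: cargo test
```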

Heroku deployment for Rust

Heroku doesn’t have an official buildpack for Rust, so there’s no official build environment for Rust apps with Heroku.

So instead, we will use GitHub Actions to build the app and deploy it to Heroku.

Heroku requires a buildpack for each app, so a binary buildpack is used for the Rust app. There are community buildpacks for Rust, but since GitHub Actions is already building the app, time can be saved by deploying the prebuilt binary directly on Heroku.

The GitHub Actions Marketplace has a very useful action, akhileshns/heroku-deploy, that deploys a Heroku app from GitHub Actions. In combination with a binary buildpack, it becomes a powerful tool to deploy code.

    - uses: akhileshns/heroku-deploy@v3.12.12
      with:
      # key from repository secrets
        heroku_api_key: ${{secrets.HEROKU_API_KEY}}
        # name of the Heroku app
        heroku_app_name: "axum-demo-try2"
        # email from which the app is uploaded
        heroku_email: "anshulgoel151999@gmail.com"

        # app directory
        appdir: "./app"

        # start command
        procfile: "web: ./axum"
        # buildpack is like environment used to run the app
        buildpack: "https://github.com/ph3nx/heroku-binary-buildpack.git"

To use this Action, a Heroku API key is needed. The key can be generated using the Heroku console in your account settings.

This action will create the Heroku app and deploy it for you. It takes the app directory and the start command, and you can also specify the buildpack you'd like to use.

Some code changes are required before the Rust app can be deployed to Heroku. Currently, the app listens on port 8080, but Heroku assigns a different port at runtime, so the Rust app should read it from the PORT environment variable.

    // read the port from the environment or use the default port (8080)
    let port = std::env::var("PORT").unwrap_or(String::from("8080"));
    // convert the port to a socket address
    let addr = SocketAddr::from_str(&format!("0.0.0.0:{}", port)).unwrap();
    // listen on the port
    Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
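The fallback logic can also be factored into a small helper and exercised independently of axum. A sketch using only the standard library; `resolve_addr` is a hypothetical name, not part of the original code:

```rust
use std::env;
use std::net::SocketAddr;
use std::str::FromStr;

// resolve the bind address from an optional PORT value, defaulting to 8080
fn resolve_addr(port: Option<String>) -> SocketAddr {
    let port = port.unwrap_or_else(|| String::from("8080"));
    SocketAddr::from_str(&format!("0.0.0.0:{}", port)).unwrap()
}

fn main() {
    // Heroku injects PORT at runtime; locally it is usually unset
    let addr = resolve_addr(env::var("PORT").ok());
    println!("listening on {}", addr);
}
```

Keeping the parsing in one function makes it easy to unit-test the default and override paths without starting a server.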

Conclusion

axum is a very good web server framework with support for the wider tower-rs ecosystem. It allows building extensible and composable web services and offers strong performance as a thin layer over hyper.

GitHub Actions are great for CI/CD and allow for performing various automated tasks, such as building and testing code and generating docs on various platforms. GitHub Actions also support caching cargo dependencies to speed up Actions.
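The dependency caching mentioned above can be enabled with the actions/cache action. A minimal sketch for the workflow in this post (the paths and key are typical choices, not taken from the original workflow):

```yaml
    # cache cargo registry and build artifacts between runs
    - uses: actions/cache@v3
      with:
        path: |
          ~/.cargo/registry
          ~/.cargo/git
          target
        key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
```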

Heroku supports autoscaling and continuous deployment, as well as hosted resources like databases and storage. GitHub Actions and Heroku are independent of the framework, meaning the same action can test and deploy a web server written in Rocket or Actix Web, so feel free to experiment with whatever suits you!

When all of these tools are used together, they become a killer combo for developing and hosting Rust web servers. I hope you enjoyed following along with this tutorial — leave a comment about your experience below.

Original article source at https://blog.logrocket.com
