Wayback is a tool for snapshotting webpages into time capsules. It can run as a command-line tool or as a Docker container.
The simplest, cross-platform way to install it is to download a binary from GitHub Releases and place the executable in your PATH.
From source:
go install github.com/wabarc/wayback/cmd/wayback@latest
From GitHub Releases:
curl -fsSL https://github.com/wabarc/wayback/raw/main/install.sh | sh
or via Bina:
curl -fsSL https://bina.egoist.dev/wabarc/wayback | sh
Via Snapcraft (on GNU/Linux):
sudo snap install wayback
Via APT:
curl -fsSL https://repo.wabarc.eu.org/apt/gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/packages.wabarc.gpg
echo "deb [arch=amd64,arm64,armhf signed-by=/usr/share/keyrings/packages.wabarc.gpg] https://repo.wabarc.eu.org/apt/ /" | sudo tee /etc/apt/sources.list.d/wayback.list
sudo apt update
sudo apt install wayback
Via RPM:
sudo rpm --import https://repo.wabarc.eu.org/yum/gpg.key
sudo tee /etc/yum.repos.d/wayback.repo > /dev/null <<EOT
[wayback]
name=Wayback Archiver
baseurl=https://repo.wabarc.eu.org/yum/
enabled=1
gpgcheck=1
gpgkey=https://repo.wabarc.eu.org/yum/gpg.key
EOT
sudo dnf install -y wayback
Via Homebrew:
brew tap wabarc/wayback
brew install wayback
$ wayback -h
A command-line tool and daemon service for archiving webpages.
Usage:
wayback [flags]
Examples:
wayback https://www.wikipedia.org
wayback https://www.fsf.org https://www.eff.org
wayback --ia https://www.fsf.org
wayback --ia --is -d telegram -t your-telegram-bot-token
WAYBACK_SLOT=pinata WAYBACK_APIKEY=YOUR-PINATA-APIKEY \
WAYBACK_SECRET=YOUR-PINATA-SECRET wayback --ip https://www.fsf.org
Flags:
--chatid string Telegram channel id
-c, --config string Configuration file path, defaults: ./wayback.conf, ~/wayback.conf, /etc/wayback.conf
-d, --daemon strings Run as daemon service, supported services are telegram, web, mastodon, twitter, discord, slack, irc
--debug Enable debug mode (default mode is false)
-h, --help help for wayback
--ia Wayback webpages to Internet Archive
--info Show application information
--ip Wayback webpages to IPFS
--ipfs-host string IPFS daemon host, do not require, unless enable ipfs (default "127.0.0.1")
-m, --ipfs-mode string IPFS mode (default "pinner")
-p, --ipfs-port uint IPFS daemon port (default 5001)
--is Wayback webpages to Archive Today
--ph Wayback webpages to Telegraph
--print Show application configurations
-t, --token string Telegram Bot API Token
--tor Snapshot webpage via Tor anonymity network
--tor-key string The private key for Tor Hidden Service
-v, --version version for wayback
Wayback one or more URLs to the Internet Archive and archive.today:
wayback https://www.wikipedia.org
wayback https://www.fsf.org https://www.eff.org
Wayback a URL to the Internet Archive, archive.today, or IPFS:
// Internet Archive
$ wayback --ia https://www.fsf.org
// archive.today
$ wayback --is https://www.fsf.org
// IPFS
$ wayback --ip https://www.fsf.org
When using IPFS, you can also specify a pinning service:
$ export WAYBACK_SLOT=pinata
$ export WAYBACK_APIKEY=YOUR-PINATA-APIKEY
$ export WAYBACK_SECRET=YOUR-PINATA-SECRET
$ wayback --ip https://www.fsf.org
// or
$ WAYBACK_SLOT=pinata WAYBACK_APIKEY=YOUR-PINATA-APIKEY \
  WAYBACK_SECRET=YOUR-PINATA-SECRET wayback --ip https://www.fsf.org
See the pinning service documentation for more details.
With telegram bot:
wayback --ia --is --ip -d telegram -t your-telegram-bot-token
To publish results to your Telegram channel at the same time:
wayback --ia --is --ip -d telegram -t your-telegram-bot-token --chatid your-telegram-channel-name
You can also run it in debug mode:
wayback -d telegram -t YOUR-BOT-TOKEN --debug
Serve on both Telegram and a Tor hidden service:
wayback -d telegram -t YOUR-BOT-TOKEN -d web
URLs from file:
wayback url.txt
cat url.txt | wayback
By default, wayback looks for configuration options in the following files, which are parsed in order:
./wayback.conf
~/wayback.conf
/etc/wayback.conf
Use the -c / --config option to specify an alternative configuration file.
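A minimal configuration file might look like the following sketch. This is an assumption for illustration: the keys are taken to match the environment variable names in the table below; consult the full documentation for the authoritative format.

```ini
# Hypothetical wayback.conf sketch -- keys assumed to match the
# environment variable names documented in the options table.
WAYBACK_ENABLE_IA=true
WAYBACK_ENABLE_IS=true
WAYBACK_TELEGRAM_TOKEN=your-telegram-bot-token
WAYBACK_POOLING_SIZE=3
```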
You can also set configuration options via command-line flags or environment variables; an overview of all options follows.
Flags | Environment Variable | Default | Description |
---|---|---|---|
--debug | DEBUG | false | Enable debug mode, override LOG_LEVEL |
-c , --config | - | - | Configuration file path, defaults: ./wayback.conf , ~/wayback.conf , /etc/wayback.conf |
- | LOG_TIME | true | Display the date and time in log messages |
- | LOG_LEVEL | info | Log level, supported level are debug , info , warn , error , fatal , defaults to info |
- | ENABLE_METRICS | false | Enable metrics collector |
- | WAYBACK_LISTEN_ADDR | 0.0.0.0:8964 | The listen address for the HTTP server |
- | CHROME_REMOTE_ADDR | - | Chrome/Chromium remote debugging address, for screenshot, format: host:port , wss://domain.tld |
- | WAYBACK_POOLING_SIZE | 3 | Number of workers in the wayback pool |
- | WAYBACK_BOLT_PATH | ./wayback.db | File path of bolt database |
- | WAYBACK_STORAGE_DIR | - | Directory to store binary files, e.g. PDF and HTML files |
- | WAYBACK_MAX_MEDIA_SIZE | 512MB | Maximum size of downloaded streaming media |
- | WAYBACK_MEDIA_SITES | - | Extra media websites to support, comma-separated |
- | WAYBACK_TIMEOUT | 300 | Timeout for a single wayback request, in seconds |
- | WAYBACK_MAX_RETRIES | 2 | Max retries for a single wayback request |
- | WAYBACK_USERAGENT | WaybackArchiver/1.0 | User-Agent for a wayback request |
- | WAYBACK_FALLBACK | off | Use Google cache as a fallback if the original webpage is unavailable |
- | WAYBACK_MEILI_ENDPOINT | - | Meilisearch API endpoint |
- | WAYBACK_MEILI_INDEXING | capsules | Meilisearch indexing name |
- | WAYBACK_MEILI_APIKEY | - | Meilisearch admin API key |
-d , --daemon | - | - | Run as daemon service, e.g. telegram , web , mastodon , twitter , discord |
--ia | WAYBACK_ENABLE_IA | true | Wayback webpages to Internet Archive |
--is | WAYBACK_ENABLE_IS | true | Wayback webpages to Archive Today |
--ip | WAYBACK_ENABLE_IP | false | Wayback webpages to IPFS |
--ph | WAYBACK_ENABLE_PH | false | Wayback webpages to Telegra.ph, requires Chrome/Chromium |
--ipfs-host | WAYBACK_IPFS_HOST | 127.0.0.1 | IPFS daemon service host |
-p , --ipfs-port | WAYBACK_IPFS_PORT | 5001 | IPFS daemon service port |
-m , --ipfs-mode | WAYBACK_IPFS_MODE | pinner | IPFS mode for preserving webpages, e.g. daemon , pinner |
- | WAYBACK_IPFS_TARGET | web3storage | The IPFS pinning service used to store files; supported pinners: infura, pinata, nftstorage, web3storage |
- | WAYBACK_IPFS_APIKEY | - | API key of the IPFS pinning service |
- | WAYBACK_IPFS_SECRET | - | Secret of the IPFS pinning service |
- | WAYBACK_GITHUB_TOKEN | - | GitHub Personal Access Token, requires the repo scope |
- | WAYBACK_GITHUB_OWNER | - | GitHub account name |
- | WAYBACK_GITHUB_REPO | - | GitHub repository to publish results |
- | WAYBACK_NOTION_TOKEN | - | Notion integration token |
- | WAYBACK_NOTION_DATABASE_ID | - | Notion database ID for archiving results |
-t , --token | WAYBACK_TELEGRAM_TOKEN | - | Telegram Bot API Token |
--chatid | WAYBACK_TELEGRAM_CHANNEL | - | The Telegram public/private channel ID to publish archive results |
- | WAYBACK_TELEGRAM_HELPTEXT | - | The help text for Telegram command |
- | WAYBACK_MASTODON_SERVER | - | Domain of Mastodon instance |
- | WAYBACK_MASTODON_KEY | - | The client key of your Mastodon application |
- | WAYBACK_MASTODON_SECRET | - | The client secret of your Mastodon application |
- | WAYBACK_MASTODON_TOKEN | - | The access token of your Mastodon application |
- | WAYBACK_TWITTER_CONSUMER_KEY | - | The consumer key of your Twitter application |
- | WAYBACK_TWITTER_CONSUMER_SECRET | - | The consumer secret of your Twitter application |
- | WAYBACK_TWITTER_ACCESS_TOKEN | - | The access token of your Twitter application |
- | WAYBACK_TWITTER_ACCESS_SECRET | - | The access secret of your Twitter application |
- | WAYBACK_IRC_NICK | - | IRC nick |
- | WAYBACK_IRC_PASSWORD | - | IRC password |
- | WAYBACK_IRC_CHANNEL | - | IRC channel |
- | WAYBACK_IRC_SERVER | irc.libera.chat:6697 | IRC server; TLS required |
- | WAYBACK_MATRIX_HOMESERVER | https://matrix.org | Matrix homeserver |
- | WAYBACK_MATRIX_USERID | - | Matrix unique user ID, format: @foo:example.com |
- | WAYBACK_MATRIX_ROOMID | - | Matrix internal room ID, format: !bar:example.com |
- | WAYBACK_MATRIX_PASSWORD | - | Matrix password |
- | WAYBACK_DISCORD_BOT_TOKEN | - | Discord bot authorization token |
- | WAYBACK_DISCORD_CHANNEL | - | Discord channel ID, find channel ID |
- | WAYBACK_DISCORD_HELPTEXT | - | The help text for Discord command |
- | WAYBACK_SLACK_APP_TOKEN | - | App-Level Token of Slack app |
- | WAYBACK_SLACK_BOT_TOKEN | - | Bot User OAuth Token for the Slack workspace; use a User OAuth Token if creating external links is required |
- | WAYBACK_SLACK_CHANNEL | - | Channel ID of Slack channel |
- | WAYBACK_SLACK_HELPTEXT | - | The help text for Slack slash command |
- | WAYBACK_NOSTR_RELAY_URL | wss://nostr.developer.li | Nostr relay server URL, multiple URLs separated by commas |
- | WAYBACK_NOSTR_PRIVATE_KEY | - | The private key of a Nostr account |
--tor | WAYBACK_USE_TOR | false | Snapshot webpage via Tor anonymity network |
--tor-key | WAYBACK_TOR_PRIVKEY | - | The private key for Tor Hidden Service |
- | WAYBACK_TOR_LOCAL_PORT | 8964 | Local port for Tor Hidden Service, also supports a reverse proxy. This is ignored if WAYBACK_LISTEN_ADDR is set. |
- | WAYBACK_TOR_REMOTE_PORTS | 80 | Remote ports for Tor Hidden Service, e.g. WAYBACK_TOR_REMOTE_PORTS=80,81 |
- | WAYBACK_SLOT | - | Pinning service for IPFS mode of pinner, see ipfs-pinner |
- | WAYBACK_APIKEY | - | API key for pinning service |
- | WAYBACK_SECRET | - | API secret for pinning service |
If both a configuration file and environment variables are specified, both are read and applied, with the environment variable taking precedence for the same item.
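This precedence rule can be illustrated with a tiny merge sketch (illustrative only, not wayback's actual implementation): config-file values form the base, and any environment variable with the same key overrides it.

```typescript
// Illustrative only: config-file values are the base, environment
// variables with the same key win. Not wayback's actual code.
function resolveOptions(
  fileOpts: Record<string, string>,
  envOpts: Record<string, string>,
): Record<string, string> {
  // Spread order matters: later sources override earlier ones.
  return { ...fileOpts, ...envOpts };
}

const fromFile = { WAYBACK_ENABLE_IA: "true", WAYBACK_TIMEOUT: "300" };
const fromEnv = { WAYBACK_TIMEOUT: "600" }; // environment overrides the file
const resolved = resolveOptions(fromFile, fromEnv);
console.log(resolved.WAYBACK_TIMEOUT); // "600"
```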
Use --print to print the resulting options as typed Go structs without running wayback.
docker pull wabarc/wayback
docker run -d wabarc/wayback wayback -d telegram -t YOUR-BOT-TOKEN # without telegram channel
docker run -d wabarc/wayback wayback -d telegram -t YOUR-BOT-TOKEN --chatid YOUR-CHANNEL-USERNAME # with telegram channel
For a comprehensive guide, please refer to the complete documentation.
We encourage all contributions to this repository! Open an issue! Or open a Pull Request!
If you're interested in contributing to wayback
itself, read our contributing guide to get started.
Note: All interaction here should conform to the Code of Conduct.
Supported Golang version: See .github/workflows/testing.yml
Author: Wabarc
Source Code: https://github.com/wabarc/wayback
License: GPL-3.0 license
Docker powered mini-Heroku. The smallest PaaS implementation you've ever seen.
A fresh VM running any of the following operating systems:
An SSH keypair that can be used for application deployment. If one exists before installation, it will be automatically imported into Dokku. Otherwise, you will need to import the keypair manually after installation using dokku ssh-keys:add.
To install the latest stable release, run the following commands as a user who has access to sudo:
wget https://dokku.com/install/v0.30.2/bootstrap.sh
sudo DOKKU_TAG=v0.30.2 bash bootstrap.sh
You can then proceed to configure your server domain (via dokku domains:set-global) and user access (via dokku ssh-keys:add) to complete the installation.
If you wish for a more unattended installation method, see these docs.
View the docs for upgrading from an older version of Dokku.
Full documentation - including advanced installation docs - is available online at https://dokku.com/docs/getting-started/installation/.
You can use GitHub Issues, check Troubleshooting in the documentation, or join us on Gliderlabs Slack in the #dokku channel.
After checking GitHub Issues, the Troubleshooting Guide or having a chat with us on Gliderlabs Slack in the #dokku channel, feel free to fork and create a Pull Request.
While we may not merge your PR as is, they serve to start conversations and improve the general Dokku experience for all users.
Author: Dokku
Source Code: https://github.com/dokku/dokku
License: MIT license
When a developer creates an application, the next step is to share it with friends or the public so that everyone can access it. That process of transferring code from a development environment to a hosting platform where it is served to end users is called deployment.
Hosting used to be pretty inefficient before cloud hosting platforms like Heroku came around. It was mainly done by hosting providers who required uploading all static assets (the build files generated by running npm run build) every time we made a change. There was no way to upload static files other than some sort of FTP interface (either a local one or on the hosting server), which could be pretty stressful and technical.
In this guide, we'll take a look at how to deploy a React application to Heroku using the CLI (Command Line Interface) via Heroku Git. Also, we will take a look at how to redeploy code when we make some changes to our application.
Heroku is a container-based cloud platform that enables developers to easily deploy, manage, and scale modern applications. This allows developers to focus on their core job - creating great apps that delight and engage users. In other words, Heroku increases the developer's productivity by making app deployment, scaling, and management as simple as possible.
There are numerous reasons why we should use Heroku:
In this guide, we will deploy a movies search app, which is a simple React app that searches an API for movies. Before we begin, you should sign up for Heroku if you do not already have an account, as this is where we will deploy our React application. We can go to Heroku.com and sign up by clicking the sign-up button in the upper right corner. The signup pipeline is pretty much the standard one, so you shouldn't have any trouble creating an account on Heroku:
When you've created a Heroku account, we can proceed to the actual deployment of our app.
Note: Previously, there was an option to deploy via GitHub Integration, but that feature has been revoked due to a security breach. The best way to deploy to Heroku as of now is via Heroku Git, which happens in our CLI (Command Line Interface).
Heroku uses the Git version control system to manage app deployments. It is important to note that we do not need to be Git experts to deploy our React application to Heroku, all we need to know are some fundamentals, which will be covered in this guide.
If you're not confident with Git - don't worry. We'll cover everything you need to know. Otherwise, check out our free course on Git: Git Essentials: Developer's Guide to Git
As the name Heroku Git implies, we will be using Git, which means we need to have Git installed. The same applies to the Heroku CLI. If you don't have those two installed, you can follow the following instructions to guide you through the installation process:
After successfully installing them, we can proceed to create an app on Heroku, to which our React application will be deployed later. We can create an application on Heroku in two ways - via the terminal (CLI command) or manually on our Heroku dashboard.
Note: A common misconception is that Git and GitHub are the same things, but they are not! Git is a version control system used by many apps and services, including but not limited to GitHub. Therefore you don’t need to push your code to GitHub, nor have a GitHub account to use Heroku.
Let’s first see how we can create an app using the Heroku dashboard. The first step is to click the create new app button:
This would redirect us to a page where we need to fill up the information about the app we want to create:
Note: Make sure you remember the name of the app you created on Heroku because we will be connecting our local repository to this remote repository soon.
Once this process is completed, we can start deploying our app from a local environment to Heroku. But, before we take a look at how to deploy an app, let's consider an alternative approach to creating a Heroku app - using the Heroku CLI.
Alternatively, you can create an app on Heroku using the CLI. Heroku made sure this is as straightforward as possible. The only thing you need to do is run the following command in your terminal of choice (just make sure to replace <app-name> with the actual name of your app):
$ heroku create -a <app-name>
Note: If you run this command from the app’s root directory, the empty Heroku Git repository is automatically set as a remote for our local repository.
The first step before pushing the code to Heroku is to position yourself in the root directory of your app (in the terminal). Then use the heroku login command to log into the Heroku dashboard. After that, you need to accept Heroku's terms and conditions and, finally, log in to Heroku using your login credentials:
You will be returned to the terminal afterward, so you can continue the process of deploying to Heroku. Now, you should initialize the repository:
$ git init
And then register the app we created earlier on Heroku as the remote repository for the local one we initialized in the previous step:
$ heroku git:remote -a <app-name>
Note: Make sure to replace <app-name> with the name of the app we've created on Heroku earlier (e.g. movies-search-app).
Now we can proceed to deploy our application. But, since we need to deploy a React application, we first need to add the React buildpack:
$ heroku buildpacks:set mars/create-react-app
Once that is completed, the next step is to actually push our code to the remote repository we've created on Heroku. The first step is to stage our files, commit them, and finally push them to the remote repository:
$ git add .
$ git commit -m "my commit"
$ git push heroku main
Note: Suppose we want to switch our branch from main to development. We can run the following command: git checkout -b development.
Once we have successfully pushed to Heroku, we can open our newly deployed app in our browser:
$ heroku open
The next question you'd probably have is how to redeploy the app after we make changes to it. This works similarly to how it does in any Git-based platform - all we have to do is stage the files, commit, and then push the code to Heroku:
$ git add .
$ git commit -m "added changes"
$ git push heroku main
Heroku automatically picks this change up and applies it to the live application.
Heroku can be a fairly useful tool for deploying your React app. In this article, we've taken a look at how to deploy a React application to Heroku using Heroku Git. Additionally, we've gone over some basic Git commands you would need when working with Heroku Git, and, finally, we've discussed how to redeploy an app after you make changes to it.
Original article source at: https://stackabuse.com
The main purpose of this repository is to provide a good project setup and workflow for writing a Node REST API in TypeScript using Koa and a SQL DB.
Koa is a new web framework designed by the team behind Express, which aims to be a smaller, more expressive, and more robust foundation for web applications and APIs. By leveraging generators, Koa allows you to ditch callbacks and greatly improve error handling. Koa does not bundle any middleware within its core, and provides an elegant suite of methods that make writing servers fast and enjoyable.
Through Github Actions CI, this boilerplate is deployed here! You can try to make requests to the different defined endpoints and see how it works. The following Authorization header will have to be set (already signed with the boilerplate's secret) to pass the JWT middleware:
HEADER (DEMO)
Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjEiLCJuYW1lIjoiSmF2aWVyIEF2aWxlcyIsImVtYWlsIjoiYXZpbGVzbG9wZXouamF2aWVyQGdtYWlsLmNvbSJ9.7oxEVGy4VEtaDQyLiuoDvzdO0AyrNrJ_s9NU3vko5-k
AVAILABLE ENDPOINTS DEMO SWAGGER DOCS DEMO
When running the project locally with watch-server, with a .env file configured exactly like the .example.env file, the swagger docs will be served at http://localhost:3000/swagger-html, and the bearer token for authorization should be as follows:
HEADER (LOCALHOST BASED ON DEFAULT SECRET KEY 'your-secret-whatever')
Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjEiLCJuYW1lIjoiSmF2aWVyIEF2aWxlcyIsImVtYWlsIjoiYXZpbGVzbG9wZXouamF2aWVyQGdtYWlsLmNvbSJ9.rgOobROftUYSWphkdNfxoN2cgKiqNXd4Km4oz6Ex4ng
method | resource | description |
---|---|---|
GET | / | Simple hello world response |
GET | /users | returns the collection of users present in the DB |
GET | /users/:id | returns the specified id user |
POST | /users | creates a user in the DB (user object to be included in the request's body) |
PUT | /users/:id | updates an already created user in the DB (user object to be included in the request's body) |
DELETE | /users/:id | deletes a user from the DB (JWT token user ID must be the same as the user you want to delete) |
To build and run this app locally you will need:
git clone --depth=1 https://github.com/javieraviles/node-typescript-koa-rest.git <project_name>
cd <project_name>
npm install
npm run watch-server
npm run build
npm run start
npm run test:integration:local (newman needed)
npm run test:load (locust needed)
npm run test
npm run test:coverage
npm run test:watch
A docker-compose file has been added to the project with a PostgreSQL image (user, password, and database name already set to what the ORM config expects) and an Adminer image (an easy web DB client).
Once Docker is installed, just go to the project folder and run 'docker-compose up'; the PostgreSQL server and the Adminer client will be running on ports 5432 and 8080 respectively, with all the config you need to start playing around.
If you use Docker natively, the database host to put in the ORM configuration file will be localhost. But if you run Docker on older Windows versions, you will be using Boot2Docker, and your virtual machine will probably use 192.168.99.100 as its network address (if not, the docker-machine ip command will tell you). In that case your database host will be that IP, and to access the web DB client you will need to go to http://192.168.99.100:8080
This API is prepared to work with a SQL database using TypeORM. In this case we are using PostgreSQL, which is why 'pg' has been included in package.json. If you were to use a different SQL database, remember to install the corresponding driver.
The ORM configuration and database connection can be specified in the 'ormconfig.json' file. Here, however, the connection is made directly in the 'server.ts' file, because an environment variable containing the database URL is used to set the connection data. This is prepared for Heroku, which provides a Postgres connection string as an env variable; locally it is mocked with the Docker Postgres instance, as can be seen in ".example.env".
It is important to notice that when serving the project directly from the *.ts files using ts-node, the ORM configuration should point to the *.ts file paths; but once the project is built (transpiled) and run as plain JS, it needs to be changed accordingly to find the built JS files:
"entities": [
"dist/entity/**/*.js"
],
"migrations": [
"dist/migration/**/*.js"
],
"subscribers": [
"dist/subscriber/**/*.js"
]
NOTE: this is now automatically handled by the NODE_ENV variable too.
Notice that if NODE_ENV is set to development, the ORM config won't be using SSL to connect to the DB. Otherwise it will.
And because Heroku uses self-signed certificates, the following snippet has been added; please take it out if connecting to a local DB without SSL.
createConnection({
...
extra: {
ssl: {
rejectUnauthorized: false // Heroku uses self signed certificates
}
}
})
You can find an implemented CRUD for the user entity in the corresponding controller, controller/user.ts, and its routes in the routes.ts file.
This project uses the class-validator library, a decorator-based entity validation library, which is used directly in the entity files as follows:
export class User {
@Length(10, 100) // length of string email must be between 10 and 100 characters
@IsEmail() // the string must comply with a standard email format
@IsNotEmpty() // the string can't be empty
email: string;
}
Once the decorators have been set in the entity, you can validate from anywhere as follows:
const user = new User();
user.email = "avileslopez.javier@gmail"; // should not pass, needs the ending .com to be a valid email
validate(user).then(errors => { // errors is an array of validation errors
if (errors.length > 0) {
console.log("validation failed. errors: ", errors); // code will get here, printing an "IsEmail" error
} else {
console.log("validation succeed");
}
});
For further documentation regarding validations see class-validator docs.
Create a .env file (or just rename the .example.env) containing all the env variables you want to set, dotenv library will take care of setting them. This project is using three variables at the moment:
TypeScript itself is simple to add to any project with npm.
npm install -D typescript
If you're using VS Code then you're good to go! VS Code will detect and use the TypeScript version you have installed in your node_modules
folder. For other editors, make sure you have the corresponding TypeScript plugin.
The most obvious difference in a TypeScript + Node project is the folder structure. TypeScript (.ts) files live in your src folder and after compilation are output as JavaScript (.js) in the dist folder.
The full folder structure of this app is explained below:
Note! Make sure you have already built the app using
npm run build
Name | Description |
---|---|
dist | Contains the distributable (or output) from your TypeScript build. This is the code you ship |
node_modules | Contains all your npm dependencies |
src | Contains your source code that will be compiled to the dist dir |
src/server.ts | Entry point to your KOA app |
.github/workflows/ci.yml | Github actions CI configuration |
loadtests/locustfile.py | Locust load tests |
integrationtests/node-koa-typescript.postman_collection.json | Postman integration test collection |
.copyStaticAssets.ts | Build script that copies images, fonts, and JS libs to the dist folder |
package.json | File that contains npm dependencies as well as build scripts |
docker-compose.yml | Docker PostgreSQL and Adminer images in case you want to load the db from Docker |
tsconfig.json | Config settings for compiling server code written in TypeScript |
.eslintrc and .eslintignore | Config settings for ESLint code style checking |
.example.env | Env variables file example to be renamed to .env |
Dockerfile and dockerignore | The app is dockerized to be deployed from CI in a more standard way, not needed for dev |
TypeScript uses the tsconfig.json file to adjust project compile options. Let's dissect this project's tsconfig.json, starting with the compilerOptions, which detail how your project is compiled.
"compilerOptions": {
"module": "commonjs",
"target": "es2017",
"lib": ["es6"],
"noImplicitAny": true,
"strictPropertyInitialization": false,
"moduleResolution": "node",
"sourceMap": true,
"outDir": "dist",
"baseUrl": ".",
"experimentalDecorators": true,
"emitDecoratorMetadata": true,
}
compilerOptions | Description |
---|---|
"module": "commonjs" | The output module type (in your .js files). Node uses commonjs, so that is what we use |
"target": "es2017" | The output language level. Node supports ES2017, so we can target that here |
"lib": ["es6"] | Needed for TypeORM. |
"noImplicitAny": true | Enables a stricter setting which throws errors when something has a default any value |
"moduleResolution": "node" | TypeScript attempts to mimic Node's module resolution strategy. Read more here |
"sourceMap": true | We want source maps to be output alongside our JavaScript |
"outDir": "dist" | Location to output .js files after compilation |
"baseUrl": "." | Part of configuring module resolution. |
paths: {...} | Part of configuring module resolution. |
"experimentalDecorators": true | Needed for TypeORM. Allows use of @Decorators |
"emitDecoratorMetadata": true | Needed for TypeORM. Allows use of @Decorators |
The rest of the file defines the TypeScript project context, which is basically a set of options that determine which files are compiled when the compiler is invoked with a specific tsconfig.json. In this case, we use the following to define our project context:
"include": [
"src/**/*"
]
include takes an array of glob patterns of files to include in the compilation. This project is fairly simple and all of our .ts files are under the src folder. For more complex setups, you can include an exclude array of glob patterns that removes specific files from the set defined by include. There is also a files option which takes an array of individual file names and overrides both include and exclude.
All the different build steps are orchestrated via npm scripts, which basically allow us to call (and chain) terminal commands via npm. This is nice because most JavaScript tools have easy-to-use command-line utilities, so we don't need grunt or gulp to manage our builds. If you open package.json, you will see a scripts section with all the different scripts you can call. To call a script, simply run npm run <script-name> from the command line. You'll notice that npm scripts can call each other, which makes it easy to compose complex builds out of simple individual build scripts. Below is a list of all the scripts this template has available:
Npm Script | Description |
---|---|
start | Does the same as 'npm run serve'. Can be invoked with npm start |
build | Full build. Runs ALL build tasks (build-ts , lint , copy-static-assets ) |
serve | Runs node on dist/server/server.js , which is the app's entry point |
watch-server | Runs nodemon, restarting the process if it crashes; continuously watches .ts files and re-compiles to .js |
build-ts | Compiles all source .ts files to .js files in the dist folder |
lint | Runs ESLint check and fix on project files |
copy-static-assets | Calls script that copies JS libs, fonts, and images to dist directory |
test:integration:<env> | Execute Postman integration tests collection using newman on any env (local or heroku ) |
test:load | Execute Locust load tests using a specific configuration |
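The script composition described above might look like the following package.json sketch. The chaining of build and the serve/start targets follows the table; the exact commands behind build-ts, lint, and copy-static-assets are assumptions for illustration, and the real file may differ:

```json
{
  "scripts": {
    "start": "npm run serve",
    "serve": "node dist/server/server.js",
    "build": "npm run build-ts && npm run lint && npm run copy-static-assets",
    "build-ts": "tsc",
    "lint": "eslint src/**/*.ts --fix",
    "copy-static-assets": "ts-node copyStaticAssets.ts"
  }
}
```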
CI: GitHub Actions
Using GitHub Actions, a pipeline deploys the application to Heroku and runs tests against it, checking that the deployed application is healthy. The pipeline can be found at /.github/workflows/test.yml. It performs the following:
ESLint
Since TSLint is now deprecated, ESLint feels like the way to go, as it also supports TypeScript. ESLint is a static code analysis tool for identifying problematic patterns in JavaScript/TypeScript code.
Like most linters, ESLint has a wide set of configurable rules as well as support for custom rule sets. All rules are configured through .eslintrc. In this project, we are using a fairly basic set of rules with no additional custom rules.
Like the rest of our build steps, we use npm scripts to invoke ESLint. To run ESLint you can call the main build script or just the ESLint task.
npm run build   # runs full build including ESLint format check
npm run lint    # runs ESLint check + fix
Notice that ESLint is not part of the main watch task. It can be annoying for ESLint to clutter the output window while you're in the middle of writing a function, so I elected to run it only during the full build. If you are interested in seeing ESLint feedback as soon as possible, I strongly recommend the ESLint extension for VS Code.
Register cron jobs
The cron dependency has been added to the project together with its types. A cron.ts file has been created, where a cron job is set up using a cron expression configured in the config.ts file.
import { CronJob } from 'cron';
import { config } from './config';
const cron = new CronJob(config.cronJobExpression, () => {
console.log('Executing cron job once every hour');
});
export { cron };
From server.ts, the cron job is started:
import { cron } from './cron';
// Register cron job to do any action needed
cron.start();
Integration and load tests
Integration tests are a Postman collection with assertions, which gets executed using Newman from the CI (GitHub Actions). It can be found at /integrationtests/node-koa-typescript.postman_collection.json; it can be opened and modified in Postman very easily. Feel free to install Newman in your local environment and run the npm run test:integration:local command, which uses the local environment file (instead of the Heroku dev one) to run your Postman collection faster than through Postman.
Load tests are a Locust file with assertions, which gets executed from the CI (GitHub Actions). It can be found at /loadtests/locustfile.py; it is written in Python and can be executed locally against any host once Python and Locust are installed on your dev machine.
NOTE: at the end of the load tests, an endpoint is called to remove all created test users.
Logging
Winston is designed to be a simple and universal logging library with support for multiple transports.
A "logger" middleware passing a winstonInstance has been created. The current configuration of the logger can be found in the file "logger.ts". It logs at 'error' level to an error.log file, and at 'debug' or 'info' level (depending on the NODE_ENV environment variable: debug if it equals development) to the console.
// Logger middleware -> use winston as logger (logger.ts with config)
app.use(logger(winston));
Authentication - Security
The idea is to keep the API as clean as possible, so auth is done from the client using an auth provider such as Auth0. The client making requests to the API should include the JWT in the Authorization header as "Authorization: Bearer &lt;token&gt;". HS256 is used: the secret is known by both your API and your client and is used to sign the token, so make sure you keep it hidden.
As can be seen in the server.ts file, a JWT middleware has been added, passing the secret from an environment variable. The middleware validates that every request to the routes below it MUST include a valid JWT signed with the same secret. The middleware automatically sets the payload information in ctx.state.user.
// JWT middleware -> below this line, routes are only reached if JWT token is valid, secret as env variable
app.use(jwt({ secret: config.jwtSecret }));
Go to the website https://jwt.io/ to create JWT tokens for testing/debugging purposes. Select algorithm HS256 and include the generated token in the Authorization header to pass through the jwt middleware.
Custom 401 handling -> if you don't want to expose koa-jwt errors to users:
app.use(function(ctx, next){
return next().catch((err) => {
if (401 == err.status) {
ctx.status = 401;
ctx.body = 'Protected resource, use Authorization header to get access\n';
} else {
throw err;
}
});
});
If you want to authenticate from the API, and you fancy the idea of an auth provider like Auth0, have a look at jsonwebtoken — JSON Web Token signing and verification
This boilerplate uses @koa/cors, a simple CORS middleware for koa. If you are not sure what CORS is about, see the MDN documentation on Cross-Origin Resource Sharing.
// Enable CORS with default options
app.use(cors());
Have a look at Official @koa/cors docs in case you want to specify 'origin' or 'allowMethods' properties.
This boilerplate uses koa-helmet, a wrapper for helmet to work with koa. It provides important security headers to make your app more secure by default.
Usage is the same as helmet. Helmet offers 11 security middleware functions (clickjacking, DNS prefetching, Security Policy...), everything is set by default here.
// Enable helmet with default options
app.use(helmet());
Have a look at Official koa-helmet docs in case you want to customize which security middlewares are enabled.
Dependencies
Dependencies are managed through package.json. In that file you'll find two sections: dependencies and devDependencies.
Package | Description |
---|---|
dotenv | Loads environment variables from .env file. |
koa | Node web framework. |
koa-bodyparser | A bodyparser for koa. |
koa-jwt | Middleware to validate JWT tokens. |
@koa/router | Router middleware for koa. |
koa-helmet | Wrapper for helmet, important security headers to make app more secure |
@koa/cors | Cross-Origin Resource Sharing(CORS) for koa |
pg | PostgreSQL driver, needed for the ORM. |
reflect-metadata | Used by typeORM to implement decorators. |
typeorm | A very cool SQL ORM. |
winston | Logging library. |
class-validator | Decorator based entities validation. |
koa-swagger-decorator | Uses decorators to automatically generate Swagger docs for koa-router. |
cron | Register cron jobs in node. |
Package | Description |
---|---|
@types | Dependencies in this folder are .d.ts files used to provide types |
nodemon | Utility that automatically restarts node process when it crashes |
ts-node | Enables directly running TS files. Used to run copy-static-assets.ts |
eslint | Linter for Javascript/TypeScript files |
typescript | JavaScript compiler/type checker that boosts JavaScript productivity |
shelljs | Portable Unix shell commands for Node.js |
To install or update these dependencies you can use npm install or npm update.
Recent changes:
- Migrated from TSLint (already deprecated) to ESLint
- Updated Node from 10.x.x to 12.0.0 (LTS)
- Dependencies are now installed from package-lock.json using npm ci (beyond guaranteeing that you'll only get what is in your lock file, it's also much faster (2x-10x!) than npm install when you don't start with a node_modules)
- koa-router is deprecated; now using the new fork from the koa team, @koa/router
- npm run watch-server now works properly, live-reloading code changes (Issue 39)

Author: javieraviles
Source Code: https://github.com/javieraviles/node-typescript-koa-rest
License: MIT license
1676405460
This crawler automates the following step:
# upload pdf to googledrive, store data and notify via email
python script/spider.py -c config/prod.cfg -u googledrive -s firebase -n gmail
# download all format
python script/spider.py --config config/prod.cfg --all
# download only one format: pdf|epub|mobi
python script/spider.py --config config/prod.cfg --type pdf
# download also additional material: source code (if exists) and book cover
python script/spider.py --config config/prod.cfg -t pdf --extras
# equivalent (default is pdf)
python script/spider.py -c config/prod.cfg -e
# download and then upload to Google Drive (given the download url anyone can download it)
python script/spider.py -c config/prod.cfg -t epub --upload googledrive
python script/spider.py --config config/prod.cfg --all --extras --upload googledrive
# download and then upload to OneDrive (given the download url anyone can download it)
python script/spider.py -c config/prod.cfg -t epub --upload onedrive
python script/spider.py --config config/prod.cfg --all --extras --upload onedrive
# download and notify: gmail|ifttt|join|pushover
python script/spider.py -c config/prod.cfg --notify gmail
# only claim book (no downloads):
python script/spider.py -c config/prod.cfg --notify gmail --claimOnly
Before you start you should:
- check your Python version with python --version
- clone the repository: git clone https://github.com/niqdev/packtpub-crawler.git
- install the dependencies: pip install -r requirements.txt (see also virtualenv)
- create a config from the example and fill in your credentials: cp config/prod_example.cfg config/prod.cfg
[credential]
credential.email=PACKTPUB_EMAIL
credential.password=PACKTPUB_PASSWORD
Now you should be able to claim and download your first eBook
python script/spider.py --config config/prod.cfg
From the documentation, the Google Drive API requires OAuth2.0 for authentication, so to upload files you should save your OAuth client credentials to config/client_secrets.json and update the configs:
[googledrive]
...
googledrive.client_secrets=config/client_secrets.json
googledrive.gmail=GOOGLE_DRIVE@gmail.com
Now you should be able to upload your eBook to Google Drive
python script/spider.py --config config/prod.cfg --upload googledrive
Only the first time, you will be prompted to log in via a browser with JavaScript enabled (no text-based browser) to generate config/auth_token.json. You should also copy and paste the FOLDER_ID into the config; otherwise a new folder with the same name will be created every time.
[googledrive]
...
googledrive.default_folder=packtpub
googledrive.upload_folder=FOLDER_ID
Documentation: OAuth, Quickstart, example and permissions
From the documentation, the OneDrive API requires OAuth2.0 for authentication, so to upload files you should register an application and update the configs with its credentials:
[onedrive]
...
onedrive.client_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
onedrive.client_secret=XxXxXxXxXxXxXxXxXxXxXxX
Now you should be able to upload your eBook to OneDrive
python script/spider.py --config config/prod.cfg --upload onedrive
Only the first time, you will be prompted to log in via a browser with JavaScript enabled (no text-based browser) to generate config/session.onedrive.pickle.
[onedrive]
...
onedrive.folder=packtpub
Documentation: Registration, Python API
To upload your eBook via scp to a remote server, update the configs:
[scp]
scp.host=SCP_HOST
scp.user=SCP_USER
scp.password=SCP_PASSWORD
scp.path=SCP_UPLOAD_PATH
Now you should be able to upload your eBook
python script/spider.py --config config/prod.cfg --upload scp
Note:
- scp.path on the remote server must exist in advance
- --upload scp is incompatible with --store and --notify
Create a new Firebase project, copy the database secret from your settings
https://console.firebase.google.com/project/PROJECT_NAME/settings/database
and update the configs
[firebase]
firebase.database_secret=DATABASE_SECRET
firebase.url=https://PROJECT_NAME.firebaseio.com
Now you should be able to store your eBook details on Firebase
python script/spider.py --config config/prod.cfg --upload googledrive --store firebase
To send a notification via email using Gmail you should:
[gmail]
...
gmail.username=EMAIL_USERNAME@gmail.com
gmail.password=EMAIL_PASSWORD
gmail.from=FROM_EMAIL@gmail.com
gmail.to=TO_EMAIL_1@gmail.com,TO_EMAIL_2@gmail.com
Now you should be able to notify your accounts
python script/spider.py --config config/prod.cfg --notify gmail
[ifttt]
ifttt.event_name=packtpub-crawler
ifttt.key=IFTTT_MAKER_KEY
Now you should be able to trigger the applet
python script/spider.py --config config/prod.cfg --notify ifttt
Value mappings:
[join]
join.device_ids=DEVICE_IDS_COMMA_SEPARATED_OR_GROUP_NAME
join.api_key=API_KEY
Now you should be able to trigger the event
python script/spider.py --config config/prod.cfg --notify join
[pushover]
pushover.user_key=PUSHOVER_USER_KEY
pushover.api_key=PUSHOVER_API_KEY
Create a new branch
git checkout -b heroku-scheduler
Update the .gitignore
and commit your changes
# remove
config/prod.cfg
config/client_secrets.json
config/auth_token.json
# add
dev/
config/dev.cfg
config/prod_example.cfg
Create, config and deploy the scheduler
heroku login
# create a new app
heroku create APP_NAME --region eu
# or if you already have an existing app
heroku git:remote -a APP_NAME
# deploy your app
git push -u heroku heroku-scheduler:master
heroku ps:scale clock=1
# useful commands
heroku ps
heroku logs --ps clock.1
heroku logs --tail
heroku run bash
Update script/scheduler.py
with your own preferences.
More info about Heroku Scheduler, Clock Processes, Add-on and APScheduler
Build your image
docker build -t niqdev/packtpub-crawler:2.4.0 .
Run manually
docker run \
--rm \
--name my-packtpub-crawler \
niqdev/packtpub-crawler:2.4.0 \
python script/spider.py --config config/prod.cfg
Run scheduled crawler in background
docker run \
--detach \
--name my-packtpub-crawler \
niqdev/packtpub-crawler:2.4.0
# useful commands
docker exec -i -t my-packtpub-crawler bash
docker logs -f my-packtpub-crawler
Alternatively you can pull from Docker Hub this fork
docker pull kuchy/packtpub-crawler
Add this to your crontab to run the job daily at 9 AM:
crontab -e
00 09 * * * cd PATH_TO_PROJECT/packtpub-crawler && /usr/bin/python script/spider.py --config config/prod.cfg >> /tmp/packtpub.log 2>&1
Create two files in /etc/systemd/system. First, packtpub_crawler.service:
[Unit]
Description=run packtpub-crawler
[Service]
User=USER_THAT_SHOULD_RUN_THE_SCRIPT
ExecStart=/usr/bin/python2.7 PATH_TO_PROJECT/packtpub-crawler/script/spider.py -c config/prod.cfg
[Install]
WantedBy=multi-user.target
Then packtpub_crawler.timer:
[Unit]
Description=Runs packtpub-crawler every day at 7
[Timer]
OnBootSec=10min
OnActiveSec=1s
OnCalendar=*-*-* 07:00:00
Unit=packtpub_crawler.service
Persistent=true
[Install]
WantedBy=multi-user.target
Enable the script with sudo systemctl enable packtpub_crawler.timer. You can test the service with sudo systemctl start packtpub_crawler.timer and see the output with sudo journalctl -u packtpub_crawler.service -f.
The script downloads also the free ebooks from the weekly packtpub newsletter. The URL is generated by a Google Apps Script which parses all the mails. You can get the code here, if you want to see the actual script, please clone the spreadsheet and go to Tools > Script editor...
.
To use your own source, modify in the config
url.bookFromNewsletter=https://goo.gl/kUciut
The URL should point to a file containing only the URL (no semicolons, HTML, JSON, etc).
You can also clone the spreadsheet to use your own Gmail account. Subscribe to the newsletter (on the bottom of the page) and create a filter to tag your mails accordingly.
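For illustration, fetching and trimming such a single-URL file can be sketched with the Python standard library. The function name is my own and not necessarily what the crawler uses internally:

```python
from urllib.request import urlopen

def fetch_newsletter_url(source_url):
    # The source file must contain only the eBook URL,
    # so strip surrounding whitespace and newlines.
    with urlopen(source_url) as resp:
        return resp.read().decode("utf-8").strip()
```

Any extra content (semicolons, HTML, JSON) would end up in the returned string and break the download step, which is why the file must stay bare.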
Install paramiko with sudo -H pip install paramiko --ignore-installed
Install missing dependencies as described here
# install pip + setuptools
curl https://bootstrap.pypa.io/get-pip.py | python -
# upgrade pip
pip install -U pip
# install virtualenv globally
sudo pip install virtualenv
# create virtualenv
virtualenv env
# activate virtualenv
source env/bin/activate
# verify virtualenv
which python
python --version
# deactivate virtualenv
deactivate
Run a simple static server with
node dev/server.js
and test the crawler with
python script/spider.py --dev --config config/dev.cfg --all
This project is just a Proof of Concept and not intended for any illegal usage. I'm not responsible for any damage or abuse, use it at your own risk.
Author: Niqdev
Source Code: https://github.com/niqdev/packtpub-crawler
License: MIT license
1676177520
Heroku is a popular platform-as-a-service (PaaS) that allows developers to deploy and run applications in the cloud. It supports multiple programming languages, including Go, making it easy for developers to build and deploy their Go applications. In this article, we will discuss how to use Heroku to deploy a Golang application.
To deploy a Golang app on the Heroku cloud platform, follow the steps below. Before you begin, review the prerequisites.
Prerequisites for using Heroku to deploy a Golang application
To get started, you need to have a Go development environment set up on your local machine. You can download and install Go from the official website.
Go Modules is a package management system for Go that provides versioning and dependency management. By using Go Modules, developers can easily manage their dependencies and ensure that their applications run smoothly on different systems, without the need to set up a GOPATH environment variable.
To create a Go application with Go Modules, you need to initialize a new Go Modules project. You can do this by running the following command in the terminal:
go mod init <module-name>
Next, you can create a simple Go application, such as a basic Hello World program. The code for this program is as follows:
package main
import (
"fmt"
"net/http"
)
func main() {
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintln(w, "Hello World!")
})
http.ListenAndServe(":8080", nil)
}
Now comes the essential part of Golang hosting.
To deploy your application on Heroku, you need to create a Heroku account. Sign up for a free account on the official website.
To verify your Heroku account, you can follow these steps:
1. Log in to your Heroku account.
2. Go to the “Account Settings” page.
3. Click on the “Verify Account” button.
4. Follow the steps to provide and verify your personal information, including your full name, address, and phone number.
5. Provide payment information to verify your account, which can be a credit card or PayPal account.
The Heroku CLI (Command Line Interface) allows you to manage and deploy your applications from the terminal. To install the Heroku CLI, follow the instructions below
1. Go to the Heroku CLI download page: Heroku CLI.
2. Select your operating system (Windows, MacOS, or Linux) and follow the instructions to download and install the Heroku CLI.
3. Open a terminal or command prompt window & type the command:
heroku login
4. Enter your Heroku credentials to log in.
To create a new Heroku application, run the following command in the terminal:
heroku create
To use Heroku to deploy Golang application, you need to create a Procfile file that specifies the command to run the application. The contents of the Procfile file should be as follows:
web: go run main.go
Next, add the files in your Go application to a Git repository and push the repository to Heroku using the following commands:
git init
git add .
git commit -m "Initial commit"
git push heroku master
After the application is successfully deployed, you can launch it by running the following command:
heroku open
The Go application was deployed to Heroku and opened using the heroku open command. The application's output, "Hello World!", is displayed in a web browser, and the URL of the application on Heroku is shown in the browser's address bar.
Your Go application is now running on Heroku. You can access it through the URL printed in the terminal by the heroku open command above.
Heroku is a convenient platform for deploying and running Go applications. Its ease of use and support for multiple programming languages make it a popular choice for many developers. Additionally, Heroku Go combination offers a variety of tools and services that make it simple to manage and scale your applications as they grow. Whether you are a beginner or an experienced developer, Heroku is an excellent option for hosting your Go applications.
We hope you found the tutorial to use Heroku to deploy Golang application. For more such valuable lessons, find our Golang tutorials.
Original article source at: https://www.bacancytechnology.com/
1675684083
For any developer, the most satisfying thing is to make their work available to everyone after building it. So, after locally previewing and developing a Rails application on your system, the next step is to put it online so that others can see it. This is called deploying the application. This is where Heroku comes in.
It allows you to deploy your Ruby on Rails application quickly and is popular with learners because of its "effortless" push-to-deploy workflow. Concisely, Heroku handles pretty much everything for you. Let us check how you can deploy a Ruby on Rails application on Heroku with the following steps.
To publish your app on the cloud, here are the steps you need to follow. Deploying Ruby on Rails app to Heroku platform as a service is not that tricky. This guide will show you how to begin with your RoR app from local server to deploying it on Heroku.
1. Create a new Heroku account.
2. Install the Heroku CLI on your machine.
$ sudo snap install --classic heroku
3. After installation, the heroku command is now available in your system. Use your Heroku account credentials to log in.
admin1@admin1-Latitude-3510:~$ heroku login
heroku: Press any key to open up the browser to login or q to exit:
4. Create a new SSH key if one is not available; otherwise, press Enter to upload the existing SSH key used for pushing the code later.
$ heroku login
heroku: Enter your Heroku credentials Email: schneems@example.com
Password:
Could not find an existing public key.
Would you like to generate one? [Yn]
Generating new SSH public key.
Uploading ssh public key /Users/adam/.ssh/id_rsa.pub
Fire the following commands to create a rails application.
rails new app -d postgresql
cd app
bundle add tailwindcss-rails
rails tailwindcss:install
Disclaimer: here we are using Ruby 2.7.2 and Rails 6.1.7 running on Ubuntu 22.04.1.
Working with Ruby is entertaining, but you can't deploy an application running on SQLite3 to Heroku; PostgreSQL is the de facto standard for databases on Heroku. If you're converting an existing RoR application, change this in your Gemfile:
gem 'sqlite3'
To this:
gem 'pg'
Note: PostgreSQL is also the recommended database to use during development. Keeping parity between your development and deployment environments helps prevent subtle bugs from creeping into the application. Install Postgres locally if it is not yet available on your system.
In the Gemfile, add the rails_12factor gem if you use older Rails versions, to enable static asset serving and logging on Heroku.
gem 'rails_12factor', group: :production
When deploying a new application, the rails_12factor gem is not needed. But if you are upgrading an existing application, you can remove the rails_12factor gem provided you have the proper configuration in your config/environments/production.rb file:
# config/environments/production.rb
config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?
if ENV["RAILS_LOG_TO_STDOUT"].present?
  logger = ActiveSupport::Logger.new(STDOUT)
  logger.formatter = config.log_formatter
  config.logger = ActiveSupport::TaggedLogging.new(logger)
end
Now reinstall your dependencies (to generate a new Gemfile.lock):
$ bundle install
Amend the database.yml with your data and make sure config/database.yml is using the postgresql adapter. Change this:
production:
  <<: *default
  database: app_production
To this:
production:
  <<: *default
  adapter: postgresql
  database: app_production
Run the scaffold command to create the Post.
$ rails g scaffold posts title:string content:text
Create and Migrate the database.
$ rails db:create
$ rails db:migrate
Change the main page route in routes.rb and start the server
root "posts#index"
rails s
Push your code changes to git
git init
git add .
git commit -m "Deploying Rails application"
You can also clone the code. Here’s the source code of the repository: https://github.com/ishag-bac/Deploy
As we want to deploy Ruby on Rails application on Heroku, we will need the following.
Rails 6 requires Ruby 2.5.0 or above. By default, a recent version of Ruby is installed on Heroku. However, you can specify the exact version in your Gemfile using the ruby DSL. Depending on the Ruby version your application currently runs, it might look like this:
ruby '2.7.2'
The same version of Ruby should be running locally as well. You can check the ruby version by running $ ruby -v.
After installing Heroku CLI and logging into your Heroku account, ensure you are in the correct directory path containing your application, then follow the instructions below.
Create an application in Heroku using the below command in the terminal.
$ heroku create
Push your code to Heroku on the master branch.
$ git push heroku master
Note: Check the default branch name before deployment. If it uses master, use git push heroku master. Otherwise, use git push heroku main.
Migrate the database of your application by running:
$ heroku run rails db:migrate
To seed your database with data, run:
$ heroku run rails db:seed
Get the URL of your application and visit in the browser.
$ heroku apps:info
The deployment of the source code to Heroku is done. Now you can instruct to execute a process type to Heroku. Heroku implements this process by running the associated command in a dyno. [Dyno is the basic unit of composition, a container on the Heroku server.]
Ensure that you have a dyno running the web process type with the command:
$ heroku ps:scale web=1
You can check the state of the app's dynos. All the running dynos of your application can be listed with the heroku ps command.
Using heroku open, we can open the deployed application.
If the application is not functioning correctly or you run into any problems, you must check the logs. With the help of heroku logging commands, you can get information about your application.
By running the command with the --tail flag, you can also get the entire stream of logs:
$ heroku logs --tail
Check your logs if you push up your application and it crashes (heroku ps shows state crashed) to find out exactly what went wrong while pushing up the application. Here are some common issues.
Check your Bundler groups if you’re missing a gem while deploying. Heroku builds your application without the development or test groups, and if your app depends on a gem from one of these groups to run, you should move it out of the group. Therefore before deploying Ruby on Rails app to Heroku, test if it works locally, then push it to Heroku.
We hope you found our comprehensive guide useful and would try to deploy ruby on rails application on Heroku. In case of any queries, feel free to reach out to us. We will be glad to assist you in your technical endeavors. You can find relevant Ruby on Rails tutorials if that interests you. Do share this blog on social media and with your friends, and comment if you have any suggestions. Happy to help!
Original article source at: https://www.bacancytechnology.com/
1669094357
In this tutorial, I tackled two major goals:
I have been using the free tier of Heroku to serve up demo apps and create tutorial sandboxes. It's a great service, easy to use and free, but it does come with a lengthy lag time on initial page load (about 7 seconds). That's a looooong time by anyone's standards. With a 7 second load time, according to akamai.com and kissmetrics, more than 25% of users will abandon your page well before your first div even shows up. Rather than simply upgrading to the paid tier of Heroku, I wanted to explore my options and learn some useful skills in the process.
What's more, I also have a hosted blog on Ghost. It's an excellent platform, but it's a bit pricey. Fortunately, they offer their software open source and provide a great tutorial on getting it up and running with Node and MySQL. You simply need somewhere to host it.
By parting ways with my hosted blog and serving up several resources from one server, I can provide a better UX for my personal apps and save a few bucks at the same time. This post organizes some of the best tutorials on the web to get this done quickly and securely.
This requires several different technologies working together to accomplish the goal:
Tech | Purpose |
---|---|
EC2 | provide cheap, reliable cloud computing power |
Ubuntu | the operating system that handles running our programs |
Docker | an isolation layer to provide a consistent execution environment |
Nginx | handle requests in a robust and secure way |
Certbot | serve up SSL/HTTPS secured web applications, and in turn, improve SEO (search engine optimization) |
Ghost | provide a simple blog with GUI and persistence |
React | allow for fast, composable web applications |
After completing this tutorial, you will be able to:
Current Hosted Solutions (No Lag Time)
Resource | Service | Price / Month | Info |
---|---|---|---|
Blog | Ghost Pro | $19 | https://ghost.org/pricing |
Personal Apps | Heroku Hobby | $7/app | https://www.heroku.com/pricing |
Self Hosted Options
Resource | Service | Price / Month | Info |
---|---|---|---|
Blog and Apps | AWS EC2 T2 Micro (1GB Memory) | ~$10 | https://aws.amazon.com/ec2/pricing/on-demand |
Blog and Apps | Linode (1GB Memory) | $5 | https://www.linode.com/pricing/ |
Blog and Apps | Digital Ocean (1GB Memory) | $5 | https://www.digitalocean.com/pricing |
So with a hosted solution, for one blog and one app, I would be paying $26 per month, and that would go up $7/month with each new app. Per year, that's $312 plus $84 per additional app. With a little bit of leg work outlined in this post, I am hosting multiple apps and a blog for less than $10/month.
I decided to go with the AWS solution. While it is more expensive, it is a super popular enterprise technology that I want to become more familiar with.
A BIG THANKS to all the folks who authored any of the referenced material. Much of this post consists of links and snippets of resources that proved to work well, with the slight modifications needed along the way to suit my needs.
Thank you, as well, for reading. Let's get to it!
Here is how to create a new EC2 instance.
Resource: https://www.nginx.com/blog/setting-up-nginx
All you really need is the above tutorial to be on your way with setting up an EC2 instance and installing Nginx. I stopped after the EC2 creation since Nginx gets installed during the Ghost blog platform setup.
Resource: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
Further down the road, you are going to point your DNS (domain name system) at your EC2 instance's public IP address. That means you don't want it to change for any reason (for example, stopping and starting the instance). There are two ways to accomplish this:
Both options provide a free static IP address. In this tutorial, I went with the Elastic IP to accomplish this goal as it was really straightforward to add to my server after having already set it up.
Follow the steps in the above resource to create an elastic IP address and associate it with your EC2 instance.
Resource: https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-16-04
I followed this tutorial to the 'T'...worked like a charm. You'll set up your own super user with its own SSH key and create a firewall restricting incoming traffic to only allow SSH.
In a minute you'll open up both HTTP and HTTPS for requests.
I use Name.com for my DNS hosting because they have a decent UI and are local to Denver (where I reside). I already own petej.org
and have been pointing it to a github pages hosted static site. I decided to set up a sub-domain for the blog -- blog.petej.org -- using A records to point to my EC2 instance's public IP address. I created two A records, one to handle the www
prefix and another to handle the bare URL:
Now via the command line, use the dig
utility to check to see if the new A record is working. This can be done from your local machine or the EC2 instance:
$ dig A blog.petej.org
; <<>> DiG 9.9.7-P3 <<>> A blog.petej.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44050
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;blog.petej.org. IN A
;; ANSWER SECTION:
blog.petej.org. 300 IN A 35.153.44.46
;; Query time: 76 msec
;; SERVER: 75.75.75.75#53(75.75.75.75)
;; WHEN: Sat Jan 27 10:13:50 MST 2018
;; MSG SIZE rcvd: 59
Note: The A records take effect nearly instantaneously, but can take up to an hour to resolve any caching from a previous use of this URL. So if you already had your domain name set up and working, this may take a little while.
Nice: domain --> √. Now you need to get your EC2 instance serving up some content!
Resource: https://docs.ghost.org/install/ubuntu/
Another great tutorial. I followed it every step of the way and it was golden. There are some steps that we have already covered above, such as the best practices of setting up an Ubuntu instance, so you can skip those. Be sure to start from the Update Packages section (under Server Setup).
Note: Follow this setup exactly in order. My first time around I neglected to set a user for the MySQL database and ended up having to remove Ghost from the machine, reinstall, and start from the beginning.
After stepping through the Ghost install process, you should now have a blog up and running at your domain name - check it out in the browser!
What have you accomplished?
You are now going to:
Onward...
Install git on the EC2 instance:
$ sudo apt-get install git
Create a new SSH key specifically for GitHub access: https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
Because you set up a user for the Ubuntu server earlier, the /root directory and your ~ directory (the user's home directory) are different. To account for that, on the ssh-add step do this instead:
cp /root/.ssh/id_rsa ~/.ssh/id_rsa
cd ~/.ssh
ssh-add
$ sudo cat /root/.ssh/id_rsa.pub
Copy the output (this is the public key -- never share the private id_rsa) and add it to GitHub as a new SSH key, as detailed in the link below.
Start with step 2 --> https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account
You are all set up to git. Clone and then push a commit to a repo to make sure everything is wired up correctly.
Resource: https://medium.com/ai2-blog/dockerizing-a-react-application-3563688a2378
Once you have your React app running locally with Docker, push the image up to Docker Hub:
You will need a Docker Hub account --> https://hub.docker.com
$ docker login
Username:
Password:
$ docker tag <image-name> <username>/<image-name>:<tag-name>
$ docker push <username>/<image-name>
This will take a while. About 5 min. Coffee break...
And we're back. Go ahead and log in to Docker Hub and make sure that your image has been uploaded.
Now back to your EC2 instance. SSH into it.
Install docker:
$ sudo apt install docker.io
Pull down the Docker image locally that you recently pushed up:
$ sudo docker pull <username>/<image-name>
Get the image id and use it to fire up the app:
$ sudo docker images
# Copy the image ID
$ sudo docker run -d -it -p 5000:5000 <image-id>
Now that you have the React app running, let's expose it to the world by setting up the Nginx config.
Resource: https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-16-04
Note: Instead of using /etc/nginx/sites-available/default like the tutorial suggests, I made one specific for the URL (better practice and more flexible going forward) --> circle-grid.petej.org.conf file, leaving the default file completely alone.
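The site-specific config file itself isn't shown in the post; a minimal sketch of what /etc/nginx/sites-available/circle-grid.petej.org.conf might contain, assuming the Dockerized React app from above is listening on port 5000 (server name and port are taken from this setup; adapt them to yours):

```nginx
server {
    listen 80;
    server_name circle-grid.petej.org;

    location / {
        # Forward requests to the app container published on port 5000
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```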
We also need to set up a symlink:
$ sudo ln -s /etc/nginx/sites-available/circle-grid.petej.org.conf /etc/nginx/sites-enabled/
Note: Why the symlink? As you can see if you take a look in /etc/nginx/nginx.conf, only the files in /etc/nginx/sites-enabled are taken into account. The symlink takes care of this for us by exposing the file from sites-available inside sites-enabled, making it discoverable by Nginx. If you've used Apache before, you will be familiar with this pattern. You can also remove a symlink just like you would remove a file:
rm ./path/to/symlink
More about 'symlinks': http://manpages.ubuntu.com/manpages/xenial/en/man7/symlink.7.html
Now to be sure that Certbot configured a cron job to auto renew your certificates run this command:
$ ls /etc/cron.d/
If there is a certbot file in there, you are good to go.
If not, follow these steps:
Test the renewal process manually:
$ sudo certbot renew --dry-run
If that is successful, then:
$ nano /etc/cron.d/certbot
Add this line to the file:
0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew
Save it, all done.
You have now configured a task to run every 12 hours that will upgrade any certs that are within 30 days of expiration.
You should now be able to:
I hope this was a helpful collection of links and tutorials to get you off the ground with a personal app server. Feel free to contact me (pete dot topleft at gmail dot com) with any questions or comments.
Thanks for reading.
Original article source at: https://testdriven.io/
We use Heroku to host the TestDriven.io learning platform so that we can focus on application development rather than configuring web servers, installing Linux packages, setting up load balancers, and everything else that goes along with infrastructure management on a traditional server.
This article aims to simplify the process of deploying, maintaining, and scaling a production-grade Django app on Heroku.
We'll also review some tips and tricks for simplifying the deployment process. At the end, you'll find a production checklist for deploying a new app to production.
Why Heroku? Like Django, Heroku embraces the "batteries included" philosophy. It's an opinionated environment, but it's also an environment that you don't have to manage -- so you can focus on application development rather than the environment supporting it.
If you use your own infrastructure or an Infrastructure as a Service (IaaS) solution -- like DigitalOcean Droplets, Amazon EC2, Google Compute Engine, to name a few -- you must either hire a sys admin/devops engineer or take on that role yourself. The former costs money while the latter slows down your velocity. Heroku will probably end up costing you more in hosting than an IaaS solution, but you will save money since you don't need to hire someone to administer the infrastructure and you can move faster on the application, which is what matters most at the end of the day.
Tips:
Tips:
The Heroku runtime is both stateless and immutable, which helps enable continuous delivery. On each application deploy, a new virtual machine is constructed, configured, and moved into production.
Because of this, you do not need to worry about:
Heroku works with a number of Continuous Integration (CI) services, like Circle and Travis, and they also offer their own CI solution -- Heroku CI.
Tips:
- Run the Django deployment checklist (manage.py check --deploy) in your production CI build.
- Tag each release, e.g. git tag -a "$ENVIRONMENT/${VERSION}".

Tips:
Tips:
For staging, use a different Heroku app. Make sure to turn maintenance mode on when it's not in use so that Google's crawlers don't inadvertently come across it.
Write tests. Tests are a safeguard, so you don't accidentally change the functionality of your application. It's much better to catch a bug locally from your test suite than by a customer in production.
Tips:
Ignore the traditional testing pyramid. Spend half your time writing Django unit tests (with both pytest and Hypothesis). Spend the other half writing browser-based integration and end-to-end tests with Cypress. Compared to Selenium, Cypress tests are much easier to write and maintain. We recommend incorporating Cypress into your everyday TDD workflow. Review Modern Front-End Testing with Cypress for more info on this.
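As a tiny illustration of the unit-test half, plain assert functions work with pytest out of the box (slugify here is a hypothetical stand-in for your own application logic):

```python
# A minimal pytest-style unit test. slugify() is a stand-in for any
# small, pure piece of application logic worth covering.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Modern   Front-End  Testing ") == "modern-front-end-testing"

test_slugify()  # pytest would discover and run this automatically
```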
Monitoring and logging are a crucial part of your app's reliability, making it easier to:
Your logs should always have a timestamp and a log level. They should also be human readable and easy to parse.
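In Python, for instance, a logging formatter gives you both properties (a minimal sketch; the exact format string is an illustration, not a prescribed config):

```python
import logging
import sys

# Every record gets a timestamp, a level, and the logger name, which
# keeps log lines both human readable and easy to parse.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s")
)
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("payment processed")
```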
On the monitoring side of things, set up alerts to help reduce and preempt downtimes. Set up notifications so you can fix issues and address bottlenecks before your customers start to complain.
As you have seen, Heroku provides a number of services via the add-on system. This system is one of the powerful tools that you get out of the box from Heroku. You have hundreds of services at your disposal that take minutes to configure, many of which are useful for logging, monitoring, and error tracking.
Tips:
When it comes to security, people are generally the weakest link. Your development team should be aware of some of the more common security vulnerabilities. Security Training for Engineers and Heroku's Security guide are great places to start along with the following OWASP cheat sheets:
Tips:
Hopefully this article provided some useful information that will help simplify the process of deploying and maintaining a production Django app on Heroku.
Remember: Web development is complex because of all the moving pieces. You can counter that by:
Curious about what the full architecture looks like with Heroku?
Once you have Celery and Gunicorn configured, you can focus the majority, if not all, of your time on developing your application -- everything else is an add-on.
Recommended resources:
Deploying a new Django app to Heroku? Review the following checklist for help. Make sure you document the deployment workflow throughout the entire process.
Frontend:
Django:
- Set DEBUG to False.
- Set the appropriate production security settings to True.

Continuous Integration:
- Run python manage.py check --deploy against the production settings.

Heroku:

Frontend:
Cheers!
Original article source at: https://testdriven.io/
Assume that you're a data scientist. Following a typical machine learning workflow, you'll define the problem statement along with objectives based on business needs. You'll then start finding and cleaning data followed by analyzing the collected data and building and training your model. Once trained, you'll evaluate the results. This process of finding and cleansing data, training the model, and evaluating the results will continue until you're satisfied with the results. You'll then refactor the code and package it up in a module, along with its dependencies, in preparation for testing and deployment.
What happens next? Do you hand the model off to another team to test and deploy the model? Or do you have to handle this yourself? Either way, it's important to understand what happens when a model gets deployed. You may have to deploy the model yourself one day. Or maybe you have a side project that you'd just like to stand up in production and make available to end users.
In this tutorial, we'll look at how to deploy a machine learning model, for predicting stock prices, into production on Heroku as a RESTful API using FastAPI.
By the end of this post you should be able to:
FastAPI is a modern, high-performance, batteries-included Python web framework that's perfect for building RESTful APIs. It can handle both synchronous and asynchronous requests and has built-in support for data validation, JSON serialization, authentication and authorization, and OpenAPI.
Highlights:
Review the Features guide from the official docs for more info. It's also encouraged to review Alternatives, Inspiration, and Comparisons, which details how FastAPI compares to other web frameworks and technologies, for context.
Create a project folder called "fastapi-ml":
$ mkdir fastapi-ml
$ cd fastapi-ml
Then, create and activate a new virtual environment:
$ python3.8 -m venv env
$ source env/bin/activate
(env)$
Add two new files: requirements.txt and main.py.
Unlike Django or Flask, FastAPI does not have a built-in development server. So, we'll use Uvicorn, an ASGI server, to serve up FastAPI.
New to ASGI? Read through the excellent Introduction to ASGI: Emergence of an Async Python Web Ecosystem blog post.
Add FastAPI and Uvicorn to the requirements file:
fastapi==0.68.0
uvicorn==0.14.0
Install the dependencies:
(env)$ pip install -r requirements.txt
Then, within main.py, create a new instance of FastAPI and set up a quick test route:
from fastapi import FastAPI
app = FastAPI()
@app.get("/ping")
def pong():
return {"ping": "pong!"}
Start the app:
(env)$ uvicorn main:app --reload --workers 1 --host 0.0.0.0 --port 8008
So, we defined the following settings for Uvicorn:
- --reload enables auto-reload so the server will restart after changes are made to the code base.
- --workers 1 provides a single worker process.
- --host 0.0.0.0 defines the address to host the server on.
- --port 8008 defines the port to host the server on.
- main:app tells Uvicorn where it can find the FastAPI ASGI application -- e.g., "within the 'main.py' file, you'll find the ASGI app, app = FastAPI()".
Navigate to http://localhost:8008/ping. You should see:
{
"ping": "pong!"
}
The model that we'll deploy uses Prophet to predict stock market prices.
Add the following functions to train the model and generate a prediction to a new file called model.py:
import datetime
from pathlib import Path
import joblib
import pandas as pd
import yfinance as yf
from fbprophet import Prophet
BASE_DIR = Path(__file__).resolve(strict=True).parent
TODAY = datetime.date.today()
def train(ticker="MSFT"):
# data = yf.download("^GSPC", "2008-01-01", TODAY.strftime("%Y-%m-%d"))
data = yf.download(ticker, "2020-01-01", TODAY.strftime("%Y-%m-%d"))
data.head()
data["Adj Close"].plot(title=f"{ticker} Stock Adjusted Closing Price")
df_forecast = data.copy()
df_forecast.reset_index(inplace=True)
df_forecast["ds"] = df_forecast["Date"]
df_forecast["y"] = df_forecast["Adj Close"]
df_forecast = df_forecast[["ds", "y"]]
df_forecast
model = Prophet()
model.fit(df_forecast)
joblib.dump(model, Path(BASE_DIR).joinpath(f"{ticker}.joblib"))
def predict(ticker="MSFT", days=7):
model_file = Path(BASE_DIR).joinpath(f"{ticker}.joblib")
if not model_file.exists():
return False
model = joblib.load(model_file)
future = TODAY + datetime.timedelta(days=days)
dates = pd.date_range(start="2020-01-01", end=future.strftime("%m/%d/%Y"),)
df = pd.DataFrame({"ds": dates})
forecast = model.predict(df)
model.plot(forecast).savefig(f"{ticker}_plot.png")
model.plot_components(forecast).savefig(f"{ticker}_plot_components.png")
return forecast.tail(days).to_dict("records")
def convert(prediction_list):
output = {}
for data in prediction_list:
date = data["ds"].strftime("%m/%d/%Y")
output[date] = data["trend"]
return output
Here, we defined three functions:
- train downloads historical stock data with yfinance, creates a new Prophet model, fits the model to the stock data, and then serializes and saves the model as a Joblib file.
- predict loads and deserializes the saved model, generates a new forecast, creates images of the forecast plot and forecast components, and returns the days included in the forecast as a list of dicts.
- convert takes the list of dicts from predict and outputs a dict of dates and forecasted values (i.e., {"07/02/2020": 200}).

This model was developed by Andrew Clark.
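Since convert only reshapes a list of dicts, you can sanity-check it on its own, without a trained model (the rows below use made-up values):

```python
import datetime

def convert(prediction_list):
    # Same logic as the article's convert(): map each forecast row's
    # "ds" date to its "trend" value.
    output = {}
    for data in prediction_list:
        date = data["ds"].strftime("%m/%d/%Y")
        output[date] = data["trend"]
    return output

rows = [
    {"ds": datetime.datetime(2021, 8, 12), "trend": 282.99},
    {"ds": datetime.datetime(2021, 8, 13), "trend": 283.31},
]
print(convert(rows))  # {'08/12/2021': 282.99, '08/13/2021': 283.31}
```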
Update the requirements file:
# pystan must be installed before prophet
# you may need to pip install it on its own
# before installing the remaining requirements
# pip install pystan==2.19.1.1
pystan==2.19.1.1
fastapi==0.68.0
uvicorn==0.14.0
fbprophet==0.7.1
joblib==1.0.1
pandas==1.3.1
plotly==5.1.0
yfinance==0.1.63
Install the new dependencies:
(env)$ pip install -r requirements.txt
If you have problems installing the dependencies on your machine, you may want to use Docker instead. For instructions on how to run the application with Docker, review the README on the fastapi-ml repo on GitHub.
To test, open a new Python shell and run the following commands:
(env)$ python
>>> from model import train, predict, convert
>>> train()
>>> prediction_list = predict()
>>> convert(prediction_list)
You should see something similar to:
{
'08/12/2021': 282.99012951691776,
'08/13/2021': 283.31354121099446,
'08/14/2021': 283.63695290507127,
'08/15/2021': 283.960364599148,
'08/16/2021': 284.2837762932248,
'08/17/2021': 284.6071879873016,
'08/18/2021': 284.93059968137834
}
These are the predicted prices for the next seven days for Microsoft Corporation (MSFT). Take note of the saved MSFT.joblib model along with the two images:
Go ahead and train a few more models to work with. For example:
>>> train("GOOG")
>>> train("AAPL")
>>> train("^GSPC")
Exit the shell.
With that, let's wire up our API.
Add a /predict endpoint by updating main.py like so:
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from model import convert, predict
app = FastAPI()
# pydantic models
class StockIn(BaseModel):
ticker: str
class StockOut(StockIn):
forecast: dict
# routes
@app.get("/ping")
async def pong():
return {"ping": "pong!"}
@app.post("/predict", response_model=StockOut, status_code=200)
def get_prediction(payload: StockIn):
ticker = payload.ticker
prediction_list = predict(ticker)
if not prediction_list:
raise HTTPException(status_code=400, detail="Model not found.")
response_object = {"ticker": ticker, "forecast": convert(prediction_list)}
return response_object
So, in the new get_prediction view function, we passed in a ticker to our model's predict function and then used the convert function to create the output for the response object. We also took advantage of a pydantic schema to convert the JSON payload to a StockIn object schema. This provides automatic type validation. The response object uses the StockOut schema object to convert the Python dict -- {"ticker": ticker, "forecast": convert(prediction_list)} -- to JSON, which, again, is validated.
For the web app, let's just output the forecast in JSON. Comment out the following lines in predict:
# model.plot(forecast).savefig(f"{ticker}_plot.png")
# model.plot_components(forecast).savefig(f"{ticker}_plot_components.png")
Full function:
def predict(ticker="MSFT", days=7):
model_file = Path(BASE_DIR).joinpath(f"{ticker}.joblib")
if not model_file.exists():
return False
model = joblib.load(model_file)
future = TODAY + datetime.timedelta(days=days)
dates = pd.date_range(start="2020-01-01", end=future.strftime("%m/%d/%Y"),)
df = pd.DataFrame({"ds": dates})
forecast = model.predict(df)
# model.plot(forecast).savefig(f"{ticker}_plot.png")
# model.plot_components(forecast).savefig(f"{ticker}_plot_components.png")
return forecast.tail(days).to_dict("records")
Run the app:
(env)$ uvicorn main:app --reload --workers 1 --host 0.0.0.0 --port 8008
Then, in a new terminal window, use curl to test the endpoint:
$ curl \
--header "Content-Type: application/json" \
--request POST \
--data '{"ticker":"MSFT"}' \
http://localhost:8008/predict
You should see something like:
{
"ticker":"MSFT",
"forecast":{
"08/12/2021": 282.99012951691776,
"08/13/2021": 283.31354121099446,
"08/14/2021": 283.63695290507127,
"08/15/2021": 283.960364599148,
"08/16/2021": 284.2837762932248,
"08/17/2021": 284.6071879873016,
"08/18/2021": 284.93059968137834
}
}
What happens if the ticker model doesn't exist?
$ curl \
--header "Content-Type: application/json" \
--request POST \
--data '{"ticker":"NONE"}' \
http://localhost:8008/predict
{
"detail": "Model not found."
}
Heroku is a Platform as a Service (PaaS) that provides hosting for web applications. They offer abstracted environments where you don't have to manage the underlying infrastructure, making it easy to manage, deploy, and scale web applications. With just a few clicks you can have your app up and running, ready to receive traffic.
Sign up for a Heroku account (if you don’t already have one), and then install the Heroku CLI (if you haven't already done so).
Next, log in to your Heroku account via the CLI:
$ heroku login
You'll be prompted to press any key to open your web browser to complete login.
Create a new app on Heroku:
$ heroku create
You should see something similar to:
Creating app... done, ⬢ tranquil-cliffs-74287
https://tranquil-cliffs-74287.herokuapp.com/ | https://git.heroku.com/tranquil-cliffs-74287.git
Next, we'll use Heroku's Container Registry to deploy the application with Docker. Put simply, with the Container Registry, you can deploy pre-built Docker images to Heroku.
Why Docker? We want to minimize the differences between the production and development environments. This is especially important with this project, since it relies on a number of data science dependencies that have very specific system requirements.
Log in to the Heroku Container Registry, to indicate to Heroku that we want to use the Container Runtime:
$ heroku container:login
Add a Dockerfile file to the project root:
FROM python:3.8
WORKDIR /app
RUN apt-get -y update && apt-get install -y \
python3-dev \
apt-utils \
python-dev \
build-essential \
&& rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade setuptools
RUN pip install \
cython==0.29.24 \
numpy==1.21.1 \
pandas==1.3.1 \
pystan==2.19.1.1
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD gunicorn -w 3 -k uvicorn.workers.UvicornWorker main:app --bind 0.0.0.0:$PORT
Here, after pulling the Python 3.8 base image, we installed the appropriate dependencies, copied over the app, and ran Gunicorn, a production-grade WSGI application server, to manage Uvicorn with three worker processes. This config takes advantage of both concurrency (via Uvicorn) and parallelism (via Gunicorn workers).
Add Gunicorn to the requirements.txt file:
# pystan must be installed before prophet
# you may need to pip install it on its own
# before installing the remaining requirements
# pip install pystan==2.19.1.1
pystan==2.19.1.1
fastapi==0.68.0
gunicorn==20.1.0
uvicorn==0.14.0
fbprophet==0.7.1
joblib==1.0.1
pandas==1.3.1
plotly==5.1.0
yfinance==0.1.63
Add a .dockerignore file as well:
__pycache__
env
Build the Docker image and tag it with the following format:
registry.heroku.com/<app>/<process-type>
Make sure to replace <app> with the name of the Heroku app that you just created and <process-type> with web, since this will be for a web process.
For example:
$ docker build -t registry.heroku.com/tranquil-cliffs-74287/web .
It will take several minutes to install fbprophet. Be patient. You should see it hang here for some time:
Running setup.py install for fbprophet: started
Once done, you can run the image like so:
$ docker run --name fastapi-ml -e PORT=8008 -p 8008:8008 -d registry.heroku.com/tranquil-cliffs-74287/web:latest
Ensure http://localhost:8008/ping works as expected. Once done, stop and remove the container:
$ docker stop fastapi-ml
$ docker rm fastapi-ml
Push the image to the registry:
$ docker push registry.heroku.com/tranquil-cliffs-74287/web
Release the image:
$ heroku container:release -a tranquil-cliffs-74287 web
This will run the container. You should now be able to view your app. Make sure to test the /predict endpoint:
$ curl \
--header "Content-Type: application/json" \
--request POST \
--data '{"ticker":"MSFT"}' \
https://<YOUR_HEROKU_APP_NAME>.herokuapp.com/predict
Finally, check out the interactive API documentation that FastAPI automatically generates at https://<YOUR_HEROKU_APP_NAME>.herokuapp.com/docs:
This tutorial looked at how to deploy a machine learning model, for predicting stock prices, into production on Heroku as a RESTful API using FastAPI.
What's next?
Check out the following resources for help with the above pieces:
If you're deploying a non-trivial model, I recommend adding model versioning and support for counterfactual analysis along with model monitoring (model and feature drift, bias detection). Check out the Monitaur platform for help in these areas.
You can find the code in the fastapi-ml repo.
Original article source at: https://testdriven.io/
This article looks at how to deploy a Django app to Heroku with Docker via the Heroku Container Runtime.
By the end of this tutorial, you will be able to:
Along with the traditional Git plus slug compiler deployments (git push heroku master), Heroku also supports Docker-based deployments, with the Heroku Container Runtime.
A container runtime is a program that manages and runs containers. If you'd like to dive deeper into container runtimes, check out A history of low-level Linux container runtimes.
Docker-based deployments have many advantages over the traditional approach:
In general, Docker-based deployments give you greater flexibility and control over the deployment environment. You can deploy the apps you want within the environment that you want. That said, you're now responsible for security updates. With the traditional Git-based deployments, Heroku is responsible for this. They apply relevant security updates to their Stacks and migrate your app to the new Stacks as necessary. Keep this in mind.
There are currently two ways to deploy apps with Docker to Heroku:
The major difference between these two is that with the latter approach -- e.g., via the Build Manifest -- you have access to the Pipelines, Review, and Release features. So, if you're converting an app from a Git-based deployment to Docker and are using any of those features then you should use the Build Manifest approach.
Rest assured, we'll look at both approaches in this article.
In either case you will still have access to the Heroku CLI, all of the powerful addons, and the dashboard. All of these features work with the Container Runtime, in other words.
Deployment Type | Deployment Mechanism | Security Updates (who handles) | Access to Pipelines, Review, Release | Access to CLI, Addons, and Dashboard | Slug size limits |
---|---|---|---|---|---|
Git + Slug Compiler | Git Push | Heroku | Yes | Yes | Yes |
Docker + Container Runtime | Docker Push | You | No | Yes | No |
Docker + Build Manifest | Git Push | You | Yes | Yes | No |
Keep in mind Docker-based deployments are limited to the same constraints that Git-based deployments are. For example, persistent volumes are not supported since the file system is ephemeral and web processes only support HTTP(S) requests. For more on this, review Dockerfile commands and runtime.
Docker | Heroku |
---|---|
Dockerfile | BuildPack |
Image | Slug |
Container | Dyno |
Make a project directory, create and activate a new virtual environment, and install Django:
$ mkdir django-heroku-docker
$ cd django-heroku-docker
$ python3.10 -m venv env
$ source env/bin/activate
(env)$ pip install django==3.2.9
Feel free to swap out virtualenv and Pip for Poetry or Pipenv. For more, review Modern Python Environments.
Next, create a new Django project, apply the migrations, and run the server:
(env)$ django-admin startproject hello_django .
(env)$ python manage.py migrate
(env)$ python manage.py runserver
Navigate to http://localhost:8000/ to view the Django welcome screen. Kill the server and exit from the virtual environment once done.
Add a Dockerfile to the project root:
# pull official base image
FROM python:3.10-alpine
# set work directory
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DEBUG 0
# install psycopg2
RUN apk update \
&& apk add --virtual build-essential gcc python3-dev musl-dev \
&& apk add postgresql-dev \
&& pip install psycopg2
# install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . .
# add and run as non-root user
RUN adduser -D myuser
USER myuser
# run gunicorn
CMD gunicorn hello_django.wsgi:application --bind 0.0.0.0:$PORT
Here, we started with an Alpine-based Docker image for Python 3.10. We then set a working directory along with two environment variables:
- PYTHONDONTWRITEBYTECODE: Prevents Python from writing pyc files to disc
- PYTHONUNBUFFERED: Prevents Python from buffering stdout and stderr

Next, we installed system-level dependencies and Python packages, copied over the project files, created and switched to a non-root user (which is recommended by Heroku), and used CMD to run Gunicorn when a container spins up at runtime. Take note of the $PORT
variable. Essentially, any web server that runs on the Container Runtime must listen for HTTP traffic at the $PORT
environment variable, which is set by Heroku at runtime.
Create a requirements.txt file:
Django==3.2.9
gunicorn==20.1.0
Then add a .dockerignore file:
__pycache__
*.pyc
env/
db.sqlite3
Update the SECRET_KEY, DEBUG, and ALLOWED_HOSTS variables in settings.py:
SECRET_KEY = os.environ.get('SECRET_KEY', default='foo')
DEBUG = int(os.environ.get('DEBUG', default=0))
ALLOWED_HOSTS = ['localhost', '127.0.0.1']
Don't forget the import:
import os
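One thing worth remembering is that environment variables are always strings, which is why DEBUG is wrapped in int(). A quick sketch of that behavior (read_debug is just an illustrative helper, not part of the project):

```python
def read_debug(environ):
    # Mirrors DEBUG = int(os.environ.get('DEBUG', default=0)):
    # env vars are strings, so "0"/"1" must be cast before use.
    return bool(int(environ.get("DEBUG", 0)))

print(read_debug({}))               # False -> debug off by default
print(read_debug({"DEBUG": "1"}))  # True
```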
To test locally, build the image and run the container, making sure to pass in the appropriate environment variables:
$ docker build -t web:latest .
$ docker run -d --name django-heroku -e "PORT=8765" -e "DEBUG=1" -p 8007:8765 web:latest
Ensure the app is running at http://localhost:8007/ in your browser. Stop and then remove the running container once done:
$ docker stop django-heroku
$ docker rm django-heroku
Add a .gitignore:
__pycache__
*.pyc
env/
db.sqlite3
Next, let's create a quick Django view to easily test the app when debug mode is off.
Add a views.py file to the "hello_django" directory:
from django.http import JsonResponse
def ping(request):
data = {'ping': 'pong!'}
return JsonResponse(data)
Next, update urls.py:
from django.contrib import admin
from django.urls import path
from .views import ping
urlpatterns = [
path('admin/', admin.site.urls),
path('ping/', ping, name="ping"),
]
Test this again with debug mode off:
$ docker build -t web:latest .
$ docker run -d --name django-heroku -e "PORT=8765" -e "DEBUG=0" -p 8007:8765 web:latest
Verify http://localhost:8007/ping/ works as expected:
{
"ping": "pong!"
}
Stop then remove the running container once done:
$ docker stop django-heroku
$ docker rm django-heroku
If you'd like to use WhiteNoise to manage your static assets, first add the package to the requirements.txt file:
Django==3.2.9
gunicorn==20.1.0
whitenoise==5.3.0
Update the middleware in settings.py like so:
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware', # new
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
Then configure the handling of your static files with STATIC_ROOT:
STATIC_ROOT = BASE_DIR / 'staticfiles'
Finally, add compression and caching support:
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
Add the collectstatic command to the Dockerfile:
# pull official base image
FROM python:3.10-alpine
# set work directory
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DEBUG 0
# install psycopg2
RUN apk update \
&& apk add --virtual build-essential gcc python3-dev musl-dev \
&& apk add postgresql-dev \
&& pip install psycopg2
# install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . .
# collect static files
RUN python manage.py collectstatic --noinput
# add and run as non-root user
RUN adduser -D myuser
USER myuser
# run gunicorn
CMD gunicorn hello_django.wsgi:application --bind 0.0.0.0:$PORT
To test, build the new image and spin up a new container:
$ docker build -t web:latest .
$ docker run -d --name django-heroku -e "PORT=8765" -e "DEBUG=1" -p 8007:8765 web:latest
You should be able to view the static files when you run:
$ docker exec django-heroku ls /app/staticfiles
$ docker exec django-heroku ls /app/staticfiles/admin
Stop then remove the running container again:
$ docker stop django-heroku
$ docker rm django-heroku
To get Postgres up and running, we'll use the dj_database_url package to generate the proper database configuration dictionary for the Django settings based on a DATABASE_URL environment variable.
Add the dependency to the requirements file:
Django==3.2.9
dj-database-url==0.5.0
gunicorn==20.1.0
whitenoise==5.3.0
Then, make the following changes to the settings to update the database configuration if DATABASE_URL is present:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
DATABASE_URL = os.environ.get('DATABASE_URL')
db_from_env = dj_database_url.config(default=DATABASE_URL, conn_max_age=500, ssl_require=True)
DATABASES['default'].update(db_from_env)
So, if DATABASE_URL is not present, SQLite will still be used.
Add the import to the top as well:
import dj_database_url
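Under the hood, dj_database_url simply parses the URL into the dictionary shape Django expects. A rough stdlib-only sketch of the idea (illustrative, not the library's actual implementation):

```python
from urllib.parse import urlsplit

def parse_database_url(url):
    # Illustrative only: maps postgres://user:pass@host:5432/name
    # onto the keys Django's DATABASES dict expects.
    parts = urlsplit(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,
        "PORT": parts.port,
    }

print(parse_database_url("postgres://user:secret@db.example.com:5432/hello_django"))
```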
We'll test this out in a bit after we spin up a Postgres database on Heroku.
Sign up for a Heroku account (if you don’t already have one), and then install the Heroku CLI (if you haven't already done so).
Create a new app:
$ heroku create
Creating app... done, ⬢ limitless-atoll-51647
https://limitless-atoll-51647.herokuapp.com/ | https://git.heroku.com/limitless-atoll-51647.git
Add the SECRET_KEY environment variable:
$ heroku config:set SECRET_KEY=SOME_SECRET_VALUE -a limitless-atoll-51647
Change SOME_SECRET_VALUE to a randomly generated string that's at least 50 characters.
Add the above Heroku URL to the list of ALLOWED_HOSTS in hello_django/settings.py like so:
ALLOWED_HOSTS = ['localhost', '127.0.0.1', 'limitless-atoll-51647.herokuapp.com']
Make sure to replace limitless-atoll-51647 in each of the above commands with the name of your app.
At this point, we're ready to start deploying Docker images to Heroku. Did you decide which approach you'd like to take?
Unsure? Try them both!
Skip this section if you're using the Build Manifest approach.
Again, with this approach, you can deploy pre-built Docker images to Heroku.
Log in to the Heroku Container Registry to indicate to Heroku that we want to use the Container Runtime:
$ heroku container:login
Re-build the Docker image and tag it with the following format:
registry.heroku.com/<app>/<process-type>
Make sure to replace <app> with the name of the Heroku app that you just created and <process-type> with web, since this will be for a web process.
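To make the naming scheme concrete, here's a tiny (hypothetical) helper that assembles an image name in this format:

```python
def heroku_image_name(app: str, process_type: str = "web") -> str:
    """Build a Heroku Container Registry image name: registry.heroku.com/<app>/<process-type>."""
    return f"registry.heroku.com/{app}/{process_type}"

print(heroku_image_name("limitless-atoll-51647"))
# registry.heroku.com/limitless-atoll-51647/web
```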
For example:
$ docker build -t registry.heroku.com/limitless-atoll-51647/web .
Push the image to the registry:
$ docker push registry.heroku.com/limitless-atoll-51647/web
Release the image:
$ heroku container:release -a limitless-atoll-51647 web
This will run the container. You should be able to view the app at https://APP_NAME.herokuapp.com. It should return a 404.
Try running heroku open -a limitless-atoll-51647 to open the app in your default browser.
Verify https://APP_NAME.herokuapp.com/ping works as well:
{
"ping": "pong!"
}
You should also be able to view the static files:
$ heroku run ls /app/staticfiles -a limitless-atoll-51647
$ heroku run ls /app/staticfiles/admin -a limitless-atoll-51647
Make sure to replace limitless-atoll-51647 in each of the above commands with the name of your app.
Jump down to the "Postgres Test" section once done.
Skip this section if you're using the Container Registry approach.
Again, with the Build Manifest approach, you can have Heroku build and deploy Docker images based on a heroku.yml manifest file.
Set the Stack of your app to container:
$ heroku stack:set container -a limitless-atoll-51647
Add a heroku.yml file to the project root:
build:
docker:
web: Dockerfile
Here, we're just telling Heroku which Dockerfile to use for building the image.
Along with build, you can also define the following stages:
setup is used to define Heroku add-ons and configuration variables to create during app provisioning.
release is used to define tasks that you'd like to execute during a release.
run is used to define which commands to run for the web and worker processes.
Be sure to review the Heroku documentation to learn more about these four stages.
It's worth noting that the gunicorn hello_django.wsgi:application --bind 0.0.0.0:$PORT command could be removed from the Dockerfile and added to the heroku.yml file under the run stage:
build:
  docker:
    web: Dockerfile
run:
  web: gunicorn hello_django.wsgi:application --bind 0.0.0.0:$PORT
Also, be sure to place the 'collectstatic' command inside your Dockerfile. Don't move it to the release stage. For more on this, review this Stack Overflow question.
Next, install the heroku-manifest plugin from the beta CLI channel:
$ heroku update beta
$ heroku plugins:install @heroku-cli/plugin-manifest
With that, initialize a Git repo and create a commit.
Then, add the Heroku remote:
$ heroku git:remote -a limitless-atoll-51647
Push the code up to Heroku to build the image and run the container:
$ git push heroku master
You should be able to view the app at https://APP_NAME.herokuapp.com. It should return a 404.
Try running heroku open -a limitless-atoll-51647 to open the app in your default browser.
Verify https://APP_NAME.herokuapp.com/ping works as well:
{
"ping": "pong!"
}
You should also be able to view the static files:
$ heroku run ls /app/staticfiles -a limitless-atoll-51647
$ heroku run ls /app/staticfiles/admin -a limitless-atoll-51647
Make sure to replace limitless-atoll-51647 in each of the above commands with the name of your app.
Create the database:
$ heroku addons:create heroku-postgresql:hobby-dev -a limitless-atoll-51647
This command automatically sets the DATABASE_URL environment variable for the container.
Once the database is up, run the migrations:
$ heroku run python manage.py makemigrations -a limitless-atoll-51647
$ heroku run python manage.py migrate -a limitless-atoll-51647
Then, jump into psql to view the newly created tables:
$ heroku pg:psql -a limitless-atoll-51647
# \dt
List of relations
Schema | Name | Type | Owner
--------+----------------------------+-------+----------------
public | auth_group | table | siodzhzzcvnwwp
public | auth_group_permissions | table | siodzhzzcvnwwp
public | auth_permission | table | siodzhzzcvnwwp
public | auth_user | table | siodzhzzcvnwwp
public | auth_user_groups | table | siodzhzzcvnwwp
public | auth_user_user_permissions | table | siodzhzzcvnwwp
public | django_admin_log | table | siodzhzzcvnwwp
public | django_content_type | table | siodzhzzcvnwwp
public | django_migrations | table | siodzhzzcvnwwp
public | django_session | table | siodzhzzcvnwwp
(10 rows)
# \q
Again, make sure to replace limitless-atoll-51647 in each of the above commands with the name of your Heroku app.
Sign up for a GitLab account (if necessary), and then create a new project (again, if necessary).
Retrieve your Heroku auth token:
$ heroku auth:token
Then, save the token as a new variable called HEROKU_AUTH_TOKEN within your project's CI/CD settings: Settings > CI / CD > Variables.
Next, we need to add a GitLab CI/CD config file called .gitlab-ci.yml to the project root. The contents of this file will vary based on the approach used.
Skip this section if you're using the Build Manifest approach.
.gitlab-ci.yml:
image: docker:stable
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay2
HEROKU_APP_NAME: <APP_NAME>
HEROKU_REGISTRY_IMAGE: registry.heroku.com/${HEROKU_APP_NAME}/web
stages:
- build_and_deploy
build_and_deploy:
stage: build_and_deploy
script:
- apk add --no-cache curl
- docker login -u _ -p $HEROKU_AUTH_TOKEN registry.heroku.com
- docker pull $HEROKU_REGISTRY_IMAGE || true
- docker build
--cache-from $HEROKU_REGISTRY_IMAGE
--tag $HEROKU_REGISTRY_IMAGE
--file ./Dockerfile
"."
- docker push $HEROKU_REGISTRY_IMAGE
- chmod +x ./release.sh
- ./release.sh
release.sh:
#!/bin/sh
IMAGE_ID=$(docker inspect ${HEROKU_REGISTRY_IMAGE} --format={{.Id}})
PAYLOAD='{"updates": [{"type": "web", "docker_image": "'"$IMAGE_ID"'"}]}'
curl -n -X PATCH https://api.heroku.com/apps/$HEROKU_APP_NAME/formation \
-d "${PAYLOAD}" \
-H "Content-Type: application/json" \
-H "Accept: application/vnd.heroku+json; version=3.docker-releases" \
-H "Authorization: Bearer ${HEROKU_AUTH_TOKEN}"
Here, we defined a single build_and_deploy stage where we install curl, log in to the Heroku Container Registry, pull the previously pushed image (if it exists) to use as a build cache, build and tag the new image, push it up to the registry, and then create a new release via the Heroku API in release.sh.
Make sure to replace <APP_NAME> with your Heroku app's name.
With that, initialize a Git repo, commit, add the GitLab remote, and push your code up to GitLab to trigger a new pipeline. This will run the build_and_deploy stage as a single job. Once complete, a new release should automatically be created on Heroku.
Skip this section if you're using the Container Registry approach.
.gitlab-ci.yml:
variables:
HEROKU_APP_NAME: <APP_NAME>
stages:
- deploy
deploy:
stage: deploy
script:
- apt-get update -qy
- apt-get install -y ruby-dev
- gem install dpl
- dpl --provider=heroku --app=$HEROKU_APP_NAME --api-key=$HEROKU_AUTH_TOKEN
Here, we defined a single deploy stage where we install the Ruby-based dpl deploy tool and use it to deploy the code to Heroku.
Make sure to replace <APP_NAME> with your Heroku app's name.
Commit, add the GitLab remote, and push your code up to GitLab to trigger a new pipeline. This will run the deploy stage as a single job. Once complete, the code should be deployed to Heroku.
Rather than just building the Docker image and creating a release on GitLab CI, let's also run the Django tests, Flake8, Black, and isort.
Again, this will vary depending on the approach you used.
Skip this section if you're using the Build Manifest approach.
Update .gitlab-ci.yml like so:
stages:
- build
- test
- deploy
variables:
IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}
build:
stage: build
image: docker:stable
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay2
script:
- docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
- docker pull $IMAGE:latest || true
- docker build
--cache-from $IMAGE:latest
--tag $IMAGE:latest
--file ./Dockerfile
"."
- docker push $IMAGE:latest
test:
stage: test
image: $IMAGE:latest
services:
- postgres:latest
variables:
POSTGRES_DB: test
POSTGRES_USER: runner
POSTGRES_PASSWORD: ""
DATABASE_URL: postgresql://runner@postgres:5432/test
script:
- python manage.py test
- flake8 hello_django --max-line-length=100
- black hello_django --check
- isort hello_django --check --profile black
deploy:
stage: deploy
image: docker:stable
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay2
HEROKU_APP_NAME: <APP_NAME>
HEROKU_REGISTRY_IMAGE: registry.heroku.com/${HEROKU_APP_NAME}/web
script:
- apk add --no-cache curl
- docker login -u _ -p $HEROKU_AUTH_TOKEN registry.heroku.com
- docker pull $HEROKU_REGISTRY_IMAGE || true
- docker build
--cache-from $HEROKU_REGISTRY_IMAGE
--tag $HEROKU_REGISTRY_IMAGE
--file ./Dockerfile
"."
- docker push $HEROKU_REGISTRY_IMAGE
- chmod +x ./release.sh
- ./release.sh
Make sure to replace <APP_NAME> with your Heroku app's name.
So, we now have three stages: build, test, and deploy.
In the build stage, we build and push the Docker image up to the GitLab Container Registry, pulling the latest image first so it can be used as a build cache.
Then, in the test stage, we configure Postgres, set the DATABASE_URL environment variable, and run the Django tests, Flake8, Black, and isort using the image that was built in the previous stage.
In the deploy stage, we build and tag the Heroku image, push it to the Heroku Container Registry, and create a new release via release.sh.
Add the new dependencies to the requirements file:
# prod
Django==3.2.9
dj-database-url==0.5.0
gunicorn==20.1.0
whitenoise==5.3.0
# dev and test
black==21.11b1
flake8==4.0.1
isort==5.10.1
Before pushing up to GitLab, run the Django tests locally:
$ source env/bin/activate
(env)$ pip install -r requirements.txt
(env)$ python manage.py test
System check identified no issues (0 silenced).
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
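Zero tests ran because we haven't written any yet. In the project itself you'd add a django.test.SimpleTestCase that hits the /ping endpoint; here's a framework-free sketch of the same idea (the ping helper below is a stand-in for the project's actual view, not its real implementation):

```python
import unittest

def ping():
    # Stand-in for the JSON payload returned by the /ping view.
    return {"ping": "pong!"}

class PingTests(unittest.TestCase):
    def test_ping(self):
        self.assertEqual(ping(), {"ping": "pong!"})

# Run with: python -m unittest <module>
```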
Ensure Flake8 passes, and then update the source code based on the Black and isort recommendations:
(env)$ flake8 hello_django --max-line-length=100
(env)$ black hello_django
(env)$ isort hello_django --profile black
Commit and push your code yet again. Ensure all stages pass.
Skip this section if you're using the Container Registry approach.
Update .gitlab-ci.yml like so:
stages:
- build
- test
- deploy
variables:
IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}
build:
stage: build
image: docker:stable
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay2
script:
- docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
- docker pull $IMAGE:latest || true
- docker build
--cache-from $IMAGE:latest
--tag $IMAGE:latest
--file ./Dockerfile
"."
- docker push $IMAGE:latest
test:
stage: test
image: $IMAGE:latest
services:
- postgres:latest
variables:
POSTGRES_DB: test
POSTGRES_USER: runner
POSTGRES_PASSWORD: ""
DATABASE_URL: postgresql://runner@postgres:5432/test
script:
- python manage.py test
- flake8 hello_django --max-line-length=100
- black hello_django --check
- isort hello_django --check --profile black
deploy:
stage: deploy
variables:
HEROKU_APP_NAME: <APP_NAME>
script:
- apt-get update -qy
- apt-get install -y ruby-dev
- gem install dpl
- dpl --provider=heroku --app=$HEROKU_APP_NAME --api-key=$HEROKU_AUTH_TOKEN
Make sure to replace <APP_NAME> with your Heroku app's name.
So, we now have three stages: build, test, and deploy.
In the build stage, we build and push the Docker image up to the GitLab Container Registry, pulling the latest image first so it can be used as a build cache.
Then, in the test stage, we configure Postgres, set the DATABASE_URL environment variable, and run the Django tests, Flake8, Black, and isort using the image that was built in the previous stage.
In the deploy stage, we install dpl and use it to deploy the code to Heroku.
Add the new dependencies to the requirements file:
# prod
Django==3.2.9
dj-database-url==0.5.0
gunicorn==20.1.0
whitenoise==5.3.0
# dev and test
black==21.11b1
flake8==4.0.1
isort==5.10.1
Before pushing up to GitLab, run the Django tests locally:
$ source env/bin/activate
(env)$ pip install -r requirements.txt
(env)$ python manage.py test
System check identified no issues (0 silenced).
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Ensure Flake8 passes, and then update the source code based on the Black and isort recommendations:
(env)$ flake8 hello_django --max-line-length=100
(env)$ black hello_django
(env)$ isort hello_django --profile black
Commit and push your code yet again. Ensure all stages pass.
Finally, update the Dockerfile like so to use a multi-stage build in order to reduce the final image size:
FROM python:3.10-alpine AS build-python
RUN apk update && apk add --virtual build-essential gcc python3-dev musl-dev postgresql-dev
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY ./requirements.txt .
RUN pip install -r requirements.txt
FROM python:3.10-alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DEBUG 0
ENV PATH="/opt/venv/bin:$PATH"
COPY --from=build-python /opt/venv /opt/venv
RUN apk update && apk add --virtual build-deps gcc python3-dev musl-dev postgresql-dev
RUN pip install psycopg2-binary
WORKDIR /app
COPY . .
RUN python manage.py collectstatic --noinput
RUN adduser -D myuser
USER myuser
CMD gunicorn hello_django.wsgi:application --bind 0.0.0.0:$PORT
Next, we need to update the GitLab config to take advantage of Docker layer caching.
Skip this section if you're using the Build Manifest approach.
.gitlab-ci.yml:
stages:
- build
- test
- deploy
variables:
IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}
HEROKU_APP_NAME: <APP_NAME>
HEROKU_REGISTRY_IMAGE: registry.heroku.com/${HEROKU_APP_NAME}/web
build:
stage: build
image: docker:stable
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay2
script:
- docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
- docker pull $IMAGE:build-python || true
- docker pull $IMAGE:production || true
- docker build
--target build-python
--cache-from $IMAGE:build-python
--tag $IMAGE:build-python
--file ./Dockerfile
"."
- docker build
--cache-from $IMAGE:production
--tag $IMAGE:production
--tag $HEROKU_REGISTRY_IMAGE
--file ./Dockerfile
"."
- docker push $IMAGE:build-python
- docker push $IMAGE:production
test:
stage: test
image: $IMAGE:production
services:
- postgres:latest
variables:
POSTGRES_DB: test
POSTGRES_USER: runner
POSTGRES_PASSWORD: ""
DATABASE_URL: postgresql://runner@postgres:5432/test
script:
- python manage.py test
- flake8 hello_django --max-line-length=100
- black hello_django --check
- isort hello_django --check --profile black
deploy:
stage: deploy
image: docker:stable
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay2
script:
- apk add --no-cache curl
- docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
- docker pull $IMAGE:build-python || true
- docker pull $IMAGE:production || true
- docker build
--target build-python
--cache-from $IMAGE:build-python
--tag $IMAGE:build-python
--file ./Dockerfile
"."
- docker build
--cache-from $IMAGE:production
--tag $IMAGE:production
--tag $HEROKU_REGISTRY_IMAGE
--file ./Dockerfile
"."
- docker push $IMAGE:build-python
- docker push $IMAGE:production
- docker login -u _ -p $HEROKU_AUTH_TOKEN registry.heroku.com
- docker push $HEROKU_REGISTRY_IMAGE
- chmod +x ./release.sh
- ./release.sh
Make sure to replace <APP_NAME> with your Heroku app's name.
Review the changes on your own. Then, test it out one last time.
For more on this caching pattern, review the "Multi-stage" section from the Faster CI Builds with Docker Cache article.
Skip this section if you're using the Container Registry approach.
.gitlab-ci.yml:
stages:
- build
- test
- deploy
variables:
IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}
HEROKU_APP_NAME: <APP_NAME>
build:
stage: build
image: docker:stable
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay2
script:
- docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
- docker pull $IMAGE:build-python || true
- docker pull $IMAGE:production || true
- docker build
--target build-python
--cache-from $IMAGE:build-python
--tag $IMAGE:build-python
--file ./Dockerfile
"."
- docker build
--cache-from $IMAGE:production
--tag $IMAGE:production
--file ./Dockerfile
"."
- docker push $IMAGE:build-python
- docker push $IMAGE:production
test:
stage: test
image: $IMAGE:production
services:
- postgres:latest
variables:
POSTGRES_DB: test
POSTGRES_USER: runner
POSTGRES_PASSWORD: ""
DATABASE_URL: postgresql://runner@postgres:5432/test
script:
- python manage.py test
- flake8 hello_django --max-line-length=100
- black hello_django --check
- isort hello_django --check --profile black
deploy:
stage: deploy
script:
- apt-get update -qy
- apt-get install -y ruby-dev
- gem install dpl
- dpl --provider=heroku --app=$HEROKU_APP_NAME --api-key=$HEROKU_AUTH_TOKEN
Make sure to replace <APP_NAME> with your Heroku app's name.
Review the changes on your own. Then, test it out one last time.
For more on this caching pattern, review the "Multi-stage" section from the Faster CI Builds with Docker Cache article.
In this article, we walked through two approaches for deploying a Django app to Heroku with Docker -- the Container Registry and Build Manifest.
So, when should you think about using the Heroku Container Runtime over the traditional Git and slug compiler for deployments?
When you need more control over the production deployment environment.
You can find the code in the following repositories on GitLab:
Original article source at: https://testdriven.io/
Heroku changed how developers build and deploy software, making building, deploying, and scaling applications easier and faster. They set various standards and methodologies -- namely, the Twelve-Factor App -- for how cloud services should be managed, which are still highly relevant today for microservices-based and cloud-native applications. Unfortunately, starting November 28, 2022, Heroku will discontinue its free tier. This means you'll no longer be able to leverage free dynos, Postgres databases, and Redis instances.
For more on Heroku's discontinuation of its free product tiers, check out Heroku's Next Chapter and Deprecation of Heroku Free Resources.
In this article, you'll learn what the best Heroku alternatives (and their pros and cons) are.
Heroku, which was founded in 2007, is a cloud Platform as a Service (PaaS) that provides hosting for web applications. They offer abstracted environments where you don't have to manage the underlying infrastructure, making it easy to manage, deploy, and scale web applications. With just a few clicks you can have your app up and running, ready to receive traffic.
Before Heroku hit the scene, the process of running a web application was quite challenging, mostly reserved for seasoned SysOps professionals rather than developers. Heroku provides an opinionated layer, abstracting away much of the configuration required for a web server. The majority of web applications could (and still can) leverage such an environment, so smaller companies and teams can focus on application development rather than configuring web servers, installing Linux packages, setting up load balancers, and everything else that goes along with infrastructure management on a traditional server.
Despite Heroku's popularity, it has received quite a lot of criticism throughout the years.
If you're already familiar with Heroku, feel free to skip this section.
Heroku is arguably the most user-friendly PaaS platform. Rather than spending days setting up and configuring web servers and the underlying infrastructure, you simply define the commands required to run your web application and Heroku does the rest for you. You can literally have your app up and running in minutes!
Plus, Heroku leverages git for versioning and deploying apps, which makes it easy to deploy and roll back.
Finally, unlike most PaaS platforms, Heroku provides excellent error logs, making debugging relatively easy.
For the first five years of its existence, Heroku had few competitors. Their user/developer experience was just so far ahead of everyone else that it took a while for companies to adapt. This, coupled with their vast free tier, meant that the majority of developer-focused tutorials used Heroku for their deployment platform. Even to this day, the vast majority of web development tutorials, books, and courses still leverage Heroku for deployment.
Heroku also has first-class support for some of the most popular languages and runtimes (via buildpacks) like Python, Ruby, Node.js, PHP, Go, Java, Scala, and Clojure. While those are the officially supported languages, you can still bring your own language or custom runtime to the Heroku platform.
Often overlooked, Heroku provides access to hundreds of add-on tools and services -- everything from data storage and caching to monitoring and analytics to data and video processing. With a click of a button, you can extend your app by provisioning a third-party cloud service all without having to manually install or configure it.
Heroku allows developers to easily scale their apps both vertically and horizontally. Scaling can be achieved via Heroku's dashboard or CLI. Additionally, if you're running more performant dynos you can leverage the free auto-scaling feature, which increases the number of web dynos depending on the current traffic.
Heroku is rather expensive compared to other PaaS on the market. While their starting plan is $7 per dyno per month, as your app scales, you're quickly going to have to upgrade to better dynos, which cost quite a lot of money. Due to the price of more performant dynos, Heroku might not be appropriate for large, high-traffic apps.
Heroku is roughly five times more expensive than AWS EC2.
Keep in mind, though, that Heroku is a PaaS that does a lot of the heavy lifting for you, while EC2 is just a Linux instance that you have to manage yourself.
Heroku doesn't offer enough control and lacks transparency. By using their service, you're going to be highly dependent on their tech stack and design decisions. Some of their limitations hinder scalability -- e.g., an application can only listen on a single port, functions have a max source code size of 500 MB, and there's no way to fine-tune your database. Heroku is also highly dependent on AWS, which means that if an AWS region is down, your service (hosted in that region) is also going to be down.
Similarly, Heroku is really designed for your run-of-the-mill RESTful APIs. If your app includes heavy computing or you need to tweak the infrastructure to meet your specific needs, Heroku may not be a good fit.
Heroku offers two types of runtimes:
The Common Runtime only supports two regions, US and EU, while the Private Spaces Runtime supports 6 regions.
This means that if you're not an enterprise user you'll only be able to host your app in the US (Virginia) or EU region (Dublin, Ireland).
$ heroku regions
ID Location Runtime
───────── ─────────────────────── ──────────────
eu Europe Common Runtime
us United States Common Runtime
dublin Dublin, Ireland Private Spaces
frankfurt Frankfurt, Germany Private Spaces
oregon Oregon, United States Private Spaces
sydney Sydney, Australia Private Spaces
tokyo Tokyo, Japan Private Spaces
virginia Virginia, United States Private Spaces
In today's world, development trends change faster than ever. This forces hosting services to follow the trends to attract teams looking for cutting-edge technology. Many of Heroku's competitors, which we'll address here shortly, are advancing and adding new features like serverless, edge computing, etc. Heroku, on the other hand, has embraced stability over feature development. This doesn't mean they aren't adding new features; they are just adding new features much slower than some of their competitors.
If you want to see what's coming next to Heroku, take a look at their roadmap.
Once you're running production code on Heroku it's difficult to migrate to a different hosting provider.
Keep in mind that if you move away from a PaaS, you'll have to handle all the things that Heroku handled yourself, so be prepared to make a SysAdmin or DevOps hire or two.
In this section, we'll look at Heroku's core features so you can understand what to look for as you look for alternatives.
Again, feel free to skip this section if you're already familiar with Heroku's features.
Feature | Description |
---|---|
Heroku Runtime | The Heroku Runtime is responsible for provisioning and orchestrating dynos, managing and monitoring the lifecycle of your dynos, providing proper network configuration, HTTP routing, log aggregation, and much more. |
CI/CD system | Easy-to-use CI/CD, which takes care of building, testing, deploying, incremental app updates, and more. |
git-based deployments | Manages app deployments with git. |
Data persistence | Fully-managed data services, like Postgres, Redis, and Apache Kafka. |
Scaling features | Easy-to-use tools that enable developers to scale horizontally and vertically on demand. |
Logging and app metrics | Logging, monitoring, and application metrics. |
Collaboration features | Easy collaboration with others. Collaborators can deploy changes to your apps, scale them, and access their data, among other operations. |
Add-ons | Hundreds of add-on tools and services – everything from data storage and caching to monitoring and analytics to data and video processing. |
When looking for alternatives, you should prioritize the features. You're simply not going to find a 1:1 replacement for Heroku, so be sure to determine which features are "must-haves" vs "nice-to-haves".
For example:
Must-haves
Nice-to-haves
Finally, in this section, we'll look at the best Heroku alternatives and what their pros and cons are.
App Platform is DigitalOcean's fully managed solution for deploying apps to the cloud. It has integrated CI/CD, which works well with both GitHub and GitLab. It natively supports popular languages and frameworks like Python, Node.js, Django, Go, and PHP. Alternatively, it allows you to deploy apps via Docker.
Other important features:
The platform's UI/UX is simple and straightforward, providing a similar feel to Heroku.
DigitalOcean App Platform starts at $5/month for 1 CPU and 512 MB of RAM. To learn more about their pricing take a look at the official pricing page.
Want to learn how to deploy a Django application to DigitalOcean's App Platform? Check out Running Django on DigitalOcean's App Platform.
Render, which launched in 2019, is a great alternative to Heroku. It allows you to host static sites, web services, PostgreSQL databases, and Redis instances for absolutely free. Its extremely simple UI/UX and great git integration allow you to get an app running in minutes. It has native support for Python, Node.js, Ruby, Elixir, Go, and Rust. If none of these work for you, Render can also deploy via a Dockerfile.
Render's free auto-scaling feature will make sure that your app will always have the necessary resources at the right cost. Additionally, everything that's hosted on Render can also get a free TLS certificate.
Refer to their official documentation for more information about their free plans.
Want to learn how to deploy a Flask application to Render? Check out Deploying a Flask App to Render.
Fly.io is a popular, flexible PaaS. Rather than reselling AWS or GCP services, they host your applications on top of physical dedicated servers that run all over the world. Because of that, they're able to offer cheaper hosting than other PaaS, like Heroku. Their main focus is to deploy apps as close to their customers as possible (you can pick between 22 regions). Fly.io supports three kinds of builders: Dockerfile, buildpacks, or pre-built Docker images.
They also offer scaling and auto-scaling features.
Fly.io takes a different approach to managing your resources compared to other PaaS. It doesn't come with a fancy management dashboard; instead, all the work is done via their CLI named flyctl.
Their free plan includes:
That should be more than enough to run a few small apps to test their platform.
Want to learn how to deploy a Django application on Fly.io? Check out Deploying a Django App to Fly.io.
Google App Engine (GAE) is a fully managed, serverless platform for developing and hosting web applications at scale. It has a powerful built-in auto-scaling feature, which automatically allocates more/fewer resources based on demand. GAE natively supports applications written in Python, Node.js, Java, Ruby, C#, Go, and PHP. Alternatively, it provides support for other languages via custom runtimes or Dockerfiles.
It has powerful application diagnostics, which you can combine with Cloud Monitoring and Logging to monitor the health and the performance of your app.
Google offers $300 free credits for new customers, which can serve small apps for several years.
Platform.sh is a platform-as-a-service built especially for continuous deployment. It allows you to host web applications on the cloud while making your development and testing workflows more productive. It has direct integration with GitHub, which allows developers to instantly deploy from GitHub repositories. It supports modern development languages, like Python, Java, PHP, and Go, as well as a number of different frameworks.
Platform.sh does not offer a free plan. Their developer plan (which isn't suitable for production) starts at $10/month. Their production-ready plans start at $50 monthly.
AWS Elastic Beanstalk (EB) is an easy-to-use service for deploying and scaling web applications. It connects multiple AWS services, like compute instances (EC2), databases (RDS), load balancers (Application Load Balancer), and file storage systems (S3), to name a few. EB allows you to quickly deploy apps written in Python, Go, Java, .Net, Node.js, PHP, and Ruby. It also supports Docker.
Elastic Beanstalk makes app deployment easier by abstracting away the underlying architecture, while still allowing low-level configuration of instances and databases. It integrates well with git and allows you to make incremental deployments. It also supports load balancing and auto-scaling.
The great thing about Elastic Beanstalk is that there's no additional charge for it. You only pay for the resources that your application consumes (EC2 instances, RDS, etc.).
Want to learn how to deploy an application to Elastic Beanstalk? Check out our tutorials:
Azure App Service allows you to quickly and easily create enterprise-ready web and mobile apps for any platform or device and deploy them on scalable and reliable cloud infrastructure. It natively supports Python, .NET, .NET Core, Node.js, Java, PHP, and containers. They have built-in CI/CD and zero downtime deployments.
Other important features:
If you're a new customer you can get $200 free credit to test Azure.
Dokku claims to be the smallest PaaS implementation you've ever seen. It allows you to build and manage the lifecycle of applications from building to scaling. It's basically a mini-Heroku you can self-host on your Linux machine. Dokku is powered by Docker and integrates well with git.
Dokku offers a premium plan called Dokku PRO, which comes with a user-friendly interface and other features. You can learn more about it on their official website.
Dokku's minimal system requirement is 1 GB of memory. This means that you can host it on a DigitalOcean Droplet for $6 per month.
Want to learn how to deploy a Django application on Dokku? Check out Deploying a Django App to Dokku on a DigitalOcean Droplet.
PythonAnywhere is an online integrated development environment (IDE) and a web hosting service (PaaS) based on the Python programming language. It has out-of-the-box deployment options for Django, web2py, Flask, and Bottle. Compared to other PaaS on the list, PythonAnywhere behaves more like a traditional web server. You have access to its file system and can SSH into the console to view logs and what not.
It offers a free plan, which is great for beginners or people who'd just like to test different Python frameworks. The free plan allows you to host one web app at your_username.pythonanywhere.com. You can also use the free plan to spin up a MySQL instance.
Other relatively cheap paid plans that can be seen on their pricing page.
Engine Yard is a PaaS solution allowing developers to plan, build, deploy, and manage applications in the cloud. Engine Yard also provides services for deployment, managing AWS, supporting databases, and microservices container development. Its main focus is Ruby on Rails, but it also supports other languages like Python, PHP, and Node.js.
Engine Yard simplifies app management on the cloud by automating stack updates and security patches to the hosted environment. It’s also possible to scale resources for your apps via application metrics.
Vercel is a cloud platform for static sites and serverless functions. It's mostly used for front-end projects, but it also supports Python, Node.js, Ruby, Go, and Docker. Vercel enables developers to host websites and web services that deploy instantly, scale automatically, and require little supervision -- all with no configuration. It also has a beautiful and intuitive UI.
Vercel offers a free plan, which includes:
Netlify is a cloud-based development platform for web developers and businesses. It allows developers to host static sites and serverless functions. It supports Python, Node.js, Go, PHP, Ruby, Rust, and Swift. It's undoubtedly one of the most used hosting platforms for front-end projects.
Netlify has an intuitive UI and is extremely easy to use because it doesn't require any configuration.
Its free plan includes:
Railway.app is a lesser-known infrastructure platform that allows you to provision infrastructure, develop with that infrastructure locally, and then deploy it to the cloud. It's made for every language no matter the project size.
Its features include:
OpenShift is Red Hat's cloud computing PaaS offering. It's an application platform built on top of Kubernetes in the cloud where application developers and teams can build, test, deploy, and run their applications.
OpenShift has a seamless DevOps workflow, can scale both horizontally and vertically, and can auto-scale.
Appliku is a PaaS platform that uses your cloud servers to deploy your apps. You can link your DigitalOcean or AWS account and provision servers through Appliku's dashboard. While their primary focus is on Python-based apps, you can deploy apps built in any language by leveraging Docker. Appliku's pricing is based on the number of managed servers, so you can deploy as many apps as you need. They do offer a free tier.
Heroku is a mature, battle-tested, and stable platform. It does a lot of heavy lifting for you and will save you a lot of time and money, especially for small teams. Heroku allows you to focus on your product instead of fiddling with your server's configuration options and hiring a DevOps engineer or SysAdmin.
It may not be the cheapest option, but it's still one of the best PaaS on the market. Because of that, if you're already using Heroku, you should have a strong reason to move away from it.
While there are a number of alternatives on the market, none of them match Heroku's developer experience. At the moment the most promising alternatives to Heroku are DigitalOcean App Platform and Render. The problem with these two platforms is that they are relatively new and not (yet) battle-tested. If you're just looking for a place to host your apps for free, go with Render.
Original article source at: https://testdriven.io/
Papercups is an open source live customer support tool web app written in Elixir. We offer a hosted version at app.papercups.io.
You can check out how our chat widget looks and play around with customizing it on our demo page. The chat widget component is also open sourced at github.com/papercups-io/chat-widget.
Watch how easy it is to get set up with our Slack integration 🚀 :
The fastest way to get started is one click deploy on Heroku with:
We wanted to make a self-hosted customer support tool like Zendesk and Intercom for companies that have privacy and security concerns about having customer data going to third party services.
We set up a simple page that demonstrates how Papercups works.
Try sending us a message to see what the chat experience is like!
Check out our blog for more updates and learnings
Check out our docs at docs.papercups.io
We ❤️ contributions big or small. See CONTRIBUTING.md for a guide on how to get started.
⚠️ Maintenance Mode
Papercups is in maintenance mode. This means there won't be any major new features in the near future. We will still accept pull requests and conduct major bug fixes. Read more here
Author: Papercups-io
Source Code: https://github.com/papercups-io/papercups
License: MIT license
Up deploys infinitely scalable serverless apps, APIs, and static websites in seconds, so you can get back to working on what makes your product unique.
With Up there's no need to worry about managing or scaling machines, paying for idle servers, or maintaining logging and alerting infrastructure. Just deploy your app with $ up and you're done!
Use the free OSS version, or subscribe to Up Pro for a small monthly fee for unlimited use within your company; there is no additional cost per team member or application. Deploy dozens or even hundreds of applications for pennies thanks to AWS Lambda's cost-effective nature.
Up focuses on deploying "vanilla" HTTP servers so there's nothing new to learn, just develop with your favorite existing frameworks such as Express, Koa, Django, Golang net/http or others.
Up currently supports Node.js, Golang, Python, Java, Crystal, Clojure, and static sites out of the box. Up is platform-agnostic, supporting AWS Lambda and API Gateway as the first targets. You can think of Up as a self-hosted, Heroku-style user experience for a fraction of the price, with the security, isolation, flexibility, and scalability of AWS.
Check out the documentation for more instructions and links, or try one of the examples, or chat with us in Slack.
Features of the free open-source edition.
Up Pro provides additional features for production-ready applications such as encrypted environment variables, error alerting, unlimited team members, unlimited applications, priority email support, and global deployments for $19.99/mo USD. Visit Subscribing to Up Pro to get started.
Install Up:
$ curl -sf https://up.apex.sh/install | sh
Create an app.js file:
require('http').createServer((req, res) => {
  res.end('Hello World\n')
}).listen(process.env.PORT)
Deploy the app:
$ up
Open it in the browser, or copy the url to your clipboard:
$ up url -o
$ up url -c
Author: Apex
Source Code: https://github.com/apex/up
License: MIT license
Learn how to deploy a Rust web server using Axum, Tokio, and GitHub Actions to Heroku for your projects.
axum is an async web framework from the Tokio project. It is designed to be a very thin layer over hyper and is compatible with the Tower ecosystem, allowing the use of various middleware provided by tower-http and tower-web.
In this post, we will walk through how you can deploy a Rust web server using axum, Tokio, and GitHub Actions to Heroku for your projects.
axum
axum provides a user-friendly interface to mount routes on a server and pass handler functions. It handles listening on TCP sockets for connections and multiplexing HTTP requests to the correct handler, and, as mentioned, also allows the use of various middleware provided by the Tower ecosystem.
use std::{net::SocketAddr, str::FromStr};
use axum::{
http::StatusCode,
response::IntoResponse,
routing::get,
Router,
Server,
};
// running the top level future using tokio main
#[tokio::main]
async fn main() {
// start the server
run_server().await;
}
async fn run_server() {
// Router is provided by Axum which allows mounting various routes and handlers.
let app = Router::new()
// `route` takes `/` and MethodRouter
.route("/",
// get function create a MethodRouter for a `/` path from the `hello_world`
get(hello_world));
// create a socket address from the string address
let addr = SocketAddr::from_str("0.0.0.0:8080").unwrap();
// start the server on the address
// Server is a re-export from the hyper::Server
Server::bind(&addr)
// start handling the request using this service
.serve(app.into_make_service())
// start polling the future
.await
.unwrap();
}
// basic handler that responds with a static string
// Handler function is an async function whose return type is anything that impl IntoResponse
async fn hello_world() -> impl IntoResponse {
// returning a tuple with HTTP status and the body
(StatusCode::OK, "hello world!")
}
Here, the Router struct provides a route method to add new routes and respective handlers. In the above example, get is used to create a GET handler for the / route.
hello_world is a handler which returns a tuple with the HTTP status and body. This tuple has an implementation of the IntoResponse trait provided by axum.
The Server struct is a re-export of hyper::Server. As axum attempts to be a very thin wrapper around hyper, you can expect it to provide performance comparable to hyper.
The post function is used to create a POST route on the provided path. As with the get function, post also takes a handler and returns a MethodRouter.
let app = Router::new()
// `route` takes `/` and MethodRouter
.route("/",
// post function create a MethodRouter for a `/` path from the `hello_name`
post(hello_name));
axum provides JSON serializing and deserializing right out of the box. The Json type implements both the FromRequest and IntoResponse traits, allowing you to serialize responses and deserialize request bodies.
// the input to our `hello_name` handler
// Deserialize trait is required for deserialising bytes to the struct
#[derive(Deserialize)]
struct Request {
name: String,
}
// the output to our `hello_name` handler
// Serialize trait is required for serialising struct in bytes
#[derive(Serialize)]
struct Response {
    greet: String,
}
The Request struct implements the Deserialize trait used by serde_json to deserialize the request body, while the Response struct implements the Serialize trait to serialize the response.
async fn hello_name(
// this argument tells axum to parse the request body
// as JSON into a `Request` type
Json(payload): Json<Request>
) -> impl IntoResponse {
// insert your application logic here
let user = Response {
greet:format!("hello {}",payload.name)
};
(StatusCode::CREATED, Json(user))
}
Json is a type provided by axum that internally implements the FromRequest trait and uses the serde and serde_json crates to deserialize the JSON body in the request to the Request struct.
Similar to the GET request handler, the POST handler can also return a tuple with the response status code and response body. Json also implements the IntoResponse trait, allowing it to convert the Response struct into a JSON response.
Axum provides extractors as an abstraction to share state across your server, giving handlers access to shared data.
// creating common state
let app_state = Arc::new(Mutex::new(HashMap::<String,()>::new()));
let app = Router::new()
// `GET /` goes to `root`
.route("/", get(root))
// `POST /users` goes to `create_user`
.route("/hello", post(hello_name))
// Adding the state to the router.
.layer(Extension(app_state));
Extension wraps the shared state and is responsible for interacting with axum. In the above example, the shared state is wrapped in Arc and Mutex to synchronize access to the inner state.
async fn hello_name(
Json(payload): Json<Request>,
// This will extract out the shared state
Extension(db):Extension<Arc<Mutex<HashMap<String,()>>>>
) -> impl IntoResponse {
let user = Response {
greet:format!("hello {}",payload.name)
};
// we can use the shared state
let mut s=db.lock().unwrap();
s.insert(payload.name.clone(), ());
(StatusCode::CREATED, Json(user))
}
Extension also implements the FromRequest trait, which is called by axum to extract the shared state from the request and pass it to the handler functions.
GitHub Actions can be used to test, build, and deploy Rust applications. In this section, we will focus on deploying and testing Rust applications.
# name of the workflow
name: Rust
# run workflow when the condition is met
on:
# run when code is pushed on the `main` branch
push:
branches: [ "main" ]
# run when a pull request to the `main` branch
pull_request:
branches: [ "main" ]
# env variables
env:
CARGO_TERM_COLOR: always
# jobs
jobs:
# job name
build:
# os to run the job on support macOS and windows also
runs-on: ubuntu-latest
# steps for job
steps:
# this will get the code and set the git
- uses: actions/checkout@v3
# run the build
- name: Build
# using cargo to build
run: cargo build --release
# for deployment
- name: make dir
# create a directory
run: mkdir app
# put the app in it
- name: copy
run: mv ./target/release/axum-demo ./app/axum
# heroku deployment
- uses: akhileshns/heroku-deploy@v3.12.12
with:
# key from repository secrets
heroku_api_key: ${{secrets.HEROKU_API_KEY}}
# name of the Heroku app
heroku_app_name: "axum-demo-try2"
# email from which the app is uploaded
heroku_email: "anshulgoel151999@gmail.com"
# app directory
appdir: "./app"
# start command
procfile: "web: ./axum"
# buildpack is like environment used to run the app
buildpack: "https://github.com/ph3nx/heroku-binary-buildpack.git"
GitHub Actions provides support for stable versions of Rust by default: Cargo and rustc are preinstalled on all operating systems supported by GitHub Actions. This workflow runs when code is pushed to the main branch or when a pull request against the main branch is created:
on:
# run when code is pushed on the `main` branch
push:
branches: [ "main" ]
# run when a pull request to the `main` branch
pull_request:
branches: [ "main" ]
The workflow will first check out the code and then build it using cargo build --release.
The release build creates a binary in the target folder, and the Action then copies the binary from the target folder to the ./app folder for use in the Heroku deployment step, which we will now proceed to.
Heroku doesn’t have an official buildpack for Rust, so there’s no official build environment for Rust apps with Heroku.
So instead, we will use GitHub Actions to build the app and deploy it to Heroku.
Heroku requires a buildpack for each app. There are community buildpacks for Rust, but since GitHub Actions is already being used to build the app, time can be saved by shipping the prebuilt binary to Heroku with binary-buildpack.
The GitHub Actions marketplace has a very useful akhileshns/heroku-deploy action that deploys a Heroku app from GitHub Actions. In combination with binary-buildpack, it becomes a powerful tool for deploying code.
- uses: akhileshns/heroku-deploy@v3.12.12
with:
# key from repository secrets
heroku_api_key: ${{secrets.HEROKU_API_KEY}}
# name of the Heroku app
heroku_app_name: "axum-demo-try2"
# email from which the app is uploaded
heroku_email: "anshulgoel151999@gmail.com"
# app directory
appdir: "./app"
# start command
procfile: "web: ./axum"
# buildpack is like environment used to run the app
buildpack: "https://github.com/ph3nx/heroku-binary-buildpack.git"
To use this Action, a Heroku API key is needed. The key can be generated using the Heroku console in your account settings.
This action will create the app and deploy it for you. It takes the app directory and the start command, and you can also specify the buildpack you’d like to use.
Some code changes are required before the Rust app can be deployed to Heroku. Currently, the app uses port 8080, but Heroku will provide a different port for the app to use, so the Rust app should read the PORT environment variable.
// read the port from env or use the port default port(8080)
let port = std::env::var("PORT").unwrap_or(String::from("8080"));
// convert the port to a socket address
let addr = SocketAddr::from_str(&format!("0.0.0.0:{}", port)).unwrap();
// listen on the port
Server::bind(&addr)
.serve(app.into_make_service())
.await
.unwrap();
axum is a very good web server framework with support for the wider tower-rs ecosystem. It allows building extensible and composable web services and offers performance benefits by being a thin layer over hyper.
GitHub Actions are great for CI/CD and allow for performing various automated tasks, such as building and testing code and generating docs on various platforms. GitHub Actions also support caching cargo dependencies to speed up Actions.
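Dependency caching could be wired into the workflow above with the community Swatinem/rust-cache action; a minimal sketch (step names are illustrative):

```yaml
steps:
  - uses: actions/checkout@v3
  # caches ~/.cargo and ./target between runs, keyed on Cargo.lock
  - uses: Swatinem/rust-cache@v2
  - name: Build
    run: cargo build --release
```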
Heroku comes with support for autoscaling and continuous deployment, as well as hosted resources like databases and storage. GitHub Actions and Heroku are independent of the framework, meaning the same action can test and deploy a web server written in Rocket or Actix Web, so feel free to experiment with whatever suits you!
When all of these tools are used together, they become a killer combo for developing and hosting Rust web servers. I hope you enjoyed following along with this tutorial — leave a comment about your experience below.
Original article source at https://blog.logrocket.com
#rust #heroku #axum #tokio #githubactions