A build system for Dart code generation and modular compilation

Standalone generator and watcher for Dart using package:build.

The build_runner package provides a concrete way of generating files using Dart code, outside of tools like pub. Unlike pub serve/build, files are always generated directly on disk, and rebuilds are incremental - inspired by tools such as Bazel.

NOTE: Are you a user of this package? You may be interested in simplified user-facing documentation, such as our getting started guide.

Installation

This package is intended to support development of Dart projects with package:build. In general, put it under dev_dependencies, in your pubspec.yaml.

dev_dependencies:
  build_runner:

Usage

When the packages providing Builders are configured with a build.yaml file, they are designed to be consumed using a generated build script. Most builders should need little or no configuration; see the documentation provided with the Builder to decide whether the build needs to be customized. If it does, you may also provide a build.yaml with the configuration. See the package:build_config README for more information on this file.
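For illustration, such a build.yaml might override a builder's options like the sketch below. The builder name (json_serializable) and its option are placeholders for whatever Builder you actually use; check that Builder's documentation for the real keys.

targets:
  $default:
    builders:
      json_serializable:
        options:
          explicit_to_json: true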

To have web code compiled to JS, add a dev_dependency on build_web_compilers.

Built-in Commands

The build_runner package exposes a binary by the same name, which can be invoked using dart run build_runner <command>.

The available commands are build, watch, serve, and test.

  • build: Runs a single build and exits.
  • watch: Runs a persistent build server that watches the file system for edits and rebuilds as necessary.
  • serve: Same as watch, but runs a development server as well.
    • By default this serves the web and test directories, on ports 8080 and 8081 respectively. See below for how to configure this.
  • test: Runs a single build, creates a merged output directory, and then runs dart run test --precompiled <merged-output-dir>. See below for instructions on passing custom args to the test command.

Command Line Options

All the above commands support the following arguments:

  • --help: Print usage information for the command.
  • --delete-conflicting-outputs: Assume conflicting outputs in the user's package are from previous builds, and skip the user prompt that would usually be provided.
  • --[no-]fail-on-severe: Whether to consider the build failed when an error is logged. By default this is false.

Some commands also have additional options:

serve

  • --hostname: The host to run the server on.
  • --live-reload: Enables automatic page reloading on rebuilds.

Trailing args of the form <directory>:<port> are supported to customize what directories are served, and on what ports.

For example to serve the example and web directories on ports 8000 and 8001 you would do dart run build_runner serve example:8000 web:8001.

test

The test command will forward any arguments after an empty -- arg to the dart run test command.

For example if you wanted to pass -p chrome you would do dart run build_runner test -- -p chrome.

Inputs

Valid inputs follow the general Dart package rules. You can read any file under the top-level lib folder of any package dependency, and you can read all files from the current package.

In general it is best to be as specific as possible with your InputSets, because all matching files will be checked against a Builder's buildExtensions - see outputs for more information.

Outputs

  • You may output files anywhere in the current package.

NOTE: When a BuilderApplication specifies hideOutput: true it may output under the lib folder of any package you depend on.

  • Builders are not allowed to overwrite existing files, only create new ones.
  • Outputs from previous builds will not be treated as inputs to later ones.
  • You may use a previous BuilderApplication's outputs as an input to a later action.

Source control

This package creates a top level .dart_tool folder in your package, which should not be submitted to your source control repository. You can see our own .gitignore as an example.

# Files generated by dart tools
.dart_tool

When it comes to generated files it is generally best to not submit them to source control, but a specific Builder may provide a recommendation otherwise.

It should be noted that if you do submit generated files to your repo then when you change branches or merge in changes you may get a warning on your next build about declared outputs that already exist. This will be followed up with a prompt to delete those files. You can type l to list the files, and then type y to delete them if everything looks correct. If you think something is wrong you can type n to abandon the build without taking any action.

Publishing packages

In general generated files should be published with your package, but this may not always be the case. Some Builders may provide a recommendation for this as well.

Legacy Usage

If the generated script does not do everything you need, it is possible to write one manually. With this approach every package which uses a Builder must have its own script; scripts cannot be reused from other packages. A package which defines a Builder may have an example you can reference, but a unique script must be written for the consuming packages as well. You can reference the generated script at .dart_tool/build/entrypoint/build.dart for an example.

Your script should use the run function defined in this library.

Configuring

run has a required parameter which is a List<BuilderApplication>. These correspond to the BuilderDefinition class from package:build_config. See apply and applyToRoot to create instances of this class. These will be translated into actions by crawling through dependencies. The order of this list is important. Each Builder may read the generated outputs of any Builder that ran on a package earlier in the dependency graph, but for the package it is running on it may only read the generated outputs from Builders earlier in the list of BuilderApplications.
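As a rough sketch (not the exact API surface, which has shifted across build_runner versions), a minimal hand-written script could look like this; the my_builder package and its myBuilder factory are hypothetical:

import 'package:build_runner/build_runner.dart';
// Hypothetical package exposing a `myBuilder` Builder factory.
import 'package:my_builder/builder.dart';

Future<void> main(List<String> args) async {
  // `run` takes the command-line args and the ordered list of BuilderApplications;
  // `applyToRoot` wraps a single Builder to run on the root package only.
  await run(args, [
    applyToRoot(myBuilder()),
  ]);
}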

NOTE: Any time you change your build script (or any of its dependencies), the next build will be a full rebuild. This is because the system has no way of knowing how that change may have affected the outputs.

Contributing

We welcome a diverse set of contributions.

For the stability of the API and existing users, consider opening an issue first before implementing a large new feature or breaking an API. For smaller changes (like documentation or minor bug fixes), just send a pull request.

Testing

All pull requests are validated against CI, and must pass. The build_runner package lives in a mono repository with other build packages, and all of the following checks must pass for each package.

Ensure code passes all our analyzer checks:

$ dartanalyzer .

Ensure all code is formatted with the latest dev-channel SDK.

$ dartfmt -w .

Run all of our unit tests:

$ dart run test

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add build_runner --dev

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dev_dependencies:
  build_runner: ^2.3.3

Alternatively, your editor might support dart pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:build_runner/build_runner.dart';

example/README.md

Examples can be found in the top level of the build repository here.

You can also find more general documentation in the docs directory.

Download Details:

Author: tools.dart.dev

Source Code: https://github.com/dart-lang/build/tree/master/build_runner

#flutter #dart #generating #compile #build  


Running_page: Make Your Own Running Home Page

Running page

Create a personal running home page 

demo

English | 简体中文 | Wiki

Runner's Page Show

Running page runners

Runner | Page | App
zhubao315 | https://zhubao315.github.io/running | Strava
shaonianche | https://run.duanfei.org | Strava
yihong0618 | https://yihong.run | Nike
superleeyom | https://running.leeyom.top | Nike
geekplux | https://activities.geekplux.com | Nike
guanlan | https://grun.vercel.app | Strava
tuzimoe | https://run.tuzi.moe | Nike
ben_29 | https://running.ben29.xyz | Strava
kcllf | https://running-tau.vercel.app | Garmin-cn
mq | https://running-iota.vercel.app | Keep
zhaohongxuan | https://running-page-psi.vercel.app | Keep
yvetterowe | https://run.haoluo.io | Strava
love-exercise | https://run.kai666666.top | Keep
zstone12 | https://running-page.zstone12.vercel.app | Keep
Lax | https://lax.github.io/running | Keep
lusuzi | https://running.lusuzi.vercel.app | Nike
wh1994 | https://run4life.fun | Garmin
liuyihui | https://run.foolishfox.cn | Keep
sunyunxian | https://sunyunxian.github.io/running_page | Strava
AhianZhang | https://running.ahianzhang.com | Nike
L1cardo | https://run.licardo.cn | Nike
luckylele666 | https://0000928.xyz | Strava
MFYDev | https://mfydev.run | Garmin-cn
Eished | https://run.iknow.fun | Keep
Liuxin | https://liuxin.run | Nike
loucx | https://loucx.github.io/running | Nike
winf42 | https://winf42.github.io | Garmin-cn
sun0225SUN | https://run.sunguoqi.com | Nike
Zhan | https://run.zlog.in | Nike
Dennis | https://run.domon.cn | Garmin-cn
hanpei | https://running.nexts.top | Garmin-cn
liugezhou | https://run.liugezhou.online | Strava
Jason Tan | https://jason-cqtan.github.io/running_page | Nike
Conge | https://conge.github.io/running_page | Strava
zHElEARN | https://workouts.zhelearn.com | Strava
Ym9i | https://bobrun.vercel.app/ | Strava
jianchengwang | https://jianchengwang.github.io/running_page | Suunto
fxbin | https://fxbin.github.io/sport-records/ | Keep
shensl4499 | https://waner.run | codoon

How it works


Features

  1. GitHub Actions manages automatic synchronization of runs and generation of new pages.
  2. Gatsby-generated static pages, fast.
  3. Support for Vercel (recommended) and GitHub Pages automated deployment.
  4. React Hooks.
  5. Mapbox for map display.
  6. Supports most sports apps, such as Nike and Strava.

GPX data is backed up automatically, making it easy to back it up elsewhere or upload it to other software.
Note: If you don't want to make the data public, you can use Strava's fuzzy processing or a private repository.

Download

Clone or fork the repo.

git clone https://github.com/yihong0618/running_page.git --depth=1

Installation and testing (node >= 12 and <= 14, python >= 3.7)

pip3 install -r requirements.txt
yarn install
yarn develop

Open your browser and visit http://localhost:8000/

Docker

#build
# NRC
docker build -t running_page:latest . --build-arg app=NRC --build-arg nike_refresh_token=""
# Garmin
docker build -t running_page:latest . --build-arg app=Garmin --build-arg email=""  --build-arg password="" 
# Garmin-CN
docker build -t running_page:latest . --build-arg app=Garmin-CN --build-arg email=""  --build-arg password="" 
# Strava
docker build -t running_page:latest . --build-arg app=Strava --build-arg client_id=""  --build-arg client_secret=""  --build-arg refresch_token="" 
#Nike_to_Strava
docker build -t running_page:latest . --build-arg app=Nike_to_Strava  --build-arg nike_refresh_token="" --build-arg client_id=""  --build-arg client_secret=""  --build-arg refresch_token="" 

#run
docker run -itd -p 80:80   running_page:latest

#visit
Open your browser and visit localhost:80

Local sync data

Modify the Mapbox token in src/utils/const.js.

If you use English, please change IS_CHINESE to false in src/utils/const.js.
It is suggested that you change the Mapbox token to your own:

const MAPBOX_TOKEN =
  'pk.eyJ1IjoieWlob25nMDYxOCIsImEiOiJja2J3M28xbG4wYzl0MzJxZm0ya2Fua2p2In0.PNKfkeQwYuyGOTT_x9BJ4Q';

Customize your page

  • Find gatsby-config.js in the repository directory, find the following content, and change it to what you want.
siteMetadata: {
  siteTitle: 'Running Page', // website title
  siteUrl: 'https://yihong.run', // website url
  logo: 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQTtc69JxHNcmN1ETpMUX4dozAgAN6iPjWalQ&usqp=CAU', // logo img
  description: 'Personal site and blog',
  navLinks: [
    {
      name: 'Blog', // navigation name
      url: 'https://yihong.run/running', // navigation url
    },
    {
      name: 'About',
      url: 'https://github.com/yihong0618/running_page/blob/master/README-CN.md',
    },
  ],
},
  • Modify the styling in src/utils/const.js:
// styling: set to `false` if you want to disable dash-line route
const USE_DASH_LINE = true;
// styling: route line opacity: [0, 1]
const LINE_OPACITY = 0.4;

Download your running data, and do not forget to generate the SVGs for the total page afterwards.

GPX

Make your GPX data

Copy all your gpx files to GPX_OUT, or add new gpx files, then run:

python3(python) scripts/gpx_sync.py

TCX

Make your TCX data

Copy all your tcx files to TCX_OUT, or add new tcx files, then run:

python3(python) scripts/tcx_sync.py

Garmin

Get your Garmin data
If you only want to sync activities of type `running`, add the arg --only-run. If you only want `tcx` files, add the arg --tcx.

python3(python) scripts/garmin_sync.py ${your email} ${your password}

example:

python3(python) scripts/garmin_sync.py example@gmail.com example

only-run:

python3(python) scripts/garmin_sync.py example@gmail.com example --only-run

Garmin-CN(China)

Get your Garmin-CN data
If you only want to sync activities of type `running`, add the arg --only-run. If you only want `tcx` files, add the arg --tcx.

python3(python) scripts/garmin_sync.py ${your email} ${your password} --is-cn

example:

python3(python) scripts/garmin_sync.py example@gmail.com example --is-cn

Nike Run Club

Get your Nike Run Club data

Please note: if you deploy running_page on your own server, be aware that Nike has blocked some data centers' IP ranges; your server may therefore fail to sync Nike Run Club data and show a 403 error, in which case you will have to host it somewhere else.

Get Nike's refresh_token

  1. Log in to the Nike website.
  2. In your browser's developer tools, go to Application -> Storage -> https://unite.nike.com and look for the refresh_token.
     


  • Execute in the root directory:
python3(python) scripts/nike_sync.py ${nike refresh_token}

example:

python3(python) scripts/nike_sync.py eyJhbGciThiMTItNGIw******


Strava

Get your Strava data

Sign in to (or sign up for) a Strava account.

After signing in, open Strava Developers -> Create & Manage Your App.

Create My API Application and enter the required information.


Created successfully:

  • Use the link below to request all permissions: replace ${your_id} in the link with your My API Application Client ID.
https://www.strava.com/oauth/authorize?client_id=${your_id}&response_type=code&redirect_uri=http://localhost/exchange_token&approval_prompt=force&scope=read_all,profile:read_all,activity:read_all,profile:write,activity:write


  • Get the code value in the link

example:

http://localhost/exchange_token?state=&code=1dab37edd9970971fb502c9efdd087f4f3471e6e&scope=read,activity:write,activity:read_all,profile:write,profile:read_all,read_all

code value:

1dab37edd9970971fb502c9efdd087f4f3471e6e


  • Use the Client_id, Client_secret, and Code to get the refresh_token: execute the following in Terminal/iTerm:
curl -X POST https://www.strava.com/oauth/token \
-F client_id=${Your Client ID} \
-F client_secret=${Your Client Secret} \
-F code=${Your Code} \
-F grant_type=authorization_code

example:

curl -X POST https://www.strava.com/oauth/token \
-F client_id=12345 \
-F client_secret=b21******d0bfb377998ed1ac3b0 \
-F code=d09******b58abface48003 \
-F grant_type=authorization_code


  • Sync Strava data
python3(python) scripts/strava_sync.py ${client_id} ${client_secret} ${refresch_token}

References:

TCX_to_Strava

Upload all tcx files to Strava.
 

  1. Follow the Strava steps above.
  2. Copy all your tcx files to TCX_OUT.
  3. Execute in the root directory:
python3(python) scripts/tcx_to_strava_sync.py ${client_id} ${client_secret}  ${strava_refresch_token}

example:

python3(python) scripts/tcx_to_strava_sync.py xxx xxx xxx
or
python3(python) scripts/tcx_to_strava_sync.py xxx xxx xxx --all
  • if you want to upload all files, add the arg --all

GPX_to_Strava

Upload all gpx files to Strava.
 

  1. Follow the Strava steps above.
  2. Copy all your gpx files to GPX_OUT.
  3. Execute in the root directory:
python3(python) scripts/gpx_to_strava_sync.py ${client_id} ${client_secret}  ${strava_refresch_token}

example:

python3(python) scripts/gpx_to_strava_sync.py xxx xxx xxx
or
python3(python) scripts/gpx_to_strava_sync.py xxx xxx xxx --all
  • if you want to upload all files, add the arg --all

Nike_to_Strava

Get your Nike Run Club data and upload it to Strava.

  1. Follow the Nike and Strava steps above.
  2. Execute in the root directory:
python3(python) scripts/nike_to_strava_sync.py ${nike_refresh_token} ${client_id} ${client_secret} ${strava_refresch_token}

example:

python3(python) scripts/nike_to_strava_sync.py eyJhbGciThiMTItNGIw******  xxx xxx xxx

Garmin_to_Strava

Get your Garmin data and upload it to Strava.

  1. Finish the Garmin and Strava steps above.
  2. Execute in the root directory:
python3(python) scripts/garmin_to_strava_sync.py  ${client_id} ${client_secret} ${strava_refresch_token} ${garmin_email} ${garmin_password} --is-cn

e.g.

python3(python) scripts/garmin_to_strava_sync.py  xxx xxx xxx xx xxx

Strava_to_Garmin

Get your Strava data and upload it to Garmin.

  1. Finish the Garmin and Strava steps above; in addition, you need to add extra Strava config to the GitHub Actions secrets: secrets.STRAVA_EMAIL and secrets.STRAVA_PASSWORD.
  2. Execute in the root directory:
python3(python) scripts/strava_to_garmin_sync.py ${{ secrets.STRAVA_CLIENT_ID }} ${{ secrets.STRAVA_CLIENT_SECRET }} ${{ secrets.STRAVA_CLIENT_REFRESH_TOKEN }}  ${{ secrets.GARMIN_EMAIL }} ${{ secrets.GARMIN_PASSWORD }} ${{ secrets.STRAVA_EMAIL }} ${{ secrets.STRAVA_PASSWORD }}

If your Garmin account region is China, you need to execute this command instead:

python3(python) scripts/strava_to_garmin_sync.py ${{ secrets.STRAVA_CLIENT_ID }} ${{ secrets.STRAVA_CLIENT_SECRET }} ${{ secrets.STRAVA_CLIENT_REFRESH_TOKEN }}  ${{ secrets.GARMIN_CN_EMAIL }} ${{ secrets.GARMIN_CN_PASSWORD }} ${{ secrets.STRAVA_EMAIL }} ${{ secrets.STRAVA_PASSWORD }} --is-cn

PS: when initializing for the first time, if you have a large amount of Strava data, some of it may fail to upload; just retry several times.

Total Data Analysis

Running data display
 

python scripts/gen_svg.py --from-db --title "${{ env.TITLE }}" --type github --athlete "${{ env.ATHLETE }}" --special-distance 10 --special-distance2 20 --special-color yellow --special-color2 red --output assets/github.svg --use-localtime --min-distance 0.5
python scripts/gen_svg.py --from-db --title "${{ env.TITLE_GRID }}" --type grid --athlete "${{ env.ATHLETE }}"  --output assets/grid.svg --min-distance 10.0 --special-color yellow --special-color2 red --special-distance 20 --special-distance2 40 --use-localtime

Generate the year circular SVG:

python3(python) scripts/gen_svg.py --from-db --type circular --use-localtime

For more display effects, see: https://github.com/flopp/GpxTrackPoster

Server (Vercel recommended)

Use Vercel to deploy
 

  • Connect Vercel to your GitHub repo.


 


  • Import the repo.


 


  1. Wait for the deployment to complete.
  2. Visit your site.

Use Cloudflare to deploy
 

Click Create a project in Pages to connect to your Repo.

After clicking Begin setup, modify Project's Build settings.

Set Framework preset to Gatsby.

Scroll down, click Environment variables, and add the variable below:

Variable name = PYTHON_VERSION, Value = 3.7

Click Save and Deploy

Deploy to GitHub Pages

If you are using a custom domain for GitHub Pages, open .github/workflows/gh-pages.yml, change fqdn value to the domain name of your site.

(Skip this step if you ARE using a custom domain.) Modify gatsby-config.js, change the pathPrefix value to the root path. If the repository name is running_page, the value will be /running_page.

Go to repository's Actions -> Workflows -> All Workflows, choose Publish GitHub Pages from the left panel, click Run workflow. Make sure the workflow runs without errors, and gh-pages branch is created.

Go to repository's Settings -> GitHub Pages -> Source, choose Branch: gh-pages, click Save.

GitHub Actions

Modifying information in GitHub Actions 
 

See the Actions source code. The following steps need to be taken:

  • Change to your app type and info (see the sketch below).
     

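A rough sketch of the kind of env block to edit at the top of the workflow; the key names here are assumptions inferred from the gen_svg commands under Total Data Analysis, so verify them against the workflow file in your fork:

env:
  # assumed key names - check the actual workflow file in your fork
  RUN_TYPE: strava              # e.g. strava / nike / garmin / garmin_cn / keep
  ATHLETE: your_name
  TITLE: "Your Name Running"
  TITLE_GRID: "Over 10km Runs"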

  • Add your secret in repo Settings > Secrets (add only the ones you need).


 


  • My secrets are as follows (see the list below).
     

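For example, the Strava_to_Garmin command shown earlier reads the following repository secrets; add only the ones your sync type needs:

STRAVA_CLIENT_ID
STRAVA_CLIENT_SECRET
STRAVA_CLIENT_REFRESH_TOKEN
GARMIN_EMAIL (or GARMIN_CN_EMAIL)
GARMIN_PASSWORD (or GARMIN_CN_PASSWORD)
STRAVA_EMAIL
STRAVA_PASSWORD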

  • Go to repository's Settings -> Code and automation -> Actions ->General, Scroll to the bottom, find Workflow permissions, choose the first option Read and write permissions, click Save.

TODO

  •  Complete this document.
  •  Support Garmin, Garmin China
  •  support for nike+strava
  •  Support English
  •  Refine the code
  •  add new features
  •  tests
  •  support the world map
  •  support multiple types, like hiking, biking~

Contribution

  • Any issues or PRs are welcome.
  • You can open a PR to share your running page in the README; I will merge it.

Before submitting PR:

  • Format Python code with black (black .)

Special thanks

Recommended Forks

Support

Just enjoy it~

FAQ

Strava API limit

https://www.strava.com/settings/api https://developers.strava.com/docs/#rate-limiting

Strava API Rate Limit Exceeded. Retry after 100 seconds
Strava API Rate Limit Timeout. Retry in 799.491622 seconds

Note: if you cloned or forked the repo earlier and Vercel shows a 404, you may need to update the code.


Download Details:

Author: yihong0618
Source Code: https://github.com/yihong0618/running_page 
License: MIT license

#python #datavisualization #dataanalysis #mapbox 


CSS Text Explode Generator Built with Vue 3 and Typescript

CSS Text Explode Generator

Make your text "explode" with CSS transform! Built with Vite, Vue 3, DaisyUI, and Typescript.


Frameworks

App

Web : CSS Text Explode Generator

Build

git clone https://github.com/jo0707/text-explode
cd text-explode 
npm install

#preview
npm run dev

#build
npm run build
npm run preview

.gitignore

# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

node_modules
.DS_Store
dist
dist-ssr
coverage
*.local

/cypress/videos/
/cypress/screenshots/

# Editor directories and files
.vscode/*
.vscode
!.vscode/extensions.json
.idea
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

Download details:

Author: jo0707
Source code: https://github.com/jo0707/text-explode

#vue #typescript #css 


Swiftlane: Build Utilities in Pure Swift

Swiftlane

Swiftlane contains a set of build utilities to speed up iOS and macOS development and deployment

There's no additional configuration file; your Swift script file is the source of truth. With autocompletion and type safety, you are guided to do the right things in Swiftlane.

  • Swiftlane and its dependencies are written in pure Swift, making it easy to read and contribute.
  • Use latest Swift features like async/await to enable declarative syntax
  • Type-safe. All required and optional arguments are clear.
  • No configuration file. Your Swift script is your definition.
  • Simple wrapper around existing tools like xcodebuild, instruments and agvtool
  • Reuse awesome Swift scripting dependencies from Swift community

How to use

Swiftlane is intended to be used as a Swift Package. Please consult the Examples folder for ways to integrate it:

  • CLI: make a macOS Command Line Tool project
  • Script: make an executable Swift Package
import Swiftlane
import AppStoreConnect

@main
struct Script {
    static func main() async throws {
        try await deployMyApp()
    }
    
    private static func deployMyApp() async throws {
        var workflow = Workflow()
        workflow.directory = Settings.fs
            .homeDirectory()
            .appendingPathComponent("Projects/swiftlane/Examples/MyApp")
        workflow.xcodeApp = URL(string: "/Applications/Xcode.app")
        
        let build = Build()
        build.project("MyApp")
        build.allowProvisioningUpdates()
        build.destination(platform: .iOSSimulator, name: "iPhone 13")
        build.workflow = workflow
        try await build.run()
        
        guard
            let issuerId = Settings.env["ASC_ISSUER_ID"],
            let privateKeyId = Settings.env["ASC_PRIVATE_KEY_ID"],
            let privateKey = Settings.env["ASC_PRIVATE_KEY"]
        else { return }
        
        let asc = try ASC(
            credential: AppStoreConnect.Credential(
                issuerId: issuerId,
                privateKeyId: privateKeyId,
                privateKey: privateKey
            )
        )
        
        try await asc.fetchCertificates()
        try await asc.fetchProvisioningProfiles()
        
        let keychain = try await Keychain.create(
            path: Keychain.Path(
                rawValue: Settings.fs
                    .downloadsDirectory
                    .appendingPathComponent("custom.keychain")),
            password: "keychain_password"
        )
        try await keychain.unlock()
        try await keychain.import(
            certificateFile: Settings.fs
                .downloadsDirectory
                .appendingPathComponent("abcpass.p12"),
            certificatePassword: "123"
        )
        
    }
}
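To pull Swiftlane into such an executable package, the manifest could look roughly like the following sketch. The product name, branch, and platform are assumptions based on the import above, and AppStoreConnect comes from its own repository (see the Actions section), so consult the Examples folder for the exact dependency setup.

// swift-tools-version:5.6
// Package.swift sketch - names and versions below are assumptions, not taken from the Swiftlane docs.
import PackageDescription

let package = Package(
    name: "MyDeployScript",
    platforms: [.macOS(.v12)],
    dependencies: [
        // Swiftlane repository; pin to a tag or commit in real use.
        .package(url: "https://github.com/onmyway133/Swiftlane", branch: "main"),
    ],
    targets: [
        .executableTarget(
            name: "MyDeployScript",
            dependencies: [
                .product(name: "Swiftlane", package: "Swiftlane"),
            ]
        ),
    ]
)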

Actions

iOS

  •  Build: build project
  •  Test: test project
  •  Archive: archive project
  •  ExportArchive: export archive
  •  AppStore Connect: use https://github.com/onmyway133/AppStoreConnect
  •  GetBuildSettings: get project build settings
  •  GenerateIcon: generate app icon set
  •  Screenshot: take screenshot
  •  Frame: frame screenshot
  •  UploadASC: upload IPA to AppStore Connect

ASC

  •  Fetch certificates
  •  Fetch provisioning profiles
  •  Save certificates into file system
  •  Save profiles into file system
  •  Install provisioning profile
  •  Fetch TestFlight versions
  •  Fetch TestFlight builds
  •  Fetch latest TestFlight build number

Project

  •  Set version
  •  Set build number
  •  Increment build number

Keychain

  •  Create custom keychain
  •  Unlock keychain
  •  Delete keychain
  •  List searchable keychain paths
  •  Add keychain to searchable paths
  •  Import certificate into keychain

Simulator

  •  Boot a simulator
  •  Update and style simulator

Xcode

  •  Print current Xcode path

macOS

  •  Notarize: notarize project
  •  MakeDMG: package as DMG
  •  Sparkle: update Sparkle Appcast file

Standard

  •  Slack: send message to a Slack channel
  •  RunScript: run arbitrary script
  •  PrintWorkingDirectory: print current working directory
  •  S3: upload to S3
  •  Setapp: upload to Setapp
  •  Download: download file
  •  MoveFile: move file
  •  CopyFile: copy file
  •  AppCenter: use appcenter-cli

Settings

Configurations via Settings

  • Console: log to console
  • FileSystem: interact with file system
  • Environment: read environment values
  • CommandLine: run command line tools

Credit

Download Details:

Author: onmyway133
Source Code: https://github.com/onmyway133/Swiftlane 
License: MIT license

#swift #ios #build 


Weaver: Dependency Injection framework for Swift (iOS/macOS/Linux)

Weaver

Declarative, easy-to-use and safe Dependency Injection framework for Swift (iOS/macOS/Linux)

Watch the video 

Features

  •  Dependency declaration via property wrappers or comments
  •  DI Containers auto-generation
  •  Dependency Graph compile time validation
  •  ObjC support
  •  Non-optional dependency resolution
  •  Type safety
  •  Injection with arguments
  •  Registration Scopes
  •  DI Container hierarchy
  •  Thread safe

Dependency Injection

Dependency Injection basically means "giving an object its instance variables" ¹. It seems like it's not such a big deal, but as soon as a project gets bigger, it gets tricky. Initializers become too complex, passing down dependencies through several layers becomes time consuming and just figuring out where to get a dependency from can be hard enough to give up and finally use a singleton.

However, Dependency Injection is a fundamental aspect of software architecture, and there is no good reason not to do it properly. That's where Weaver can help.

What is Weaver?

Weaver is a declarative, easy-to-use and safe Dependency Injection framework for Swift.

  • Declarative because it allows developers to declare dependencies via annotations directly in the Swift code.
  • Easy-to-use because it generates the necessary boilerplate code to inject dependencies into Swift types.
  • Safe because it's all happening at compile time. If it compiles, it works.

How does Weaver work?

                                                                         |-> validate() -> valid/invalid 
swift files -> scan() -> [Token] -> parse() -> AST -> link() -> Graph -> | 
                                                                         |-> generate() -> source code 

Weaver scans the Swift sources of the project, looking for annotations, and generates an AST (abstract syntax tree). It uses SourceKitten which is backed by Apple's SourceKit.

The AST then goes through a linking phase, which outputs a dependency graph.

Some safety checks are then performed on the dependency graph in order to ensure that the generated code won't crash at runtime. Issues are reported in a friendly way in Xcode to make them easier to correct.

Finally, Weaver generates the boilerplate code which can directly be used to make the dependency injections happen.

Installation

(1) - Weaver command

Weaver can be installed using Homebrew, CocoaPods, or manually.

Binary form

Download the latest release with the prebuilt binary from release tab. Unzip the archive into the desired destination and run bin/weaver

Homebrew

$ brew install weaver

CocoaPods

Add the following to your Podfile:

pod 'WeaverDI'

This will download the Weaver binaries and dependencies in Pods/ during your next pod install execution and will allow you to invoke it via ${PODS_ROOT}/WeaverDI/weaver/bin/weaver in your Script Build Phases.

This is the best way to install a specific version of Weaver since Homebrew cannot automatically install a specific version.

Mint

To use Weaver via Mint, prefix the normal usage with mint run scribd/Weaver like so:

mint run scribd/Weaver version

To use a specific version of Weaver, add the release tag like so:

mint run scribd/Weaver@1.0.7 version

Building from source

Download the latest release source code from the release tab or clone the repository.

In the project directory, run brew update && brew bundle && make install to build and install the command line tool.

Check installation

Run the following to check if Weaver has been installed correctly.

$ weaver swift --help

Usage:

    $ weaver swift

Options:
    --project-path - Project's directory.
    --config-path - Configuration path.
    --main-output-path - Where the swift code gets generated.
    --tests-output-path - Where the test helpers gets generated.
    --input-path - Paths to input files.
    --ignored-path - Paths to ignore.
    --cache-path - Where the cache gets stored.
    --recursive-off
    --tests - Activates the test helpers' generation.
    --testable-imports - Modules to imports in the test helpers.
    --swiftlint-disable-all - Disables all swiftlint rules.

(2) - Weaver build phase

In Xcode, add the following command to a command line build phase:

weaver swift --project-path $PROJECT_DIR/$PROJECT_NAME --main-output-path output/relative/path

Important - Move this build phase above the Compile Source phase so that Weaver can generate the boilerplate code before compilation happens.

Basic Usage

For a more complete usage example, please check out the sample project.

Let's implement a simple app displaying a list of movies. It will be composed of three noticeable objects:

  • AppDelegate where the dependencies are registered.
  • MovieManager providing the movies.
  • MoviesViewController showing a list of movies at the screen.

Let's get into the code.

AppDelegate with comment annotations:

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?

    private let dependencies = MainDependencyContainer.appDelegateDependencyResolver()
    
    // weaver: movieManager = MovieManager <- MovieManaging
    // weaver: movieManager.scope = .container
    
    // weaver: moviesViewController = MoviesViewController <- UIViewController
    // weaver: moviesViewController.scope = .container
    
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
        
        window = UIWindow()

        let rootViewController = dependencies.moviesViewController
        window?.rootViewController = UINavigationController(rootViewController: rootViewController)
        window?.makeKeyAndVisible()
        
        return true
    }
}

AppDelegate registers two dependencies:

  • // weaver: movieManager = MovieManager <- MovieManaging
  • // weaver: moviesViewController = MoviesViewController <- UIViewController

These dependencies are made accessible to any object built from AppDelegate because their scope is set to container:

  • // weaver: movieManager.scope = .container
  • // weaver: moviesViewController.scope = .container

A dependency registration automatically generates the registration code and one accessor in AppDelegateDependencyContainer, which is why the rootViewController can be built:

  • let rootViewController = dependencies.moviesViewController.

AppDelegate with property wrapper annotations:

Since Weaver 1.0.1, you can use property wrappers instead of annotations in comments.

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?
    
    // Must be declared first!
    private let dependencies = MainDependencyContainer.appDelegateDependencyResolver()

    @Weaver(.registration, type: MovieManager.self, scope: .container)
    private var movieManager: MovieManaging
    
    @Weaver(.registration, type: MoviesViewController.self, scope: .container)
    private var moviesViewController: UIViewController
    
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
        
        window = UIWindow()

        window?.rootViewController = UINavigationController(rootViewController: moviesViewController)
        window?.makeKeyAndVisible()
        
        return true
    }
}

Note how dependencies can be accessed from the self instance directly.

Also note that the dependencies object must be declared and created prior to any other Weaver annotation. Not doing so would immediately crash the application.

It is possible to use comment and property wrapper annotations in the same type.

MovieManager:

protocol MovieManaging {
    
    func getMovies(_ completion: @escaping (Result<Page<Movie>, MovieManagerError>) -> Void)
}

final class MovieManager: MovieManaging {

    func getMovies(_ completion: @escaping (Result<Page<Movie>, MovieManagerError>) -> Void) {
        // fetches movies from the server...
        completion(.success(movies))        
    }
}

MoviesViewController with comment annotations:

final class MoviesViewController: UIViewController {
    
    private let dependencies: MoviesViewControllerDependencyResolver
    
    private var movies = [Movie]()
    
    // weaver: movieManager <- MovieManaging
    
    required init(injecting dependencies: MoviesViewControllerDependencyResolver) {
        self.dependencies = dependencies
        super.init(nibName: nil, bundle: nil)
    }
    
    override func viewDidLoad() {
        super.viewDidLoad()

        // Setups the tableview... 
        
        // Fetches the movies
        dependencies.movieManager.getMovies { result in
            switch result {
            case .success(let page):
                self.movies = page.results
                self.tableView.reloadData()
                
            case .failure(let error):
                self.showError(error)
            }
        }
    }

    // ... 
}

MoviesViewController declares a dependency reference:

  • // weaver: movieManager <- MovieManaging

This annotation generates an accessor in MoviesViewControllerDependencyResolver, but no registration, which means MovieManager is not stored in MoviesViewControllerDependencyContainer, but in its parent (the container from which it was built). In this case, AppDelegateDependencyContainer.

MoviesViewController also needs to declare a specific initializer:

  • required init(injecting dependencies: MoviesViewControllerDependencyResolver)

This initializer is used to inject the DI Container. Note that MoviesViewControllerDependencyResolver is a protocol, which means a fake version of the DI Container can be injected when testing.

MoviesViewController with property wrapper annotations:

final class MoviesViewController: UIViewController {
    
    private var movies = [Movie]()

    @Weaver(.reference)
    private var movieManager: MovieManaging
    
    required init(injecting _: MoviesViewControllerDependencyResolver) {
        super.init(nibName: nil, bundle: nil)
    }
    
    override func viewDidLoad() {
        super.viewDidLoad()

        // Setups the tableview... 
        
        // Fetches the movies
        movieManager.getMovies { result in
            switch result {
            case .success(let page):
                self.movies = page.results
                self.tableView.reloadData()
                
            case .failure(let error):
                self.showError(error)
            }
        }
    }

    // ... 
}

API

Code Annotations

Weaver allows you to declare dependencies by annotating the code with comments like // weaver: ... or property wrappers like @Weaver(...) var ...

It currently supports the following annotations:

- Registration

Adds the dependency builder to the container.

Adds an accessor for the dependency to the container's resolver protocol.

Example:

// weaver: dependencyName = DependencyConcreteType <- DependencyProtocol

@Weaver(.registration, type: DependencyConcreteType.self) 
var dependencyName: DependencyProtocol

or

// weaver: dependencyName = DependencyConcreteType

@Weaver(.registration) 
var dependencyName: DependencyConcreteType

dependencyName: Dependency's name. Used to make reference to the dependency in other objects and/or annotations.

DependencyConcreteType: Dependency's implementation type. Can be a struct or a class.

DependencyProtocol: Dependency's protocol if any. Optional, you can register a dependency with its concrete type only.

- Reference

Adds an accessor for the dependency to the container's protocol.

Example:

// weaver: dependencyName <- DependencyType

@Weaver(.reference) 
var dependencyName: DependencyType

DependencyType: Either the concrete or abstract type of the dependency. This also defines the type the dependency's accessor returns.

- Parameter

Adds a parameter to the container's resolver protocol. This means that the generated container needs to take these parameters at initialization. It also means that all the concerned dependency accessors need to take these parameters.

Example:

// weaver: parameterName <= ParameterType

@Weaver(.parameter) 
var parameterName: ParameterType

- Scope

Sets the scope of a dependency. The default scope is container. Only works for registrations or weak parameters.

The scope defines a dependency lifecycle. Four scopes are available:

transient: Always creates a new instance when resolved.

container: Builds an instance at initialization of its container and lives as long as its container lives.

weak: A new instance is created when resolved the first time and then lives as long as its strong references are living.

lazy: A new instance is created when resolved the first time, with the same lifetime as its container.

Example:

// weaver: dependencyName.scope = .scopeValue

@Weaver(.registration, scope: .scopeValue)
var dependencyName: DependencyType

scopeValue: Value of the scope. It can be one of the values described above.

- Custom Builder

Overrides a dependency's default initialization code.

Works for registration annotations only.

Example:

// weaver: dependencyName.builder = DependencyType.make

@Weaver(.registration, builder: DependencyType.make) 
var dependencyName: DependencyType

DependencyType.make: Code overriding the dependency's initialization code taking DependencyTypeInputDependencyResolver as a parameter and returning DependencyType (e.g. make's signature could be static func make(_ dependencies: DependencyTypeInputDependencyResolver) -> DependencyType).

Warning - Make sure you don't do anything unsafe with the DependencyResolver parameter passed down in this method since it won't be caught by the dependency graph validator.

- Configuration

Sets a configuration attribute to the concerned object.

Example:

// weaver: dependencyName.attributeName = aValue

@Weaver(..., attributeName: aValue, ...) 
var dependencyName: DependencyType

Configuration Attributes:

isIsolated: Bool (default: false): any object setting this to true is considered by Weaver as an object which isn't used in the project. An object flagged as isolated can only have isolated dependents. This attribute is useful for developing a feature without all of its dependencies set up in the project.

setter: Bool (default: false): generates a setter (setDependencyName(dependency)) in the dependency container. Note that a dependency using a setter has to be set manually before being accessed through a dependency resolver or it will crash.

objc: Bool (default: false): generates an ObjC-compliant resolver for a given dependency, allowing it to be accessed from ObjC code.

escaping: Bool (default: true when applicable): asks Weaver to use @escaping when declaring a closure parameter.

platforms: [Platform] (default: []): List of platforms for which Weaver is allowed to use the dependency. An empty list means any platform is allowed.

Using property wrappers with parameters:

Types using parameter annotations need to take the said parameters as input when being registered or referenced. This is particularly true when using property wrappers, because the annotation won't compile if this isn't done correctly.

For example, the following shows how a type taking two parameters at initialization can be annotated:

final class MovieViewController {

   @Weaver(.parameter) private var movieID: Int
   
   @Weaver(.parameter) private var movieTitle: String
}

And how that same type can be registered and referenced:

@WeaverP2(.registration)
private var movieViewController: (Int, String) -> MovieViewController

@WeaverP2(.reference)
private var movieViewController: (Int, String) -> MovieViewController

Note that Weaver generates one property wrapper per amount of input parameters, so if a type takes one parameter WeaverP1 shall be used, for two parameters, WeaverP2, and so on.

Writing tests:

Weaver can also generate a dependency container stub which can be used for testing. This feature is accessible by adding the option --tests to the command (e.g. weaver swift --tests).

To compile, the stub expects certain type doubles to be implemented.

For example, given the following code:

final class MovieViewController {
   @Weaver(.reference) private var movieManager: MovieManaging
}

The generated stub expects MovieManagingDouble to be implemented in order to compile.
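For illustration, such a double could look like the following sketch; it assumes the getMovies signature shown earlier and records the didRequestMovies flag asserted in the test below (the Page(results:) initializer is a guess, not something Weaver provides):

final class MovieManagingDouble: MovieManaging {

    private(set) var didRequestMovies = false

    func getMovies(_ completion: @escaping (Result<Page<Movie>, MovieManagerError>) -> Void) {
        // Record the call so the test can assert on it.
        didRequestMovies = true
        // Return an empty page; `Page(results:)` is an assumed initializer.
        completion(.success(Page(results: [])))
    }
}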

Testing MovieViewController can then be written as follows:

final class MovieViewControllerTests: XCTestCase {

    func test_view_controller() {
        let dependencies = MainDependencyResolverStub()
        let viewController = dependencies.buildMovieViewController()
        
        viewController.viewDidLoad()
        
        XCTAssertEqual(dependencies.movieManagerDouble.didRequestMovies, true)
    }
}

Generate Swift Files

To generate the boilerplate code, the swift command shall be used.

$ weaver swift --help

Usage:

    $ weaver swift

Options:
    --project-path - Project's directory.
    --config-path - Configuration path.
    --main-output-path - Where the swift code gets generated.
    --tests-output-path - Where the test helpers gets generated.
    --input-path - Paths to input files.
    --ignored-path - Paths to ignore.
    --cache-path - Where the cache gets stored.
    --recursive-off
    --tests - Activates the test helpers' generation.
    --testable-imports - Modules to imports in the test helpers.
    --swiftlint-disable-all - Disables all swiftlint rules.
    --platform - Targeted platform.
    --included-imports - Included imports.
    --excluded-imports - Excluded imports.

Example:

weaver swift --project-path $PROJECT_DIR/$PROJECT_NAME --main-output-path Generated

Parameters:

  • --project-path: Acts like a base path for other relative paths like config-path, output-path, template-path, input-path and ignored-path. It defaults to the running directory.
  • --config-path: Path to a configuration file. By default, Weaver automatically detects .weaver.yaml and .weaver.json located at project-path.
  • --main-output-path: Path where the code will be generated. Defaults to project-path.
  • --tests-output-path: Path where the test utils code will be generated. Defaults to project-path.
  • --input-path: Path to the project's Swift code. Defaults to project-path. Variadic parameter, which means it can be set more than once. By default, Weaver recursively reads any Swift file located under the input-path.
  • --ignored-path: Same as input-path, but for ignoring files which shouldn't be parsed by Weaver.
  • --recursive-off: Deactivates recursivity for input-path and ignored-path.
  • --tests - Activates the test helpers' generation.
  • --testable-imports - Modules to import in the test helpers. Variadic parameter, which means it can be set more than once.
  • --swiftlint-disable-all - Disables all swiftlint rules in generated files.
  • --platform - Platform for which the generated code will be compiled (iOS, watchOS, OSX, macOS or tvOS).
  • --included-imports - Modules which can be imported in generated files.
  • --excluded-imports - Modules which can't be imported in generated files.

Configuration File:

Weaver can read a configuration file rather than getting its parameters from the command line. It supports both json and yaml formats.

To configure Weaver with a file, write a file named .weaver.yaml or .weaver.json at the root of your project.

Parameters are named the same, but snake-cased. They also work the same way, with one exception: project_path cannot be defined in a configuration file. Weaver automatically sets its value to the configuration file's location.

For example, the sample project configuration looks like:

main_output_path: Sample/Generated
input_paths:
  - Sample
ignored_paths:
  - Sample/Generated

Caching & Cleaning

In order to avoid parsing the same Swift files over and over again, Weaver has a cache system built in, which means it won't reprocess files that haven't changed since the last time they were processed.

Using this functionality is great in a development environment because it makes Weaver's build phase much faster most of the time. However, on CI it is preferable to let Weaver process the Swift files every time for safety, which is what the clean command is for.

For example, the following always processes all of the swift code:

$ weaver clean
$ weaver swift 

Export Dependency Graph

Weaver can output a JSON representation of the dependency graph of a project.

$ weaver json --help
Usage:

    $ weaver json

Options:
    --project-path - Project's directory.
    --config-path - Configuration path.
    --pretty [default: false]
    --input-path - Paths to input files.
    --ignored-path - Paths to ignore.
    --cache-path - Cache path.
    --recursive-off
    --platform - Selected platform

For an output example, please check this Gist.

Migration guides

More content...

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

Talks

Tutorials

If you're looking for a step by step tutorial, check out these links.


Download Details:

Author: scribd
Source Code: https://github.com/scribd/Weaver 
License: MIT license

#swift #ios #managed #terraform #mobile #build #tooling 


How to Package Django Apps Using Docker, NGINX, and Gunicorn

In this tutorial, we will explore setting up a Django app with Gunicorn as the WSGI server and NGINX as the proxy server. For ease of setup, the guide packages all of these using Docker. It therefore assumes that you have at least intermediate-level experience with Docker and Docker Compose and at least beginner-level skills in Django.

Creating a Sample Django App

The app will be a simple Django app that displays a "hello world" message using an HttpResponse. The starter code for the app can be found at this GitHub link.

Download and use it for the rest of the guide.
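For reference, the kind of view behind that message is sketched below; the actual starter code may organize it differently:

# views.py - minimal sketch of a "hello world" view (not the exact starter code)
from django.http import HttpResponse

def index(request):
    return HttpResponse("hello world")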

Packaging the Django App Using Docker

For a multi-container application, this activity is done in two stages: 1) developing the Docker file for the main application, and 2) stitching everything up with the rest of the containers using Docker Compose.

App Docker File

The Docker file is simple. It sets up the Django app within its own image.

FROM python:3.8.3-alpine

ENV MICRO_SERVICE=/home/app/microservice
# user and group that will run the app ($APP_USER must be defined before it is used)
ENV APP_USER=app
RUN addgroup -S $APP_USER && adduser -S $APP_USER -G $APP_USER

# set work directory
RUN mkdir -p $MICRO_SERVICE
RUN mkdir -p $MICRO_SERVICE/static

# where the code lives
WORKDIR $MICRO_SERVICE

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install psycopg2 dependencies
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev gcc python3-dev musl-dev \
    && apk del build-deps \
    && apk --no-cache add musl-dev linux-headers g++
# install dependencies
RUN pip install --upgrade pip
# copy project
COPY . $MICRO_SERVICE
RUN pip install -r requirements.txt
COPY ./entrypoint.sh $MICRO_SERVICE

CMD ["/bin/bash", "/home/app/microservice/entrypoint.sh"]

Project Docker Compose File

Docker Compose will stitch together three services: the Django app served by Gunicorn (web), the Postgres database (db), and the NGINX proxy (nginx).

To fully build the Nginx container, you need a dedicated Docker file and conf file for it. Within your sampleApp folder, create a folder named nginx. Within the nginx directory, create a dockerfile and copy the code block below:

FROM nginx:1.19.0-alpine

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d

In the same folder, create a file named nginx.conf and copy the code block below. This is the code that is responsible for setting up nginx.

upstream sampleapp {
    server web:8000;
}

server {

    listen 80;

    location / {
        proxy_pass http://sampleapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
    location /static/ {
        alias /home/app/microservice/static/;
    }

}

After this is done, create the main docker-compose.yml file. This will be the file responsible for running the whole project. In the main project folder, sampleApp, create a file named docker-compose.yml and copy the code block below.

version: '3.7'

services:
  nginx:
    build: ./nginx
    ports:
      - 1300:80
    volumes:
      - static_volume:/home/app/microservice/static
    depends_on:
      - web
    restart: "on-failure"
  web:
    build: . #build the image for the web service from the dockerfile in parent directory
    command: sh -c "python manage.py makemigrations &&
                    python manage.py migrate &&
                    python manage.py initiate_admin &&
                    python manage.py collectstatic &&
                    gunicorn sampleApp.wsgi:application --bind 0.0.0.0:${APP_PORT}"
    volumes:
      - .:/microservice:rw # map data and files from the parent directory on the host to the microservice directory in the Docker container
      - static_volume:/home/app/microservice/static
    env_file:
      - .env
    image: sampleapp

    expose:
      - ${APP_PORT}
    restart: "on-failure"
    depends_on:
      - db
  db:
    image: postgres:11-alpine
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
      - PGPORT=${DB_PORT}
      - POSTGRES_USER=${POSTGRES_USER}
    restart: "on-failure"


volumes:
  postgres_data:
  static_volume:
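The compose file loads several values from a .env file in the project root (env_file: .env). Based on the variables referenced above, that file needs entries along these lines; the values shown are placeholders:

# .env - placeholder values; use your own credentials
APP_PORT=8000
POSTGRES_USER=sampleapp
POSTGRES_PASSWORD=supersecret
DB_NAME=sampleapp_db
DB_PORT=5432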

Testing the Live Dockerized App

The whole project is set up and all that remains is to run it. Run the below Docker Compose command to spin up the containers.

docker-compose up --build

To test that the whole project works (database, application, and nginx containers), access the app's home page and admin page. The home page, at URL 0.0.0.0:1300, should display a simple "hello world" message.

The admin page URL is 0.0.0.0:1300/admin. Use the test credentials:

Username: admin

password: mypass123

You should see a screen like the following.

(Screenshot: app "hello world" page)

This is what the admin page looks like.

(Screenshot: admin page)

Happy Coding !!!

Originally published by Kimaru Thagana at Pluralsight.


Just: The Task Library That Just Works

Just

Just is a library that organizes build tasks for your JS projects. It consists of

  • a build task definition library
  • sane preset build flows for node and browser projects featuring TypeScript, Webpack and jest
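As a quick illustration of the task definition library in use, a consuming project's task file might look roughly like this sketch (the task and series helpers are described in the documentation linked below; the task bodies are placeholders):

// just.config.js - a sketch of defining and composing tasks with just-task
const { task, series } = require('just-task');

task('clean', () => {
  // remove build output (placeholder)
});

task('compile', () => {
  // run the TypeScript / webpack build (placeholder)
});

// the "build" task runs clean, then compile
task('build', series('clean', 'compile'));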

Building

This README contains only the instructions on how to build and contribute to the project. This is a monorepo that uses the lerna monorepo management utility. To get started, simply run the following:

yarn

and build all the packages this way:

yarn build

Development is usually done one package at a time. So go into each package and develop with the innerloop npm script:

cd packages/just-task
yarn dev

Tests are run with the test npm script:

cd packages/just-task
yarn test

Packages

Package | Description
just-task | The task definition library that wraps the undertaker and yargs libraries
just-scripts | A reusable preset of frequently used tasks in node and browser projects
just-scripts-utils | A set of utilities for just-scripts
just-task-logger | A shared pretty logger used to display timestamps along with a message
documentation | The Docusaurus site content and styles which generate the GitHub page for this library

Documentation

All the documentation is online at https://microsoft.github.io/just/

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com. Please refer to the Contribution guide for more details.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Download Details:

Author: Microsoft
Source Code: https://github.com/microsoft/just 
License: MIT license

#typescript #javascript #build 


Rushstack: Monorepo for tools Developed By The Rush Stack Community

rushstack

The home for various projects maintained by the Rush Stack community, whose mission is to develop reusable tooling for large scale TypeScript monorepos.

Documentation Links

  • What is Rush Stack? - learn about the mission behind these projects
  • API reference - browse API documentation for NPM packages
  • Zulip chat room - chat with the Rush Stack developers
  • Rush - a build orchestrator for large scale TypeScript monorepos
  • API Extractor - create .d.ts rollups and track your TypeScript API signatures
  • API Documenter - use TSDoc comments to publish an API documentation website

Related Repos

These GitHub repositories provide supplementary resources for Rush Stack:

  • rushstack-samples - a monorepo with sample projects that illustrate various project setups, including how to use Heft with other popular JavaScript frameworks
  • rush-example - a minimal Rush repo that demonstrates the fundamentals of Rush without relying on any other Rush Stack tooling
  • rushstack-legacy - older projects that are still maintained but no longer actively developed

Published Packages

Folder - Package

  • /apps/api-documenter - @microsoft/api-documenter
  • /apps/api-extractor - @microsoft/api-extractor
  • /apps/heft - @rushstack/heft
  • /apps/lockfile-explorer - @rushstack/lockfile-explorer
  • /apps/rundown - @rushstack/rundown
  • /apps/rush - @microsoft/rush
  • /eslint/eslint-config - @rushstack/eslint-config
  • /eslint/eslint-patch - @rushstack/eslint-patch
  • /eslint/eslint-plugin - @rushstack/eslint-plugin
  • /eslint/eslint-plugin-packlets - @rushstack/eslint-plugin-packlets
  • /eslint/eslint-plugin-security - @rushstack/eslint-plugin-security
  • /heft-plugins/heft-dev-cert-plugin - @rushstack/heft-dev-cert-plugin
  • /heft-plugins/heft-jest-plugin - @rushstack/heft-jest-plugin
  • /heft-plugins/heft-sass-plugin - @rushstack/heft-sass-plugin
  • /heft-plugins/heft-serverless-stack-plugin - @rushstack/heft-serverless-stack-plugin
  • /heft-plugins/heft-storybook-plugin - @rushstack/heft-storybook-plugin
  • /heft-plugins/heft-webpack4-plugin - @rushstack/heft-webpack4-plugin
  • /heft-plugins/heft-webpack5-plugin - @rushstack/heft-webpack5-plugin
  • /libraries/api-extractor-model - @microsoft/api-extractor-model
  • /libraries/debug-certificate-manager - @rushstack/debug-certificate-manager
  • /libraries/heft-config-file - @rushstack/heft-config-file
  • /libraries/load-themed-styles - @microsoft/load-themed-styles
  • /libraries/localization-utilities - @rushstack/localization-utilities
  • /libraries/module-minifier - @rushstack/module-minifier
  • /libraries/node-core-library - @rushstack/node-core-library
  • /libraries/package-deps-hash - @rushstack/package-deps-hash
  • /libraries/rig-package - @rushstack/rig-package
  • /libraries/rush-lib - @microsoft/rush-lib
  • /libraries/rush-sdk - @rushstack/rush-sdk
  • /libraries/stream-collator - @rushstack/stream-collator
  • /libraries/terminal - @rushstack/terminal
  • /libraries/tree-pattern - @rushstack/tree-pattern
  • /libraries/ts-command-line - @rushstack/ts-command-line
  • /libraries/typings-generator - @rushstack/typings-generator
  • /libraries/worker-pool - @rushstack/worker-pool
  • /rigs/heft-node-rig - @rushstack/heft-node-rig
  • /rigs/heft-web-rig - @rushstack/heft-web-rig
  • /rush-plugins/rush-amazon-s3-build-cache-plugin - @rushstack/rush-amazon-s3-build-cache-plugin
  • /rush-plugins/rush-azure-storage-build-cache-plugin - @rushstack/rush-azure-storage-build-cache-plugin
  • /rush-plugins/rush-serve-plugin - @rushstack/rush-serve-plugin
  • /webpack/hashed-folder-copy-plugin - @rushstack/hashed-folder-copy-plugin
  • /webpack/loader-load-themed-styles - @microsoft/loader-load-themed-styles
  • /webpack/loader-raw-script - @rushstack/loader-raw-script
  • /webpack/preserve-dynamic-require-plugin - @rushstack/webpack-preserve-dynamic-require-plugin
  • /webpack/set-webpack-public-path-plugin - @rushstack/set-webpack-public-path-plugin
  • /webpack/webpack-plugin-utilities - @rushstack/webpack-plugin-utilities
  • /webpack/webpack4-localization-plugin - @rushstack/webpack4-localization-plugin
  • /webpack/webpack4-module-minifier-plugin - @rushstack/webpack4-module-minifier-plugin
  • /webpack/webpack5-localization-plugin - @rushstack/webpack5-localization-plugin
  • /webpack/webpack5-module-minifier-plugin - @rushstack/webpack5-module-minifier-plugin

Unpublished Local Projects

Folder - Description

  • /apps/lockfile-explorer-web - Rush Lockfile Explorer: helper project for building the React web application component
  • /build-tests-samples/heft-node-basic-tutorial - (Copy of sample project) Building this project is a regression test for Heft
  • /build-tests-samples/heft-node-jest-tutorial - (Copy of sample project) Building this project is a regression test for Heft
  • /build-tests-samples/heft-node-rig-tutorial - (Copy of sample project) Building this project is a regression test for Heft
  • /build-tests-samples/heft-serverless-stack-tutorial - (Copy of sample project) Building this project is a regression test for Heft
  • /build-tests-samples/heft-storybook-react-tutorial - (Copy of sample project) Building this project is a regression test for Heft
  • /build-tests-samples/heft-storybook-react-tutorial-storykit - Storybook build dependencies for heft-storybook-react-tutorial
  • /build-tests-samples/heft-web-rig-app-tutorial - (Copy of sample project) Building this project is a regression test for Heft
  • /build-tests-samples/heft-web-rig-library-tutorial - (Copy of sample project) Building this project is a regression test for Heft
  • /build-tests-samples/heft-webpack-basic-tutorial - (Copy of sample project) Building this project is a regression test for Heft
  • /build-tests-samples/packlets-tutorial - (Copy of sample project) Building this project is a regression test for @rushstack/eslint-plugin-packlets
  • /build-tests/api-documenter-scenarios - Building this project is a regression test for api-documenter
  • /build-tests/api-documenter-test - Building this project is a regression test for api-documenter
  • /build-tests/api-extractor-lib1-test - Building this project is a regression test for api-extractor
  • /build-tests/api-extractor-lib2-test - Building this project is a regression test for api-extractor
  • /build-tests/api-extractor-lib3-test - Building this project is a regression test for api-extractor
  • /build-tests/api-extractor-scenarios - Building this project is a regression test for api-extractor
  • /build-tests/api-extractor-test-01 - Building this project is a regression test for api-extractor
  • /build-tests/api-extractor-test-02 - Building this project is a regression test for api-extractor
  • /build-tests/api-extractor-test-03 - Building this project is a regression test for api-extractor
  • /build-tests/api-extractor-test-04 - Building this project is a regression test for api-extractor
  • /build-tests/eslint-7-test - This project contains a build test to validate ESLint 7 compatibility with the latest version of @rushstack/eslint-config (and by extension, the ESLint plugin)
  • /build-tests/hashed-folder-copy-plugin-webpack4-test - Building this project exercises @rushstack/hashed-folder-copy-plugin with Webpack 4.
  • /build-tests/hashed-folder-copy-plugin-webpack5-test - Building this project exercises @rushstack/hashed-folder-copy-plugin with Webpack 5. NOTE - THIS TEST IS CURRENTLY EXPECTED TO BE BROKEN
  • /build-tests/heft-action-plugin - This project contains a Heft plugin that adds a custom action
  • /build-tests/heft-action-plugin-test - This project exercises a custom Heft action
  • /build-tests/heft-copy-files-test - Building this project tests copying files with Heft
  • /build-tests/heft-example-plugin-01 - This is an example heft plugin that exposes hooks for other plugins
  • /build-tests/heft-example-plugin-02 - This is an example heft plugin that taps the hooks exposed from heft-example-plugin-01
  • /build-tests/heft-fastify-test - This project tests Heft support for the Fastify framework for Node.js services
  • /build-tests/heft-jest-reporters-test - This project illustrates configuring Jest reporters in a minimal Heft project
  • /build-tests/heft-minimal-rig-test - This is a minimal rig package that is imported by the 'heft-minimal-rig-usage-test' project
  • /build-tests/heft-minimal-rig-usage-test - A test project for Heft that resolves its compiler from the 'heft-minimal-rig-test' package
  • /build-tests/heft-node-everything-esm-module-test - Building this project tests every task and config file for Heft when targeting the Node.js runtime when configured to use ESM module support
  • /build-tests/heft-node-everything-test - Building this project tests every task and config file for Heft when targeting the Node.js runtime
  • /build-tests/heft-parameter-plugin - This project contains a Heft plugin that adds a custom parameter to built-in actions
  • /build-tests/heft-parameter-plugin-test - This project exercises a built-in Heft action with a custom parameter
  • /build-tests/heft-sass-test - This project illustrates a minimal tutorial Heft project targeting the web browser runtime
  • /build-tests/heft-typescript-composite-test - Building this project tests behavior of Heft when the tsconfig.json file uses project references.
  • /build-tests/heft-web-rig-library-test - A test project for Heft that exercises the '@rushstack/heft-web-rig' package
  • /build-tests/heft-webpack4-everything-test - Building this project tests every task and config file for Heft when targeting the web browser runtime using Webpack 4
  • /build-tests/heft-webpack5-everything-test - Building this project tests every task and config file for Heft when targeting the web browser runtime using Webpack 5
  • /build-tests/install-test-workspace
  • /build-tests/localization-plugin-test-01 - Building this project exercises @microsoft/localization-plugin. This tests that the plugin works correctly without any localized resources.
  • /build-tests/localization-plugin-test-02 - Building this project exercises @microsoft/localization-plugin. This tests that the loader works correctly with the exportAsDefault option unset.
  • /build-tests/localization-plugin-test-03 - Building this project exercises @microsoft/localization-plugin. This tests that the plugin works correctly with the exportAsDefault option set to true.
  • /build-tests/rush-amazon-s3-build-cache-plugin-integration-test - Tests connecting to an amazon S3 endpoint
  • /build-tests/rush-project-change-analyzer-test - This is an example project that uses rush-lib's ProjectChangeAnalyzer to
  • /build-tests/set-webpack-public-path-plugin-webpack4-test - Building this project tests the set-webpack-public-path-plugin using Webpack 4
  • /build-tests/ts-command-line-test - Building this project is a regression test for ts-command-line
  • /libraries/rushell - Execute shell commands using a consistent syntax on every platform
  • /repo-scripts/doc-plugin-rush-stack - API Documenter plugin used with the rushstack.io website
  • /repo-scripts/generate-api-docs - Used to generate API docs for the rushstack.io website
  • /repo-scripts/repo-toolbox - Used to execute various operations specific to this repo
  • /rush-plugins/rush-litewatch-plugin - An experimental alternative approach for multi-project watch mode

Contributor Notice

This repo welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This repo has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Download Details:

Author: Microsoft
Source Code: https://github.com/microsoft/rushstack 

#typescript #nodejs #api #toolchain #build 

Rushstack: Monorepo for tools Developed By The Rush Stack Community
Awesome  Rust

Awesome Rust

1657998900

Cargo Udeps - Find Unused Dependencies in Cargo.toml

cargo-udeps

Find unused dependencies in Cargo.toml.

While compilation of this tool also works on Rust stable, it needs Rust nightly to actually run.

Installation

GitHub Releases

https://github.com/est31/cargo-udeps/releases

cargo install (crates.io)

cargo install cargo-udeps --locked

cargo install (master)

cargo install --git https://github.com/est31/cargo-udeps --locked

Dedicated packages

Some GNU/Linux distros have packaged cargo-udeps:

  • Nix/Nix OS: cargo-udeps
  • Arch Linux: pacman -S cargo-udeps

Usage

cargo +nightly udeps

It either prints out an "unused crates" line listing the crates, or it prints a line saying that no crates were unused.

Ignoring some of the dependencies

To ignore some of the dependencies, add package.metadata.cargo-udeps.ignore to Cargo.toml.

[package.metadata.cargo-udeps.ignore]
normal = ["if_chain"]
#development = []
#build = []

[dependencies]
if_chain = "1.0.0" # Used only in doc-tests, which `cargo-udeps` cannot check.

Known bugs

Some unused crates might not be detected. This includes crates used by std and its dependencies as well as crates that are already being used by dependencies of the studied crate.

Crates are currently only handled on a per name basis. Two crates with the same name but different versions would be a problem.

Trophy case

This is a list of cases where unused dependencies were found using cargo-udeps. You are welcome to expand it:

Download Details:
Author: est31
Source Code: https://github.com/est31/cargo-udeps
License: View license

#rust #rustlang

Cargo Udeps - Find Unused Dependencies in Cargo.toml
Veronica  Roob

Veronica Roob

1654115460

Builds PHP So That Multiple Versions Can Be Used Side By Side

php-build

php-build is a utility for building versions of PHP to use them side by side with each other. The overall structure is loosely borrowed from Sam Stephenson's ruby-build.

Installation

As phpenv plugin

With phpenv via installer

This is the standard way: it installs phpenv in $HOME/.phpenv (the default $PHPENV_ROOT value).

curl -L https://raw.githubusercontent.com/phpenv/phpenv-installer/master/bin/phpenv-installer \
    | bash

See https://github.com/phpenv/phpenv-installer for more: it installs phpenv + php-build/php-build (and other plugins), and can update all of them whenever you want.

With phpenv manually

Locate your phpenv directory:

% ls $HOME/.phpenv

Clone the Git repository into phpenv plugins directory:

% git clone https://github.com/php-build/php-build.git $HOME/.phpenv/plugins/php-build

Now you can use php-build as phpenv plugin, as follows:

% phpenv install <definition>

The built version will be installed into $HOME/.phpenv/versions/<definition>.

As standalone php-build

Clone the Git Repository:

% git clone https://github.com/php-build/php-build.git

Then go into the extracted/cloned directory and run:

% ./install.sh

This installs php-build to the default prefix /usr/local.

To install php-build to a location other than /usr/local, set the PREFIX environment variable:

% PREFIX=$HOME/local ./install.sh

If you don't have permissions to write to the prefix, then you have to run install.sh as superuser, either via su -c or via sudo.

Contributing

Issue reports and pull requests are always welcome.

All contributions will be reviewed and merged by the core team:

See CONTRIBUTING.md.

Changelog

See CHANGELOG.md.

License

php-build is released under the MIT License.

Author: php-build
Source Code: https://github.com/php-build/php-build
License: MIT License

#php #build 

Builds PHP So That Multiple Versions Can Be Used Side By Side
Hermann  Frami

Hermann Frami

1652967000

A Node.js Focused Build Plugin for Serverless

A Node.js focused build optimizer plugin for serverless.

Replaces the packaging functionality for Node.js functions.
Typically this will result in significant size reductions, especially with webpack.

  • Bundling of functions in a Node.js friendly way
  • Arbitrary builds, with your own build file or a webpack config.

INSTALLATION

Serverless build plugin is available as an NPM module.

yarn add --dev serverless-build-plugin

Once installed, add serverless-build-plugin to your serverless plugin registry.

CONFIGURATION

Serverless build plugin can be configured by either or both of the following methods:

  • Creating a serverless.build.yml file.
  • Setting a custom.build section in your project's serverless.yml.

If no configuration is found, default settings are used.

Serverless projects can be built using one of the following methods:

  • bundle
    • Bundle your functions - keeps their directory structure. Based on globbing and module dependency resolution.
  • file
    • Understands a webpack.config.js
    • Any file can be specified, as long as the default export is a function which returns Promise<string|Buffer|stream>

See test/1.0 for an example project.

bundle

The bundle build method.

  • A Node.js-optimized version of the built-in package plugin.
  • Each file can be optionally transpiled with:
    • babel
    • uglify
  • node_modules are whitelisted based on the package.json dependencies, resolved recursively and reliably.

To use babeli, add it to your .babelrc with the preset

method: bundle

# babel
#
# Each file can be babel transpiled. When set to:
#   - An object, the object is parsed as babel configuration.
#   - `true`, a `.babelrc` in the service's directory is used as babel configuration.
#  Default is `null`.
babel: true

# Define core babel package
# Default is "babel-core", can be "@babel/core" for babel 7 (see https://babeljs.io/docs/en/v7-migration)
babelCore: "babel-core"

# uglify
#
# To minify each file.
# Default is `false`.
uglify: false

# uglifyModules
#
# `node_modules` will be uglified. Requires `uglify` to be `true`.
uglifyModules: true

# uglifySource
#
# source will be uglified. Requires `uglify` to be `true`.
uglifySource: false

# sourceMaps
#
# Includes inline source maps for `babel` and `uglify`.
# Default is `true`.
sourceMaps: true

# functions
#
# Like the serverless.yml functions definition, but only for build options
functions:
  myFunction:
    # include
    #
    # An array of glob patterns to match against, including each file
    include:
      - functions/one/**
      - lib/one/**

    # exclude
    #
    # An array of glob patterns to exclude from the `include`
    exclude:
      - "**/*.json"

    modules:
      # modules.exclude
      #
      # Exclude specific node_modules for a function
      exclude:
        - lutils

      # modules.excludeDeep
      #
      # Exclude deeply nested node_modules for a function
      excludeDeep:
        - lutils

# include
#
# Included for all functions
include:
  - "someFolder"
# exclude
#
# Excluded for all functions
exclude:
  - "*" # Ignores the root directory

file

The file build method.

  • Use a build file to package functions
  • Use webpack, by exporting a webpack config
method: file

# tryFiles
#
# An array of file patterns to match against as a build file
# This allows you to prefer certain methods over others when
# selecting a build file.
tryFiles:
  - 'webpack.config.js'

# Customize your file extension for locating your entry points in webpack
# Eg. if using TypeScript, set it to `ts`, so that a functions handler of src/myStuff/handler.handler file resolves to ./src/myStuff/handler.ts
handlerEntryExt: 'js' 

The build file handles the default export with this logic:

  • First resolves any Function or Promise to its value
  • When Object:
    • Treat as a webpack.config.js config
    • Uses your projects version of webpack (peer dependency)
    • externals are recognized as node_modules to bundle up separately
    • entry can be used, and will be concatenated with the handler.js
    • Creates a handler.js and handler.map.js for the current function
  • When String or Buffer or ReadStream:
    • Creates a handler.js for the current function
    • NOTE: Best to use a ReadStream for memory usage

Build files are triggered with these params:

/**
 *  @param fnConfig {Object}
 *  @param serverlessBuild {ServerlessBuildPlugin}
 */
export default async function myBuildFn(fnConfig, serverlessBuild) {
  // ... do stuff, any stuff

  return "console.log('it works');"
}

SHARED OPTIONS

# modules
#
# Excluded node_modules for all functions (bundle or file methods)
modules:
  # modules.exclude
  #
  # Exclude specific node_modules
  exclude:
    - aws-sdk

  # modules.excludeDeep
  #
  # Exclude deeply nested node_modules
  deepExclude: # Excluded from deep nested node_modules
    - aws-sdk

# async
#
# When false, function builds will run in parallel
# This will disrupt logging consistency.
synchronous: true

# zip
#
# Options to pass to the `archiver` zipping instances
zip:
  gzip: true
  gzipOptions: { level: 5 }

USAGE

Serverless build uses the sls deploy and sls deploy function CLI commands, overriding the standard build functionality.

  • Configure serverless build
  • Configure serverless with AWS credentials
  • sls deploy to deploy your resources and all functions at once
  • sls invoke -l -f <fnName> to invoke a deployed function
  • sls deploy function -f <fnName> to deploy a single function
  • NODE_ENV=production sls deploy function -f <fnName> when your build process cares about process.env.NODE_ENV

TEST IT OUT

If you'd like to test out a preconfigured project...

git clone git@github.com:nfour/serverless-build-plugin
cd serverless-build-plugin
yarn
yarn build
yarn link
cd test/1.0
yarn
yarn link serverless-build-plugin

sls deploy
sls invoke -f one -l
sls deploy function -f two
sls invoke -f two -l

If you want to audit the built zip, run:

sls package

Then check the .serverless/artifacts directory

NOTICE: No longer maintained.

Author: Nfour
Source Code: https://github.com/nfour/serverless-build-plugin 
License: MIT license

#serverless #node #build 

A Node.js Focused Build Plugin for Serverless
Annie  Emard

Annie Emard

1651579680

Cargo Udeps: Find Unused Dependencies in Cargo.toml

cargo-udeps

Find unused dependencies in Cargo.toml.

While compilation of this tool also works on Rust stable, it needs Rust nightly to actually run.

Installation

GitHub Releases

https://github.com/est31/cargo-udeps/releases

cargo install (crates.io)

cargo install cargo-udeps --locked

cargo install (master)

cargo install --git https://github.com/est31/cargo-udeps --locked

Dedicated packages

Some GNU/Linux distros have packaged cargo-udeps:

  • Nix/Nix OS: cargo-udeps
  • Arch Linux: pacman -S cargo-udeps

Usage

cargo +nightly udeps

It either prints out an "unused crates" line listing the crates, or it prints a line saying that no crates were unused.

Ignoring some of the dependencies

To ignore some of the dependencies, add package.metadata.cargo-udeps.ignore to Cargo.toml.

[package.metadata.cargo-udeps.ignore]
normal = ["if_chain"]
#development = []
#build = []

[dependencies]
if_chain = "1.0.0" # Used only in doc-tests, which `cargo-udeps` cannot check.

Known bugs

Some unused crates might not be detected. This includes crates used by std and its dependencies as well as crates that are already being used by dependencies of the studied crate.

Crates are currently only handled on a per name basis. Two crates with the same name but different versions would be a problem.

Trophy case

This is a list of cases where unused dependencies were found using cargo-udeps. You are welcome to expand it:

License

This tool is distributed under the terms of both the MIT license and the Apache License (Version 2.0), at your option.

See LICENSE for details.

License of your contributions

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Author: est31
Source Code: https://github.com/est31/cargo-udeps
License: View license

#rust 

Cargo Udeps: Find Unused Dependencies in Cargo.toml

Setting Up a Django Project with Docker

To get a Django project up and running, you almost always need an off-the-shelf solution in the form of a library or dependency.

This is usually not a problem and is often documented in the requirements.txt file.

The trouble starts when you try to share the whole project with someone else who wants to run and test it, because, unfortunately, that person has to redo the setup from scratch every time you make significant changes to the libraries and dependencies.

What is Docker?

This is where containerization and Docker come in. Docker is an extremely popular containerization platform that solves the library and dependency problem once and for all.

But its best feature? Regardless of the host or underlying infrastructure, a containerized application always runs the same way.

This guide walks through setting up a Django project with Docker.

Why should you use Docker?

Docker is a product that offers hardware virtualization at the operating system (OS) level. This capability lets developers package software together with its dependencies and ship it for distribution as containers.

In simple terms, you can now bundle everything your software needs into a single unit called a Docker image, and then ship or share that image with anyone. As long as the recipient has Docker, they can run or test your project. Gone are the days of "But it worked on my machine!"

Docker also offers a service called DockerHub, which lets developers and communities share and manage Docker images; it is essentially a "GitHub" for Docker images. It shares some similarities with the code repository platform, such as uploading and downloading images via the commands included in the Docker CLI.

Prerequisites for using Docker

For this tutorial, the Docker scripting is done in YAML files, and the files are executed through the Docker CLI.

This guide covers setting up Docker on an Ubuntu machine.

For other common OS platforms:

  1. Windows, follow this page.
  2. macOS, follow this page.

To download and set up Docker, run the following instructions:

sudo apt-get update  
sudo apt-get install docker-ce docker-ce-cli containerd.io  

The Django app

Because this guide assumes you are already proficient with Django, let's demonstrate the steps for running a basic Django Rest Framework app in Docker and displaying the default page.

Consider it the Hello World of Django and Docker. After this, you can dockerize any previous or future Django project, especially one that has its libraries listed in requirements.txt.

To get started, run the command below to set up the Django app.

django-admin startproject dj_docker_drf

Navigate into the project folder, start an app named sample, and add rest_framework and sample to the INSTALLED_APPS list in settings.py.

In the views.py file, write a function that returns the message "HELLO WORLD FROM DJANGO AND DOCKER".

from rest_framework.views import APIView  
from django.http import JsonResponse  

class HomeView(APIView):

    def get(self, request, format=None):
        return JsonResponse({"message":
            'HELLO WORLD FROM DJANGO AND DOCKER'})

Connect the main URL file and the app URL file so that HomeView is the default view accessed when a user opens the app in the browser.
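
A minimal way to wire that up might look like the sketch below; the file names follow the dj_docker_drf project and sample app created above, and the exact route names are illustrative:

# sample/urls.py -- routes for the app created above
from django.urls import path
from .views import HomeView

urlpatterns = [
    path('', HomeView.as_view(), name='home'),
]

# dj_docker_drf/urls.py -- make the app's routes the project default
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('sample.urls')),
]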

A critical step that many forget is setting ALLOWED_HOSTS to '*' to allow access to the Django application from any IP. The code snippet is shared below.

ALLOWED_HOSTS = ['*']

Finally, create a requirements.txt file in the root project folder, where it normally lives, and add the DRF library:

django-rest-framework==0.1.0  

The app is now ready to be dockerized.

Creating the Docker files and the Docker CLI

Notice that the Docker file has a specific name. This is to allow the Docker CLI to track it.

In your project root, create a file named Dockerfile and open it. The Docker directives are explained by the comments:

# base image  
FROM python:3.8   
# setup environment variable  
ENV DockerHOME=/home/app/webapp  

# set work directory  
RUN mkdir -p $DockerHOME  

# where your code lives  
WORKDIR $DockerHOME  

# set environment variables  
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1  
# install dependencies  
RUN pip install --upgrade pip  
# copy whole project to your docker home directory
COPY . $DockerHOME
# run this command to install all dependencies  
RUN pip install -r requirements.txt  
# port where the Django app runs  
EXPOSE 8000  
# start server  
CMD python manage.py runserver  

Running the app in Docker

To run the app, you only need to perform two steps:

  1. Build the image: this is done using the build command, which uses the Dockerfile you just created. To build the image, run the command docker build . -t docker-django-v0.0. This command must be executed in the directory where the Dockerfile lives. The -t flag tags the image so that it can be referenced when you run the container.
  2. Run the image: this is done using the docker run command. This turns the built image into a running container.

Now the app is ready to use.

To run the app, execute the command docker run docker-django-v0.0 and view your app in the browser at 0.0.0.0:8000.

Running multiple containers with Docker Compose

After becoming proficient with Docker, the logical next step is to know how to run multiple containers, and in what order.

This is the perfect use case for Docker Compose, a tool used to define and run multi-container applications of any kind. In short, if your application has multiple containers, you use the Docker Compose CLI to run them all in the required order.

Take, for example, a web application with the following components:

  1. A web server container, such as NGINX
  2. An application container hosting the Django app
  3. A database container hosting the production database, such as POSTGRES
  4. A message container hosting a message broker, such as RabbitMQ

To run a system like this, you declare directives in a Docker Compose YAML file, where you state how the images will be built, on which ports the images will be accessible and, most importantly, the order in which the containers should run (that is, which container depends on which other container for the project to run).

In this particular example, can you make an educated guess? Which container should be the first to spin up, and which container depends on the others?

To answer this question, we will explore Docker Compose. First, follow this guide to install the CLI tool on your host operating system.

Docker Compose (like Docker) requires a particular file with a special name. This is what the CLI tool uses to spin up the images and run them.

To create a Docker Compose file, create a YAML file and name it docker-compose.yml. Ideally, it should live in the root directory of your project. To better understand this process, let's explore Docker Compose using the scenario outlined above: a Django app with a Postgres database, a RabbitMQ message broker, and an NGINX load balancer.

Using Docker Compose with a Django app

version: '3.7'

services: # the different images that will be running as containers
  nginx: # service name
    build: ./nginx # location of the dockerfile that defines the nginx image. The dockerfile will be used to spin up an image during the build stage
    ports:
      - 1339:80 # map the external port 1339 to the internal port 80. Any traffic from 1339 externally will be passed to port 80 of the NGINX container. To access this app, one would use an address such as 0.0.0.0:1339
    volumes: # static storages provisioned since django does not handle static files in production
      - static_volume:/home/app/microservice/static # provide a space for static files
    depends_on:
      - web # will only start if web is up and running
    restart: "on-failure" # restart service when it fails
  web: # service name
    build: . #build the image for the web service from the dockerfile in parent directory.
    # command directive passes the parameters to the service and they will be executed by the service. In this example, these are django commands which will be executed in the container where django lives.
    command: sh -c "python manage.py makemigrations &&
                    python manage.py migrate &&
                    gunicorn microservice_sample_app.wsgi:application --bind 0.0.0.0:${APP_PORT}" # Django commands to run app using gunicorn
    volumes:
      - .:/microservice # map data and files from parent directory in host to microservice directory in docker container
      - static_volume:/home/app/microservice/static
    env_file: # file where env variables are stored. Used as best practice so as not to expose secret keys
      - .env # name of the env file
    image: microservice_app # name of the image

    expose: # expose the port to other services defined here so that they can access this service via the exposed port. In the case of Django, this is 8000 by default
      - ${APP_PORT} # retrieved from the .env file
    restart: "on-failure"
    depends_on: # cannot start if db service is not up and running
      - db
  db: # service name
    image: postgres:11-alpine # image name of the postgres database. during build, this will be pulled from dockerhub and a container spun up from it
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - postgres_data:/var/lib/postgresql/data/
    environment: # access credentials from the .env file
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
      - PGPORT=${DB_PORT}
      - POSTGRES_USER=${POSTGRES_USER}
    restart: "on-failure"
  rabbitmq:
        image: rabbitmq:3-management-alpine # image to be pulled from dockerhub during building
        container_name: rabbitmq # container name
        volumes: # assign static storage for rabbitmq to run
            - ./.docker/rabbitmq/etc/:/etc/rabbitmq/
            - ./.docker/rabbitmq/data/:/var/lib/rabbitmq/
            - ./.docker/rabbitmq/logs/:/var/log/rabbitmq/
        environment: # environment variables from the referenced .env file
            RABBITMQ_ERLANG_COOKIE: ${RABBITMQ_ERLANG_COOKIE}
            # auth credentials
            RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER} 
            RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
        ports: # map external ports to this specific container's internal ports
            - 5672:5672
            - 15672:15672
        depends_on: # can only start if web service is running
            - web


volumes:
  postgres_data:
  static_volume:
  rabbitmq:
  rabbitmq_logs:

One of the highlights of Docker Compose is the depends_on directive. From the script above, we can deduce that:

  • NGINX depends on web
  • web depends on db
  • RabbitMQ depends on web

With this configuration, db starts first, followed by web, then RabbitMQ, and finally NGINX. When you decide to tear down the environment and stop the running containers, the order is reversed: NGINX goes first and db last.

Building and running the Docker Compose script

Just like a Docker script, a Docker Compose script has a similar structure in that it has build and run commands. The build command builds all the images defined under services, in the order of the dependency hierarchy. The run command spins up the containers in the order of the dependency hierarchy.

Luckily, there is a command that combines build and run. It is called up. To use it, run the command below:

 docker-compose up

You can also add the build flag. This is useful when you have run this command before and want to build new images again.

docker-compose up --build

Once you are done with the containers, you may want to shut them all down and remove any static storage they were using, such as the Postgres static volume. To do this, run the following command:

docker-compose down -V

The -V flag stands for volumes. It ensures that the containers and the volumes attached to them are shut down.

Follow the official documentation to learn more about the various Docker Compose commands and their usage.

Supporting files in Docker

Some files referenced in the script above keep the file less bulky and make the code easier to manage. These include the env file and the NGINX Docker and configuration files. Below is a sample of what each entails.

The env file
The main purpose of this file is to store variables such as keys and credentials. This is a secure coding practice that ensures your personal keys are not exposed.

#Django
SECRET_KEY="my_secret_key"
DEBUG=1
ALLOWED_HOSTS=localhost 127.0.0.1 0.0.0.0 [::1] *


# database access credentials
ENGINE=django.db.backends.postgresql
DB_NAME=testdb
POSTGRES_USER=testuser
POSTGRES_PASSWORD=testpassword
DB_HOST=db
DB_PORT=5432
APP_PORT=8000
#superuser details
DJANGO_SU_NAME=test
DJANGO_SU_EMAIL=admin12@admin.com
DJANGO_SU_PASSWORD=mypass123
#rabbitmq
RABBITMQ_ERLANG_COOKIE: test_cookie
RABBITMQ_DEFAULT_USER: default_user
RABBITMQ_DEFAULT_PASS: sample_password

The NGINX Dockerfile

This is hosted in an nginx folder in the root directory. It mainly contains two directives: the name of the image to pull from Docker Hub and the location of the configuration file. Like any other Docker file, it is named Dockerfile.

FROM nginx:1.19.0-alpine

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d

The NGINX configuration file

This is where you write the NGINX configuration logic. It is placed in the same location as the NGINX Dockerfile, inside the nginx folder.

This file dictates how the NGINX container will behave. Below is a sample script, which lives in a file commonly named nginx.conf.

upstream microservice { # name of our web image
    server web:8000; # default django port
}

server {

    listen 80; # default external port. Anything coming from port 80 will go through NGINX

    location / {
        proxy_pass http://microservice; # forward requests to the upstream defined above
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
    location /static/ {
        alias /home/app/microservice/static/; # where our static files are hosted
    }

}

Conclusion

The Docker tips and tricks in this guide are vital for DevOps and full-stack developer positions in any organization, and Docker is a handy tool for backend developers as well.

Because Docker packages the dependencies and libraries, new developers don't necessarily have to install several dependencies and lose valuable time getting libraries and dependencies to work. Thanks for reading.

Source: https://blog.logrocket.com/dockerizing-a-django-app/

#django  #docker 

Setting Up a Django Project with Docker

Dockerizing a Django Application

To get a Django project up and running, most of the time you need an off-the-shelf solution in the form of a library or dependency.

This is usually not a problem and is often documented in the requirements.txt file.

The problem starts when you try to share the whole project with someone else who wants to run and test it because, unfortunately, that user will have to redo the setup from scratch every time you make significant changes to the libraries and dependencies.

What is Docker?

This is where containerization and Docker come into play. Docker is an incredibly popular containerization platform that solves the library and dependency problem once and for all.

But its best feature? Regardless of the host or underlying infrastructure, your containerized application will always run the same way.

This guide will walk you through setting up a Django project with Docker.

Why should you use Docker?

Docker is a product that offers hardware virtualization at the operating system (OS) level. This capability allows developers to package and ship software and its dependencies in order to distribute them as containers.

In simple terms, you can now package all the pieces your software needs into a single unit called a Docker image, and then ship or share that image with anyone. And, as long as the recipient has Docker, they will be able to run or test your project. Gone are the days of "But it worked on my machine!"

Docker also offers a service called DockerHub that allows Docker images to be shared and managed between developers and communities; essentially, a "GitHub" for Docker images. It shares some similarities with the code repository platform, such as uploading and downloading images via the CLI commands contained in the Docker CLI.

Prerequisites for using Docker

  • Proficiency in Django development
  • Intermediate level with the CLI and bash

For this tutorial, the Docker scripting is done in YAML files and the files are executed through the Docker CLI.

This guide will explore setting up Docker on an Ubuntu machine.

For other common OS platforms:

  1. Windows, follow this page.
  2. macOS, follow this page.

To download and set up Docker, run the following instructions:

sudo apt-get update  
sudo apt-get install docker-ce docker-ce-cli containerd.io  

The Django application

Because this guide assumes you are already proficient with Django, let's demonstrate the steps for running a basic Django Rest Framework application in Docker and displaying the default page.

Consider it the Hello World of Django and Docker. After this, you can dockerize any previous or future Django project you may have, especially one that has its libraries listed in requirements.txt.

To get started, run the following commands to set up the Django application.

django-admin startproject dj_docker_drf

Navigate to your project folder, start an app named sample, and add rest_framework and sample to the INSTALLED_APPS list in settings.py.

In the views.py file, write a function that returns the message "HELLO WORLD FROM DJANGO AND DOCKER".

from rest_framework.views import APIView  
from django.http import JsonResponse  

class HomeView(APIView):

    def get(self, request, format=None):
        return JsonResponse({"message":
            'HELLO WORLD FROM DJANGO AND DOCKER'})

Connect the main URL file and the app URL file so that HomeView is the default view accessed when a user opens the app in the browser.

A critical step that many forget is setting ALLOWED_HOSTS to '*' to allow access to the Django application from any IP. The code snippet is shared below.

ALLOWED_HOSTS = ['*']

Finally, create a requirements.txt file in your root project folder, where it normally lives, and add the DRF library:

django-rest-framework==0.1.0  

The application is now ready to be dockerized.

Creating the Docker files and the Docker CLI

Notice that the Docker file has a specific name. This is to allow the Docker CLI to track it.

In your project root, create a file named Dockerfile and open it. The Docker directives are explained by the comments:

# base image  
FROM python:3.8   
# setup environment variable  
ENV DockerHOME=/home/app/webapp  

# set work directory  
RUN mkdir -p $DockerHOME  

# where your code lives  
WORKDIR $DockerHOME  

# set environment variables  
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1  
# install dependencies  
RUN pip install --upgrade pip  
# copy whole project to your docker home directory
COPY . $DockerHOME
# run this command to install all dependencies  
RUN pip install -r requirements.txt  
# port where the Django app runs  
EXPOSE 8000  
# start server  
CMD python manage.py runserver  

Running the application in Docker

To run the application, you only need to perform two steps.

  1. Build the image: This is done using the build command, which uses the Dockerfile you just created. To build the image, run the command docker build . -t docker-django-v0.0. This command must be executed in the directory where the Dockerfile is located. The -t flag tags the image so that it can be referenced when you want to run the container.
  2. Run the image: This is done using the docker run command. This will turn the built image into a running container.

Now the application is ready to use!

To run the application, execute the command docker run docker-django-v0.0 and view your app in the browser at 0.0.0.0:8000.

Running multiple containers with Docker Compose

With the proficiency gained in Docker, the logical next step is to know how to run multiple containers, and in what order.

This is the perfect use case for Docker Compose, a tool used to define and run multi-container applications of any kind. In short, if your application has multiple containers, you will use the Docker Compose CLI to run them all in the required order.

Take, for example, a web application with the following components:

  1. A web server container such as NGINX
  2. An application container hosting the Django app
  3. A database container hosting the production database, such as POSTGRES
  4. A message container hosting the message broker, such as RabbitMQ

To run a system like this, you declare directives in a Docker Compose YAML file, where you state how the images will be built, on which port each image will be accessible and, most importantly, the order in which the containers should run (that is, which container depends on which other container for the project to run).

In this particular example, can you make an educated guess? Which should be the first container to spin up, and which container depends on the other?

To answer this question, we will explore Docker Compose. First, follow this guide to install the CLI tool on your host operating system.

With Docker Compose (and similarly to Docker), a particular file with a special name is required. This is what the CLI tool uses to spin up the images and run them.

To create a Docker Compose file, create a YAML file and name it docker-compose.yml. Ideally, this should live in the root directory of your project. To better understand this process, let's explore Docker Compose using the scenario shown above: a Django app with a Postgres database, a RabbitMQ message broker, and an NGINX load balancer.

Using Docker Compose with a Django app

version: '3.7'

services: # the different images that will be running as containers
  nginx: # service name
    build: ./nginx # location of the dockerfile that defines the nginx image. The dockerfile will be used to spin up an image during the build stage
    ports:
      - 1339:80 # map the external port 1339 to the internal port 80. Any traffic from 1339 externally will be passed to port 80 of the NGINX container. To access this app, one would use an address such as 0.0.0.0:1339
    volumes: # static storages provisioned since django does not handle static files in production
      - static_volume:/home/app/microservice/static # provide a space for static files
    depends_on:
      - web # will only start if web is up and running
    restart: "on-failure" # restart service when it fails
  web: # service name
    build: . #build the image for the web service from the dockerfile in parent directory.
    # command directive passes the parameters to the service and they will be executed by the service. In this example, these are django commands which will be executed in the container where django lives.
    command: sh -c "python manage.py makemigrations &&
                    python manage.py migrate &&
                    gunicorn microservice_sample_app.wsgi:application --bind 0.0.0.0:${APP_PORT}" # Django commands to run app using gunicorn
    volumes:
      - .:/microservice # map data and files from parent directory in host to microservice directory in docker container
      - static_volume:/home/app/microservice/static
    env_file: # file where env variables are stored. Used as best practice so as not to expose secret keys
      - .env # name of the env file
    image: microservice_app # name of the image

    expose: # expose the port to other services defined here so that they can access this service via the exposed port. In the case of Django, this is 8000 by default
      - ${APP_PORT} # retrieved from the .env file
    restart: "on-failure"
    depends_on: # cannot start if db service is not up and running
      - db
  db: # service name
    image: postgres:11-alpine # image name of the postgres database. during build, this will be pulled from dockerhub and a container spun up from it
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - postgres_data:/var/lib/postgresql/data/
    environment: # access credentials from the .env file
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
      - PGPORT=${DB_PORT}
      - POSTGRES_USER=${POSTGRES_USER}
    restart: "on-failure"
  rabbitmq:
        image: rabbitmq:3-management-alpine # image to be pulled from dockerhub during building
        container_name: rabbitmq # container name
        volumes: # assign static storage for rabbitmq to run
            - ./.docker/rabbitmq/etc/:/etc/rabbitmq/
            - ./.docker/rabbitmq/data/:/var/lib/rabbitmq/
            - ./.docker/rabbitmq/logs/:/var/log/rabbitmq/
        environment: # environment variables from the referenced .env file
            RABBITMQ_ERLANG_COOKIE: ${RABBITMQ_ERLANG_COOKIE}
            # auth credentials
            RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER} 
            RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
        ports: # map external ports to this specific container's internal ports
            - 5672:5672
            - 15672:15672
        depends_on: # can only start if web service is running
            - web


volumes:
  postgres_data:
  static_volume:
  rabbitmq:
  rabbitmq_logs:

One of the highlights of Docker Compose is the depends_on directive. From the script above, we can deduce that:

  • NGINX depends on web
  • web depends on db
  • RabbitMQ depends on web

With this configuration, db will be the first to start, followed by web, followed by RabbitMQ and, finally, NGINX. When you decide to tear down the environment and stop the running containers, the order will be reversed: NGINX will be the first to go and db the last.

Building and running Docker Compose scripts

Just like a Docker script, the Docker Compose script has a similar structure in that it has build and run commands. The build command will build all the images defined under services in the order of the dependency hierarchy. The run command will spin up the containers in the order of the dependency hierarchy.

Luckily, there is a command that combines build and run. It is called up. To use it, run the following command:

 docker-compose up

You can also add the build flag. This is useful when you have run this command before and want to build new images from scratch.

docker-compose up --build

Once you are done with the containers, you may want to shut them all down and remove any static storage they were using, for example, the Postgres static volume. To do this, run the following command:

docker-compose down -V

The -V flag stands for volumes. This ensures that the containers and the attached volumes are shut down.

Follow the official documentation to learn more about the various Docker Compose commands and their usage.

Supporting files in Docker

There are some files referenced in the script above that make the file less bulky, which makes the code easier to manage. These include the env file and the NGINX Docker and configuration files. Below are samples of what each entails.

The env file
The main purpose of this file is to store variables such as keys and credentials. This is a secure coding practice that ensures your personal keys are not exposed.

#Django
SECRET_KEY="my_secret_key"
DEBUG=1
ALLOWED_HOSTS=localhost 127.0.0.1 0.0.0.0 [::1] *


# database access credentials
ENGINE=django.db.backends.postgresql
DB_NAME=testdb
POSTGRES_USER=testuser
POSTGRES_PASSWORD=testpassword
DB_HOST=db
DB_PORT=5432
APP_PORT=8000
#superuser details
DJANGO_SU_NAME=test
DJANGO_SU_EMAIL=admin12@admin.com
DJANGO_SU_PASSWORD=mypass123
#rabbitmq
RABBITMQ_ERLANG_COOKIE: test_cookie
RABBITMQ_DEFAULT_USER: default_user
RABBITMQ_DEFAULT_PASS: sample_password

The NGINX Dockerfile

This is hosted in an nginx folder in the root directory. It mainly contains two directives: the name of the image to be pulled from Docker Hub and the location of the configuration files. It is named like any other Docker file, Dockerfile.

FROM nginx:1.19.0-alpine

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d

The NGINX configuration file

This is where the NGINX configuration logic is written. It is placed in the same location as the NGINX Dockerfile, inside the nginx folder.

This file dictates how the NGINX container will behave. Below is a sample script, found in a file commonly named nginx.conf.

upstream microservice { # name of our web image
    server web:8000; # default django port
}

server {

    listen 80; # default external port. Anything coming from port 80 will go through NGINX

    location / {
        proxy_pass http://microservice; # forward requests to the upstream defined above
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
    location /static/ {
        alias /home/app/microservice/static/; # where our static files are hosted
    }

}

Conclusion

The Docker tips and tricks in this guide are vital for DevOps and full-stack developer positions in any organization, and Docker is also a handy tool for backend developers.

Because Docker packages dependencies and libraries, new developers don't necessarily need to install several dependencies and lose valuable time trying to get libraries and dependencies to work. Thanks for reading.

Source: https://blog.logrocket.com/dockerizing-a-django-app/

#django  #docker 

Dockerizing a Django Application

Face Recognition with OpenCV and Python

Introduction

What is face recognition? Or what is recognition? When you look at an apple fruit, your mind immediately tells you that this is an apple fruit. This process, your mind telling you that this is an apple fruit, is recognition in simple words. So what is face recognition then? I am sure you have guessed it right. When you look at your friend walking down the street or at a picture of him, you recognize that he is your friend Paulo. Interestingly, when you look at your friend or a picture of him you look at his face first before looking at anything else. Ever wondered why you do that? This is so that you can recognize him by looking at his face. Well, this is you doing face recognition.

But the real question is how does face recognition work? It is quite simple and intuitive. Take a real life example: when you meet someone for the first time in your life you don't recognize him, right? While he talks or shakes hands with you, you look at his face, eyes, nose, mouth, color and overall look. This is your mind learning or training for the face recognition of that person by gathering face data. Then he tells you that his name is Paulo. At this point your mind knows that the face data it just learned belongs to Paulo. Now your mind is trained and ready to do face recognition on Paulo's face. Next time when you see Paulo or his face in a picture you will immediately recognize him. This is how face recognition works. The more you meet Paulo, the more data your mind will collect about Paulo, especially about his face, and the better you will become at recognizing him.

Now the next question is how to code face recognition with OpenCV; after all, this is the only reason why you are reading this article, right? OK then. You might say that our mind can do these things easily, but actually coding them into a computer is difficult? Don't worry, it is not. Thanks to OpenCV, coding face recognition is easier than it sounds. The coding steps for face recognition are the same as the ones we discussed in the real life example above, and a short sketch in code follows the list below.

  • Training Data Gathering: Gather face data (face images in this case) of the persons you want to recognize
  • Training of Recognizer: Feed that face data (and respective names of each face) to the face recognizer so that it can learn.
  • Recognition: Feed new faces of the persons and see if the face recognizer you just trained recognizes them.
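
Here is a minimal sketch of those three steps, assuming OpenCV's contrib face module is installed (opencv-contrib-python) and that faces/1/ and faces/2/ contain pre-cropped grayscale face images of two people; the paths and folder layout are only illustrative:

import glob
import cv2
import numpy as np

faces, labels = [], []
for label in (1, 2):                                   # 1. gather training data
    for path in glob.glob(f"faces/{label}/*.jpg"):
        faces.append(cv2.imread(path, cv2.IMREAD_GRAYSCALE))
        labels.append(label)

# Newer OpenCV builds expose LBPHFaceRecognizer_create(); older 3.x builds
# use cv2.face.createLBPHFaceRecognizer() as mentioned later in this article.
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, np.array(labels, dtype=np.int32))   # 2. train the recognizer

test_img = cv2.imread("faces/test.jpg", cv2.IMREAD_GRAYSCALE)
label, confidence = recognizer.predict(test_img)             # 3. recognize a new face
print(f"predicted person {label} (a lower confidence value means a closer match)")

Swapping in the EigenFaces or FisherFaces factory function is the single-line change described below.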

OpenCV comes equipped with a built-in face recognizer; all you have to do is feed it the face data. It's that simple, and this is how it will look once we are done coding it.

(Image: visualization of the final face recognition output)

OpenCV Face Recognizers

OpenCV has three built-in face recognizers and, thanks to OpenCV's clean coding, you can use any of them by just changing a single line of code. Below are the names of those face recognizers and their OpenCV calls.

  1. EigenFaces Face Recognizer - cv2.face.createEigenFaceRecognizer()
  2. FisherFaces Face Recognizer - cv2.face.createFisherFaceRecognizer()
  3. Local Binary Patterns Histograms (LBPH) Face Recognizer - cv2.face.createLBPHFaceRecognizer()

We have got three face recognizers, but do you know which one to use and when? Or which one is better? I guess not. So why not go through a brief summary of each, what do you say? I am assuming you said yes :) So let's dive into the theory of each.

EigenFaces Face Recognizer

This algorithm considers the fact that not all parts of a face are equally important and equally useful. When you look at someone you recognize him/her by distinct features like the eyes, nose, cheeks, and forehead, and by how they vary with respect to each other. So you are actually focusing on the areas of maximum change (mathematically speaking, this change is variance) of the face. For example, from the eyes to the nose there is a significant change, and the same is the case from the nose to the mouth. When you look at multiple faces you compare them by looking at these parts of the faces because these parts are the most useful and important components of a face. Important because they catch the maximum change among faces, change that helps you differentiate one face from the other. This is exactly how the EigenFaces face recognizer works.

The EigenFaces face recognizer looks at all the training images of all the persons as a whole and tries to extract the components which are important and useful (the components that catch the maximum variance/change) and discards the rest. This way it not only extracts the important components from the training data but also saves memory by discarding the less important ones. These extracted components are called principal components. Below is an image showing the principal components extracted from a list of faces.

(Image: principal components extracted from a set of face images - eigenfaces)

You can see that principal components actually represent faces and these faces are called eigen faces and hence the name of the algorithm.

So this is how the EigenFaces face recognizer trains itself (by extracting principal components). Remember, it also keeps a record of which principal component belongs to which person. One thing to note in the above image is that the Eigenfaces algorithm also considers illumination an important component.

Later, during recognition, when you feed a new image to the algorithm, it repeats the same process on that image: it extracts the principal components from the new image, compares them with the list of components it stored during training, finds the best match, and returns the person label associated with that best-match component.
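
To make "principal components" a little more concrete, here is a small NumPy sketch (not OpenCV's actual implementation) that computes eigenfaces from a stack of equally sized grayscale face images; the function name and the use of SVD are my own choices for illustration:

import numpy as np

def eigenfaces(face_images, k=10):
    """Top-k principal components (eigenfaces) of equally sized grayscale faces."""
    h, w = face_images[0].shape
    X = np.stack([img.reshape(-1) for img in face_images]).astype(np.float64)
    mean_face = X.mean(axis=0)
    X_centered = X - mean_face                       # remove the average face
    # rows of Vt are the directions of maximum variance in face space
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:k]                              # the k eigenfaces, flattened
    return mean_face.reshape(h, w), components.reshape(k, h, w)

# Projecting a new face onto the eigenfaces gives the compact feature vector
# that is compared against the stored training projections at recognition time:
# weights = components.reshape(k, -1) @ (new_face.reshape(-1) - mean_face.reshape(-1))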

Easy peasy, right? Next one is easier than this one.

FisherFaces Face Recognizer

This algorithm is an improved version of the EigenFaces face recognizer. The Eigenfaces face recognizer looks at all the training faces of all the persons at once and finds principal components from all of them combined. By capturing principal components from all of them combined you are not focusing on the features that discriminate one person from the other, but on the features that represent all the persons in the training data as a whole.

This approach has drawbacks. For example, images with sharp changes (like lighting changes, which are not a useful feature at all) may dominate the rest of the images, and you may end up with features that come from an external source like light and are not useful for discrimination at all. In the end, your principal components will represent light changes and not the actual face features.

The Fisherfaces algorithm, instead of extracting features that represent all the faces of all the persons, extracts features that discriminate one person from the others. This way, features of one person do not dominate over the others, and you have the features that discriminate one person from the others.

Below is an image of features extracted using Fisherfaces algorithm.

[Image: fisher faces - eigenfaces_opencv source]

You can see that features extracted actually represent faces and these faces are called fisher faces and hence the name of the algorithm.

One thing to note here is that, even with the Fisherfaces algorithm, if multiple persons have images with sharp changes due to external sources like light, those changes will dominate over other features and affect recognition accuracy.

Getting bored with this theory? Don't worry, only one face recognizer is left and then we will dive deep into the coding part.

Local Binary Patterns Histograms (LBPH) Face Recognizer

I wrote a detailed explanation of Local Binary Patterns Histograms in my previous article on face detection using local binary patterns histograms. So here I will just give a brief overview of how it works.

We know that Eigenfaces and Fisherfaces are both affected by light and in real life we can't guarantee perfect light conditions. LBPH face recognizer is an improvement to overcome this drawback.

The idea is not to look at the image as a whole but instead to find its local features. The LBPH algorithm tries to find the local structure of an image, and it does that by comparing each pixel with its neighboring pixels.

Take a 3x3 window and move it over the whole image; at each move (each local part of the image), compare the pixel at the center with its neighbor pixels. The neighbors with an intensity value less than or equal to the center pixel are denoted by 1 and the others by 0. Then you read these 0/1 values under the 3x3 window in a clockwise order and you will have a binary pattern like 11100011, and this pattern is local to some area of the image. You do this on the whole image and you will have a list of local binary patterns.
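
As a quick illustration of the thresholding just described (my own sketch, not part of the article's original code), here is how one 3x3 window turns into a binary pattern and a decimal value. Note that the neighbor ordering and the <=/> convention vary between implementations:

import numpy as np

#a single 3x3 window of pixel intensities (made-up values)
window = np.array([[90, 80, 40],
                   [70, 60, 30],
                   [50, 20, 10]], dtype=np.uint8)

center = window[1, 1]
#the 8 neighbors read in clockwise order starting at the top-left corner
neighbors = [window[0, 0], window[0, 1], window[0, 2],
             window[1, 2], window[2, 2], window[2, 1],
             window[2, 0], window[1, 0]]

#neighbors with value <= center become 1, the others 0 (the convention used here)
bits = [1 if n <= center else 0 for n in neighbors]
pattern = "".join(str(b) for b in bits)   #the local binary pattern
decimal = int(pattern, 2)                 #the value that goes into the histogram
print(pattern, decimal)                   #prints: 00111110 62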

[Image: LBP labeling]

Now you get why this algorithm has Local Binary Patterns in its name? Because you get a list of local binary patterns. Now you may be wondering, what about the histogram part of LBPH? Well, after you get a list of local binary patterns, you convert each binary pattern into a decimal number (as shown in the above image) and then you make a histogram of all of those values. A sample histogram looks like this.

[Image: sample LBP histogram]

I guess this answers the question about the histogram part. So in the end you will have one histogram for each face image in the training data set. That means if there were 100 images in the training data set, then LBPH will extract 100 histograms after training and store them for later recognition. Remember, the algorithm also keeps track of which histogram belongs to which person.
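
To tie the pattern and histogram parts together, here is a small numpy sketch that computes the 8-bit LBP code for every interior pixel of a grayscale image and builds the 256-bin histogram described above. Again, this is my own simplified illustration, not OpenCV's LBPH implementation, which additionally splits the face into a grid of regions and concatenates per-region histograms:

import numpy as np

def lbp_histogram(gray):
    #gray is a 2D grayscale image (a cropped face, for example)
    g = gray.astype(np.int32)
    h, w = g.shape
    center = g[1:-1, 1:-1]
    #the 8 neighbors in clockwise order starting at the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for dy, dx in offsets:
        neighbor = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        #neighbors with value <= center contribute a 1 bit
        codes = (codes << 1) | (neighbor <= center).astype(np.int32)
    #256-bin histogram of the per-pixel LBP codes
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist

Comparing two faces then boils down to comparing their histograms, for example with a Euclidean or chi-square distance.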

Later, during recognition, when you feed a new image to the recognizer it will generate a histogram for that new image, compare that histogram with the histograms it already has, find the best match histogram and return the person label associated with that best match histogram.

Below is a list of faces and their respective local binary patterns images. You can see that the LBP images are not affected by changes in light conditions.

[Image: faces and their LBP images - source]

The theory part is over and now comes the coding part! Ready to dive into coding? Let's get into it then.

Coding Face Recognition with OpenCV

The Face Recognition process in this tutorial is divided into three steps.

  1. Prepare training data: In this step we will read training images for each person/subject along with their labels, detect faces from each image and assign each detected face an integer label of the person it belongs to.
  2. Train Face Recognizer: In this step we will train OpenCV's LBPH face recognizer by feeding it the data we prepared in step 1.
  3. Testing: In this step we will pass some test images to face recognizer and see if it predicts them correctly.

[There should be a visualization diagram for above steps here]

To detect faces, I will use the code from my previous article on face detection. So if you have not read it, I encourage you to do so to understand how face detection works and its Python coding.

Import Required Modules

Before starting the actual coding we need to import the required modules for coding. So let's import them first.

  • cv2: the OpenCV module for Python, which we will use for face detection and face recognition.
  • os: We will use this Python module to read our training directories and file names.
  • numpy: We will use this module to convert Python lists to numpy arrays as OpenCV face recognizers accept numpy arrays.
#import OpenCV module
import cv2
#import os module for reading training data directories and paths
import os
#import numpy to convert python lists to numpy arrays as 
#it is needed by OpenCV face recognizers
import numpy as np

#matplotlib for displaying our images
import matplotlib.pyplot as plt
%matplotlib inline 

Training Data

The more images used in training the better. Normally a lot of images are used for training a face recognizer so that it can learn different looks of the same person, for example with glasses, without glasses, laughing, sad, happy, crying, with beard, without beard etc. To keep our tutorial simple we are going to use only 12 images for each person.

So our training data consists of a total of 2 persons with 12 images each. All training data is inside the training-data folder, which contains one folder for each person, named with the format sLabel (e.g. s1, s2), where Label is the integer label assigned to that person. For example, the folder named s1 contains images for person 1. The directory structure tree for the training data is as follows:

training-data
|-------------- s1
|               |-- 1.jpg
|               |-- ...
|               |-- 12.jpg
|-------------- s2
|               |-- 1.jpg
|               |-- ...
|               |-- 12.jpg

The test-data folder contains images that we will use to test our face recognizer after it has been successfully trained.

As the OpenCV face recognizer accepts labels as integers, we need to define a mapping between integer labels and persons' actual names, so below I am defining a mapping of persons' integer labels and their respective names.

Note: As we have not assigned label 0 to any person, the mapping for label 0 is empty.

#there is no label 0 in our training data so subject name for index/label 0 is empty
subjects = ["", "Tom Cruise", "Shahrukh Khan"]

Prepare training data

You may be wondering why data preparation, right? Well, the OpenCV face recognizer accepts data in a specific format. It accepts two vectors: one vector is of the faces of all the persons and the second vector is of integer labels for each face, so that when processing a face the face recognizer knows which person that particular face belongs to.

For example, if we had 2 persons and 2 images for each person.

PERSON-1    PERSON-2   

img1        img1         
img2        img2

Then the prepare data step will produce the following face and label vectors.

FACES                        LABELS

person1_img1_face              1
person1_img2_face              1
person2_img1_face              2
person2_img2_face              2

The prepare data step can be further divided into the following sub-steps.

  1. Read all the folder names of subjects/persons provided in training data folder. So for example, in this tutorial we have folder names: s1, s2.
  2. For each subject, extract label number. Do you remember that our folders have a special naming convention? Folder names follow the format sLabel where Label is an integer representing the label we have assigned to that subject. So for example, folder name s1 means that the subject has label 1, s2 means subject label is 2 and so on. The label extracted in this step is assigned to each face detected in the next step.
  3. Read all the images of the subject, detect face from each image.
  4. Add each face to faces vector with corresponding subject label (extracted in above step) added to labels vector.

[There should be a visualization for above steps here]

Did you read my last article on face detection? No? Then you had better do so right now, because to detect faces I am going to use the code from that article. It will also help you understand how face detection works and how it is coded. Below is the same code.

#function to detect face using OpenCV
def detect_face(img):
    #convert the test image to gray image as opencv face detector expects gray images
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
    #load OpenCV face detector, I am using LBP which is fast
    #there is also a more accurate but slow Haar classifier
    face_cascade = cv2.CascadeClassifier('opencv-files/lbpcascade_frontalface.xml')

    #let's detect multiscale (some images may be closer to camera than others) images
    #result is a list of faces
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    
    #if no faces are detected then return original img
    if (len(faces) == 0):
        return None, None
    
    #under the assumption that there will be only one face,
    #extract the face area
    (x, y, w, h) = faces[0]
    
    #return only the face part of the image
    return gray[y:y+h, x:x+w], faces[0]

I am using OpenCV's LBP face detector. On line 4, I convert the image to grayscale because most operations in OpenCV are performed in grayscale, then on line 8 I load the LBP face detector using the cv2.CascadeClassifier class. After that, on line 12 I use that class' detectMultiScale method to detect all the faces in the image. On line 20, from the detected faces I only pick the first face, under the assumption that there will be only one prominent face per image. As the faces returned by the detectMultiScale method are actually rectangles (x, y, width, height) and not actual face images, we have to extract the face image area from the main image. So on line 23 I extract the face area from the gray image and return both the face image area and the face rectangle.
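
If you want to sanity-check the detector before wiring it into the data preparation step, you can run it on a single image. A quick sketch (the path below is just an example; this reuses the cv2 and matplotlib imports from earlier):

#quick test of detect_face on one image (example path, adjust to your data)
img = cv2.imread("training-data/s1/1.jpg")
face, rect = detect_face(img)

if face is None:
    print("No face detected")
else:
    print("Face found at (x, y, w, h) =", rect)
    plt.imshow(face, cmap="gray")   #show the cropped grayscale face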

Now you have got a face detector and you know the 4 steps to prepare the data, so are you ready to code the prepare data step? Yes? So let's do it.

#this function will read all persons' training images, detect face from each image
#and will return two lists of exactly same size, one list 
# of faces and another list of labels for each face
def prepare_training_data(data_folder_path):
    
    #------STEP-1--------
    #get the directories (one directory for each subject) in data folder
    dirs = os.listdir(data_folder_path)
    
    #list to hold all subject faces
    faces = []
    #list to hold labels for all subjects
    labels = []
    
    #let's go through each directory and read images within it
    for dir_name in dirs:
        
        #our subject directories start with letter 's' so
        #ignore any non-relevant directories if any
        if not dir_name.startswith("s"):
            continue
            
        #------STEP-2--------
        #extract label number of subject from dir_name
        #format of dir name = slabel
        #, so removing letter 's' from dir_name will give us label
        label = int(dir_name.replace("s", ""))
        
        #build path of directory containing images for the current subject
        #sample subject_dir_path = "training-data/s1"
        subject_dir_path = data_folder_path + "/" + dir_name
        
        #get the images names that are inside the given subject directory
        subject_images_names = os.listdir(subject_dir_path)
        
        #------STEP-3--------
        #go through each image name, read image, 
        #detect face and add face to list of faces
        for image_name in subject_images_names:
            
            #ignore system files like .DS_Store
            if image_name.startswith("."):
                continue
            
            #build image path
            #sample image path = training-data/s1/1.pgm
            image_path = subject_dir_path + "/" + image_name

            #read image
            image = cv2.imread(image_path)
            
            #display an image window to show the image 
            cv2.imshow("Training on image...", image)
            cv2.waitKey(100)
            
            #detect face
            face, rect = detect_face(image)
            
            #------STEP-4--------
            #for the purpose of this tutorial
            #we will ignore faces that are not detected
            if face is not None:
                #add face to list of faces
                faces.append(face)
                #add label for this face
                labels.append(label)
            
    cv2.destroyAllWindows()
    cv2.waitKey(1)
    cv2.destroyAllWindows()
    
    return faces, labels

I have defined a function that takes as a parameter the path where the training subjects' folders are stored. This function follows the same 4 prepare data sub-steps mentioned above.

(step-1) On line 8 I am using the os.listdir method to read the names of all folders stored at the path passed to the function. On lines 10-13 I am defining the labels and faces vectors.

(step-2) After that I traverse through all the subjects' folder names and from each subject's folder name, on line 27, I extract the label information. As folder names follow the sLabel naming convention, removing the letter s from the folder name gives us the label assigned to that subject.

(step-3) On line 34, I read all the image names of the current subject being traversed and on lines 39-66 I traverse those images one by one. On lines 53-54 I am using OpenCV's imshow(window_title, image) along with the waitKey(interval) method to display the current image being traversed. The waitKey(interval) method pauses the code flow for the given interval (milliseconds); I am using a 100ms interval so that we can view the image window for 100ms. On line 57, I detect the face from the current image being traversed.

(step-4) On lines 62-66, I add the detected face and label to their respective vectors.

But a function can't do anything unless we call it on some data that it has to prepare, right? Don't worry, I have got data of two beautiful and famous celebrities. I am sure you will recognize them!

[Image: preview of the training-data images]

Let's call this function on images of these beautiful celebrities to prepare data for training of our Face Recognizer. Below is a simple code to do that.

#let's first prepare our training data
#data will be in two lists of same size
#one list will contain all the faces
#and other list will contain respective labels for each face
print("Preparing data...")
faces, labels = prepare_training_data("training-data")
print("Data prepared")

#print total faces and labels
print("Total faces: ", len(faces))
print("Total labels: ", len(labels))
Preparing data...
Data prepared
Total faces:  23
Total labels:  23

This was probably the boring part, right? Don't worry, the fun stuff is coming up next. It's time to train our own face recognizer so that, once trained, it can recognize new faces of the persons it was trained on. Ready? OK, then let's train our face recognizer.

Train Face Recognizer

As we know, OpenCV comes equipped with three face recognizers.

  1. EigenFace Recognizer: This can be created with cv2.face.createEigenFaceRecognizer()
  2. FisherFace Recognizer: This can be created with cv2.face.createFisherFaceRecognizer()
  3. Local Binary Patterns Histogram (LBPH): This can be created with cv2.face.createLBPHFaceRecognizer()

I am going to use the LBPH face recognizer, but you can use any face recognizer of your choice. No matter which of OpenCV's face recognizers you use, the code will remain the same. You just have to change one line, the face recognizer initialization line given below.

#create our LBPH face recognizer 
face_recognizer = cv2.face.createLBPHFaceRecognizer()

#or use EigenFaceRecognizer by replacing above line with 
#face_recognizer = cv2.face.createEigenFaceRecognizer()

#or use FisherFaceRecognizer by replacing above line with 
#face_recognizer = cv2.face.createFisherFaceRecognizer()

Now that we have initialized our face recognizer and we also have prepared our training data, it's time to train the face recognizer. We will do that by calling the train(faces-vector, labels-vector) method of face recognizer.

#train our face recognizer of our training faces
face_recognizer.train(faces, np.array(labels))

Did you notice that instead of passing labels vector directly to face recognizer I am first converting it to numpy array? This is because OpenCV expects labels vector to be a numpy array.

Still not satisfied? Want to see some action? Next step is the real action, I promise!

Prediction

Now comes my favorite part, the prediction part. This is where we get to see if our algorithm is actually recognizing our trained subjects' faces or not. We will take two test images of our celebrities, detect faces from each of them and then pass those faces to our trained face recognizer to see if it recognizes them.

Below are some utility functions that we will use for drawing a bounding box (rectangle) around a face and putting the celebrity's name near the face bounding box.

#function to draw rectangle on image 
#according to given (x, y) coordinates and 
#given width and height
def draw_rectangle(img, rect):
    (x, y, w, h) = rect
    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
    
#function to draw text on give image starting from
#passed (x, y) coordinates. 
def draw_text(img, text, x, y):
    cv2.putText(img, text, (x, y), cv2.FONT_HERSHEY_PLAIN, 1.5, (0, 255, 0), 2)

The first function, draw_rectangle, draws a rectangle on an image based on the passed rectangle coordinates. It uses OpenCV's built-in function cv2.rectangle(img, topLeftPoint, bottomRightPoint, color, lineWidth) to draw the rectangle; note that OpenCV expects the color in BGR order. We will use it to draw a rectangle around the face detected in a test image.

The second function, draw_text, uses OpenCV's built-in function cv2.putText(img, text, startPoint, font, fontScale, color, lineWidth) to draw text on an image.

Now that we have the drawing functions, we just need to call the face recognizer's predict(face) method to test our face recognizer on test images. Following function does the prediction for us.

#this function recognizes the person in image passed
#and draws a rectangle around detected face with name of the 
#subject
def predict(test_img):
    #make a copy of the image as we don't want to change the original image
    img = test_img.copy()
    #detect face from the image
    face, rect = detect_face(img)

    #predict the image using our face recognizer (predict returns a label and a confidence value)
    label, confidence = face_recognizer.predict(face)
    #get name of respective label returned by face recognizer
    label_text = subjects[label]
    
    #draw a rectangle around face detected
    draw_rectangle(img, rect)
    #draw name of predicted person
    draw_text(img, label_text, rect[0], rect[1]-5)
    
    return img
  • line-6 make a copy of the test image so we don't modify the original
  • line-7 detect face from the test image
  • line-11 recognize the face by calling the face recognizer's predict(face) method, which returns a label (and a confidence value)
  • line-12 get the name associated with the label
  • line-16 draw a rectangle around the detected face
  • line-18 draw the name of the predicted subject above the face rectangle

Now that we have the prediction function well defined, next step is to actually call this function on our test images and display those test images to see if our face recognizer correctly recognized them. So let's do it. This is what we have been waiting for.

print("Predicting images...")

#load test images
test_img1 = cv2.imread("test-data/test1.jpg")
test_img2 = cv2.imread("test-data/test2.jpg")

#perform a prediction
predicted_img1 = predict(test_img1)
predicted_img2 = predict(test_img2)
print("Prediction complete")

#create a figure of 2 plots (one for each test image)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))

#display test image1 result
ax1.imshow(cv2.cvtColor(predicted_img1, cv2.COLOR_BGR2RGB))

#display test image2 result
ax2.imshow(cv2.cvtColor(predicted_img2, cv2.COLOR_BGR2RGB))

#display both images
cv2.imshow("Tom cruise test", predicted_img1)
cv2.imshow("Shahrukh Khan test", predicted_img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.waitKey(1)
cv2.destroyAllWindows()
Predicting images...
Prediction complete

Woohoo! Isn't it beautiful? Indeed, it is!

End Notes

Face Recognition is a fascinating idea to work on and OpenCV has made it extremely simple and easy for us to code it. It just takes a few lines of code to have a fully working face recognition application and we can switch between all three face recognizers with a single line of code change. It's that simple.

Although the EigenFaces, FisherFaces and LBPH face recognizers are good, there are even better ways to perform face recognition, such as using Histograms of Oriented Gradients (HOGs) and neural networks. More advanced face recognition algorithms are nowadays implemented using a combination of OpenCV and machine learning. I have plans to write some articles on those more advanced methods as well, so stay tuned!

Download Details:
Author: informramiz
Source Code: https://github.com/informramiz/opencv-face-recognition-python
License: MIT License

#opencv  #python #facerecognition 

Face Recognition with OpenCV and Python