ManageTeamz - Arabic Version

ManageTeamz is ready for rollout with its Arabic version.
Explore more with free signup: https://bit.ly/3vkDFQQ

#AppLocalization #DeliverySoftware #TrackingSoftware #Dubai #UAE #MiddleEast

Fredy Larson

Database Schema Versioning and Migrations Made Simpler For High Speed CI/CD

If you are a back-end developer, you are often faced with having to migrate your database schema with each new release.

Liquibase is a framework that makes it easier to upgrade your database schema.

In this article, I’ll explain how Liquibase can be used in a Java project, with Spring/Hibernate, to version the database schema.

How does Liquibase work?

Changelog

Liquibase works with changelog files - the ordered list of all the changes that need to be executed to update the database.

There are 4 supported formats for these Changelogs: SQL, XML, YAML, and JSON.
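
For instance, a minimal changelog in the SQL format might look like the following sketch (the table and the author name are made up for illustration):

--liquibase formatted sql

--changeset fredy:1
CREATE TABLE person (
    id INT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);

--changeset fredy:2
ALTER TABLE person ADD COLUMN email VARCHAR(255);

Each changeset is identified by an author and an id, and Liquibase records which changesets have already been applied so only new ones run on the next release.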

#liquibase #database #versioning #plugins #java #hackernoon-top-story #database-schema-versioning #database-schema-migration

Python Library

Vulkan Kompute: General GPU Compute Framework for Graphics Cards

 

Kompute

The general purpose GPU compute framework for cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends)

Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU acceleration use cases.

💬 Join the Discord & Community Calls | 🔋 Documentation | 💻 Blog Post | 💾 Examples


Kompute is backed by the Linux Foundation as a hosted project by the LF AI & Data Foundation.

  

Principles & Features

Getting Started

Below you can find a GPU multiplication example using the C++ and Python Kompute interfaces.

You can join the Discord for questions / discussion, open a GitHub issue, or read the documentation.

Your First Kompute (C++)

The C++ interface provides low level access to the native components of Kompute, enabling advanced optimizations as well as extension of its components.


#include <iostream>
#include <memory>
#include <string>
#include <vector>

#include <kompute/Kompute.hpp>

void kompute(const std::string& shader) {

    // 1. Create Kompute Manager with default settings (device 0, first queue and no extensions)
    kp::Manager mgr; 

    // 2. Create and initialise Kompute Tensors through manager

    // Default tensor constructor simplifies creation of float values
    auto tensorInA = mgr.tensor({ 2., 2., 2. });
    auto tensorInB = mgr.tensor({ 1., 2., 3. });
    // Explicit type constructor supports uint32, int32, double, float and bool
    auto tensorOutA = mgr.tensorT<uint32_t>({ 0, 0, 0 });
    auto tensorOutB = mgr.tensorT<uint32_t>({ 0, 0, 0 });

    std::vector<std::shared_ptr<kp::Tensor>> params = {tensorInA, tensorInB, tensorOutA, tensorOutB};

    // 3. Create algorithm based on shader (supports buffers & push/spec constants)
    kp::Workgroup workgroup({3, 1, 1});
    std::vector<float> specConsts({ 2 });
    std::vector<float> pushConstsA({ 2.0 });
    std::vector<float> pushConstsB({ 3.0 });

    auto algorithm = mgr.algorithm(params,
                                   // See documentation shader section for compileSource
                                   compileSource(shader),
                                   workgroup,
                                   specConsts,
                                   pushConstsA);

    // 4. Run operation synchronously using sequence
    mgr.sequence()
        ->record<kp::OpTensorSyncDevice>(params)
        ->record<kp::OpAlgoDispatch>(algorithm) // Binds default push consts
        ->eval() // Evaluates the two recorded operations
        ->record<kp::OpAlgoDispatch>(algorithm, pushConstsB) // Overrides push consts
        ->eval(); // Evaluates only last recorded operation

    // 5. Sync results from the GPU asynchronously
    auto sq = mgr.sequence();
    sq->evalAsync<kp::OpTensorSyncLocal>(params);

    // ... Do other work asynchronously whilst GPU finishes

    sq->evalAwait();

    // Prints the first output which is: { 4, 8, 12 }
    for (const float& elem : tensorOutA->vector()) std::cout << elem << "  ";
    // Prints the second output which is: { 10, 10, 10 }
    for (const float& elem : tensorOutB->vector()) std::cout << elem << "  ";

} // Manages / releases all CPU and GPU memory resources

int main() {

    // Define a raw string shader (or use the Kompute tools to compile to SPIRV / C++ header
    // files). This shader shows some of the main components including constants, buffers, etc
    std::string shader = (R"(
        #version 450

        layout (local_size_x = 1) in;

        // The input tensors bind index is relative to index in parameter passed
        layout(set = 0, binding = 0) buffer buf_in_a { float in_a[]; };
        layout(set = 0, binding = 1) buffer buf_in_b { float in_b[]; };
        layout(set = 0, binding = 2) buffer buf_out_a { uint out_a[]; };
        layout(set = 0, binding = 3) buffer buf_out_b { uint out_b[]; };

        // Kompute supports push constants updated on dispatch
        layout(push_constant) uniform PushConstants {
            float val;
        } push_const;

        // Kompute also supports spec constants on initalization
        layout(constant_id = 0) const float const_one = 0;

        void main() {
            uint index = gl_GlobalInvocationID.x;
            out_a[index] += uint( in_a[index] * in_b[index] );
            out_b[index] += uint( const_one * push_const.val );
        }
    )");

    // Run the function declared above with our raw string shader
    kompute(shader);
}

Your First Kompute (Python)

The Python package provides a high-level interactive interface that enables experimentation whilst ensuring high performance and fast development workflows.


import numpy as np
import kp

from .utils import compile_source # using util function from python/test/utils

def kompute(shader):
    # 1. Create Kompute Manager with default settings (device 0, first queue and no extensions)
    mgr = kp.Manager()

    # 2. Create and initialise Kompute Tensors through manager

    # Default tensor constructor simplifies creation of float values
    tensor_in_a = mgr.tensor([2, 2, 2])
    tensor_in_b = mgr.tensor([1, 2, 3])
    # Explicit type constructor supports uint32, int32, double, float and bool
    tensor_out_a = mgr.tensor_t(np.array([0, 0, 0], dtype=np.uint32))
    tensor_out_b = mgr.tensor_t(np.array([0, 0, 0], dtype=np.uint32))

    params = [tensor_in_a, tensor_in_b, tensor_out_a, tensor_out_b]

    # 3. Create algorithm based on shader (supports buffers & push/spec constants)
    workgroup = (3, 1, 1)
    spec_consts = [2]
    push_consts_a = [2]
    push_consts_b = [3]

    # See documentation shader section for compile_source
    spirv = compile_source(shader)

    algo = mgr.algorithm(params, spirv, workgroup, spec_consts, push_consts_a)

    # 4. Run operation synchronously using sequence
    (mgr.sequence()
        .record(kp.OpTensorSyncDevice(params))
        .record(kp.OpAlgoDispatch(algo)) # Binds default push consts provided
        .eval() # evaluates the two recorded ops
        .record(kp.OpAlgoDispatch(algo, push_consts_b)) # Overrides push consts
        .eval()) # evaluates only the last recorded op

    # 5. Sync results from the GPU asynchronously
    sq = mgr.sequence()
    sq.eval_async(kp.OpTensorSyncLocal(params))

    # ... Do other work asynchronously whilst GPU finishes

    sq.eval_await()

    # Prints the first output which is: { 4, 8, 12 }
    print(tensor_out_a)
    # Prints the second output which is: { 10, 10, 10 }
    print(tensor_out_b)

if __name__ == "__main__":

    # Define a raw string shader (or use the Kompute tools to compile to SPIRV / C++ header
    # files). This shader shows some of the main components including constants, buffers, etc
    shader = """
        #version 450

        layout (local_size_x = 1) in;

        // The input tensors bind index is relative to index in parameter passed
        layout(set = 0, binding = 0) buffer buf_in_a { float in_a[]; };
        layout(set = 0, binding = 1) buffer buf_in_b { float in_b[]; };
        layout(set = 0, binding = 2) buffer buf_out_a { uint out_a[]; };
        layout(set = 0, binding = 3) buffer buf_out_b { uint out_b[]; };

        // Kompute supports push constants updated on dispatch
        layout(push_constant) uniform PushConstants {
            float val;
        } push_const;

        // Kompute also supports spec constants on initalization
        layout(constant_id = 0) const float const_one = 0;

        void main() {
            uint index = gl_GlobalInvocationID.x;
            out_a[index] += uint( in_a[index] * in_b[index] );
            out_b[index] += uint( const_one * push_const.val );
        }
    """

    kompute(shader)

Interactive Notebooks & Hands on Videos

You can try out the interactive Colab notebooks, which give you access to a free GPU. Both the C++ and Python examples are available:

  • Try the interactive C++ Colab from the Blog Post
  • Try the interactive Python Colab from the Blog Post

You can also check out the two following talks presented at the FOSDEM 2021 conference.

Both videos have timestamps which will allow you to skip to the most relevant section for you - the intro & motivations for both are almost the same, so you can skip to the more specific content.

  • Watch the video for C++ Enthusiasts
  • Watch the video for Python & Machine Learning Enthusiasts

Architectural Overview

The core architecture of Kompute is built around a small set of components - the Manager, Sequences, Tensors, Algorithms and Operations used in the examples above.

To see a full breakdown you can read further in the C++ Class Reference.

Diagrams of the full architecture and of the simplified Kompute components are available in the documentation; the inline versions are very small, so check the full reference diagram in the docs for details.

Asynchronous and Parallel Operations

Kompute provides flexibility to run operations in an asynchronous way through vk::Fences. Furthermore, Kompute enables explicit allocation of queues, which allows for parallel execution of operations across queue families.

The image below provides an intuition on how Kompute Sequences can be allocated to different queues to enable parallel execution based on hardware. You can see the hands-on example, as well as the detailed documentation page describing how it would work using an NVIDIA 1650 as an example.
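
As a rough sketch of that idea (the queue family indices, algorithm names and constructor arguments below are illustrative assumptions based on the documentation's NVIDIA 1650 example, not verbatim from it):

// Request one queue from family 0 and one from family 2 when creating the Manager.
kp::Manager mgr(0, { 0, 2 });

// Each sequence is bound to one of the requested queues by index.
auto sqA = mgr.sequence(0);
auto sqB = mgr.sequence(1);

// algorithmA / algorithmB are hypothetical algorithms created via mgr.algorithm(...)
sqA->evalAsync<kp::OpAlgoDispatch>(algorithmA);
sqB->evalAsync<kp::OpAlgoDispatch>(algorithmB);

// Work submitted to different queue families may execute in parallel on the GPU.
sqA->evalAwait();
sqB->evalAwait();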

Mobile Enabled

Kompute has been optimized to work in mobile environments. The build system enables dynamic loading of the Vulkan shared library for Android environments, together with a working Android NDK wrapper for the C++ headers.

For a full deep dive you can read the blog post "Supercharging your Mobile Apps with On-Device GPU Accelerated Machine Learning".

You can also access the end-to-end example code in the repository, which can be run using Android Studio.

 

More examples

Simple examples

End-to-end examples

Python Package

Besides the C++ core SDK you can also use the Python package of Kompute, which exposes the same core functionality and supports interoperability with Python objects like lists, NumPy arrays, etc.

The only dependencies are Python 3.5+ and CMake 3.4.1+. You can install Kompute from the PyPI package using the following command.

pip install kp

You can also install from master branch using:

pip install git+git://github.com/KomputeProject/kompute.git@master

For further details you can read the Python Package documentation or the Python Class Reference documentation.

C++ Build Overview

The build system provided uses CMake, which allows for cross platform builds.

The top level Makefile provides a set of optimized configurations for development as well as the docker image build, but you can start a build with the following command:

   cmake -Bbuild

You are also able to add Kompute to your repo with add_subdirectory - the Android example CMakeLists.txt file shows how this would be done.
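
A minimal sketch of that approach (the clone path and the exported target name here are assumptions; the Android example CMakeLists.txt is the authoritative reference):

add_subdirectory(external/kompute)   # path to where the Kompute repo was cloned

add_executable(my_app src/main.cpp)
target_link_libraries(my_app kompute::kompute)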

For a more advanced overview of the build configuration check out the Build System Deep Dive documentation.

Kompute Development

We appreciate PRs and issues. If you want to contribute, try checking the "Good first issue" tag, but even using Kompute and reporting issues is a great contribution!

Contributing

Dev Dependencies

  • Testing
    • GTest
  • Documentation
    • Doxygen (with Dot)
    • Sphinx

Development

  • Follows Mozilla C++ Style Guide https://www-archive.mozilla.org/hacking/mozilla-style-guide.html
    • Uses a post-commit hook to run the linter; you can set it up so it runs the linter before commit
    • All dependencies are defined in vcpkg.json
  • Uses cmake as build system, and provides a top level makefile with recommended command
  • Uses xxd (or xxd.exe windows 64bit port) to convert shader spirv to header files
  • Uses doxygen and sphinx for documentation and autodocs
  • Uses vcpkg for finding the dependencies; it's the recommended setup for retrieving the libraries

If you want to run with debug layers you can add them with the KOMPUTE_ENV_DEBUG_LAYERS parameter as:

export KOMPUTE_ENV_DEBUG_LAYERS="VK_LAYER_LUNARG_api_dump"

Updating documentation

To update the documentation you will need to:

  • Run the gendoxygen target in the build system
  • Run the gensphynx target in the build system
  • Push to GitHub pages with make push_docs_to_ghpages
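
Assuming a CMake build directory named build, one possible way to invoke these steps is the following sketch (the target names come from the list above; the directory name is an assumption):

cmake --build build --target gendoxygen
cmake --build build --target gensphynx
make push_docs_to_ghpages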

Running tests

Running the unit tests has been significantly simplified for contributors.

The tests run on CPU, and can be triggered using the ACT command line interface (https://github.com/nektos/act) - once you install the command line (and start the Docker daemon) you just have to type:

$ act

[Python Tests/python-tests] 🚀  Start image=axsauze/kompute-builder:0.2
[C++ Tests/cpp-tests      ] 🚀  Start image=axsauze/kompute-builder:0.2
[C++ Tests/cpp-tests      ]   🐳  docker run image=axsauze/kompute-builder:0.2 entrypoint=["/usr/bin/tail" "-f" "/dev/null"] cmd=[]
[Python Tests/python-tests]   🐳  docker run image=axsauze/kompute-builder:0.2 entrypoint=["/usr/bin/tail" "-f" "/dev/null"] cmd=[]
...

The repository contains unit tests for the C++ and Python code, which can be found under the test/ and python/test folders.

The tests are currently run through the CI using GitHub Actions. It uses the images found in docker-builders/.

In order to minimise hardware requirements the tests can run without a GPU, directly on the CPU using SwiftShader.

For more information on how the CI and tests are setup, you can go to the CI, Docker and Tests Section in the documentation.

Motivations

This project started after seeing that a lot of new and renowned ML & DL projects like PyTorch, TensorFlow, Alibaba DNN, Tencent NCNN - among others - have either integrated or are looking to integrate the Vulkan SDK to add mobile (and cross-vendor) GPU support.

The Vulkan SDK offers a great low level interface that enables highly specialized optimizations - however it comes at the cost of highly verbose code, requiring 500-2000 lines of code just to begin writing application code. This has resulted in each of these projects having to implement the same baseline to abstract the non-compute related features of the Vulkan SDK. This large amount of non-standardised boilerplate can result in limited knowledge transfer, a higher chance of unique framework implementation bugs being introduced, etc.

We are currently developing Kompute not to hide the Vulkan SDK interface (as it's incredibly well designed) but to augment it with a direct focus on the Vulkan SDK's GPU computing capabilities. This article provides a high level overview of the motivations of Kompute, together with a set of hands on examples that introduce both GPU computing as well as the core Kompute architecture.

Download Details:
Author: KomputeProject
Source Code: https://github.com/KomputeProject/kompute
License: Apache-2.0 License

#python #tensorflow #machine-learning #cpluplus #deep-learning #linux 

Mary Turcotte

Rollup Plugin Glsl Optimize: Import GLSL Source Files As Strings

rollup-plugin-glsl-optimize 

Import GLSL source files as strings. Pre-processed, validated and optimized with Khronos Group SPIRV-Tools.

Primary use-case is processing WebGL2 / GLSL ES 300 shaders.

import frag from './shaders/myShader.frag';
console.log(frag);

Features

GLSL Optimizer

For WebGL2 / GLSL ES >= 300

With optimize: true (default) shaders will be compiled to SPIR-V (OpenGL semantics) and optimized for performance using the Khronos SPIR-V Tools Optimizer before being cross-compiled back to GLSL.

Shader Preprocessor

Shaders are preprocessed and validated using the Khronos Glslang Validator.

Macros are run at build time with support for C-style #include directives: *

#version 300 es
#include "postProcessingShared.glsl"
#include "dofCircle.glsl"

void main() {
  outColor = CircleDof(UVAndScreenPos, Color, ColorCoc);
}

* Via the GL_GOOGLE_include_directive extension. However, an #extension directive is neither required nor recommended in your final inlined code.

Supports glslify

Specify glslify: true to process shader sources with glslify (a node.js-style module system for GLSL).

And install glslify in your devDependencies with npm i -D glslify

Installation

npm i rollup-plugin-glsl-optimize -D

Khronos tools

This plugin uses the Khronos Glslang Validator, Khronos SPIRV-Tools Optimizer and Khronos SPIRV Cross compiler.

Binaries are automatically installed for:

  • Windows 64bit (MSVC 2017)
  • MacOS x86_64 (clang)
  • Ubuntu Trusty / Debian Buster amd64 (clang)

Paths can be manually provided / overridden with the GLSLANG_VALIDATOR, GLSLANG_OPTIMIZER, GLSLANG_CROSS environment variables.

Usage

// rollup.config.mjs
import {default as glslOptimize} from 'rollup-plugin-glsl-optimize';

export default {
    // ...
    plugins: [
        glslOptimize(),
    ]
};

Shader stages

The following shader stages are supported by the Khronos tools and recognized by file extension:

Shader Stage     File Extensions
Vertex           .vs, .vert, .vs.glsl, .vert.glsl
Fragment         .fs, .frag, .fs.glsl, .frag.glsl
Geometry*        .geom, .geom.glsl
Compute*         .comp, .comp.glsl
Tess Control*    .tesc, .tesc.glsl
Tess Eval*       .tese, .tese.glsl

* Unsupported in WebGL2

Options

  • include : PathFilter (default table above) File extensions within rollup to include. Though this option can be reconfigured, shader stage detection still operates based on the table above.
  • exclude : PathFilter (default undefined) File extensions within rollup to exclude.
  • includePaths : string[] (default undefined) Additional search paths for #include directive (source file directory is always searched)

Features

  • optimize : boolean (default true) Optimize via SPIR-V as described in the Optimization section [requires WebGL2 / GLSL ES >= 300]. When disabled simply runs the preprocessor [all supported GLSL versions].
  • compress : boolean (default true) Strip all whitespace in the sources

Debugging

  • sourceMap : boolean (default true) Emit source maps. These contain the final preprocessed/optimized GLSL source (but not stripped of whitespace) to aid debugging.
  • emitLineDirectives : boolean (default false) Emit #line NN "original.file" directives for debugging - useful with #include. Note this requires the GL_GOOGLE_cpp_style_line_directive extension so the shader will fail to run in drivers that lack support.

Preprocessor

  • optimizerPreserveUnusedBindings : boolean (default true) Ensure that the optimizer preserves all declared bindings, even when those bindings are unused.
  • preamble : string (default undefined) Prepended to the shader source (after the #version directive, before the preprocessor runs)

glslify

  • glslify : boolean (default false) Process sources using glslify prior to all preprocessing, validation and optimization.
  • glslifyOptions (default undefined) When glslify enabled, pass these additional options to glslify.compile().

Advanced Options

  • optimizerDebugSkipOptimizer : boolean (default false) When optimize enabled, skip the SPIR-V optimizer - compiles to SPIR-V then cross-compiles back to GLSL immediately.
  • suppressLineExtensionDirective : boolean (default false) When emitLineDirectives enabled, suppress the GL_GOOGLE_cpp_style_line_directive directive.
  • extraValidatorParams, extraOptimizerParams, extraCrossParams : string[] (default undefined) Additional parameters for the Khronos Glslang Validator here, the Khronos SPIR-V Optimizer here, and the Khronos SPIR-V Cross compiler here.
  • glslangValidatorPath, glslangOptimizerPath, glslangCrossPath : string (default undefined) Provide / override binary tool paths.

It's recommended to instead use the environment variables GLSLANG_VALIDATOR, GLSLANG_OPTIMIZER, GLSLANG_CROSS where needed. They always take precedence if set.
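
For example, a configuration combining several of the options above might look like this sketch (the option names are taken from the lists above; the paths and preamble value are placeholders):

// rollup.config.mjs
import {default as glslOptimize} from 'rollup-plugin-glsl-optimize';

export default {
    // ...
    plugins: [
        glslOptimize({
            optimize: true,                    // SPIR-V optimization (WebGL2 / GLSL ES >= 300 only)
            compress: false,                   // keep whitespace while debugging
            includePaths: ['src/shaders/lib'], // extra search paths for #include
            emitLineDirectives: true,          // #line directives for easier debugging
            preamble: '#define QUALITY 2',     // placeholder preamble prepended after #version
        }),
    ]
};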

Changelog

Available in CHANGES.md.

Caveats & Known Issues

  • This plugin handles glsl and glslify by itself. Use with conflicting plugins (e.g. rollup-plugin-glsl, rollup-plugin-glslify) will cause unpredictable results.
  • Optimizer: lowp precision qualifier - emitted as mediump
    SPIR-V has a single RelaxedPrecision decoration for 16-32bit precision. However most implementations actually treat mediump and lowp equivalently, hence the lack of need for it in SPIR-V.

License

Released under the MIT license.
Strip whitespace function adapted from code by Vincent Wochnik (rollup-plugin-glsl).

Khronos tool binaries (built by the upstream projects) are distributed and installed with this plugin under the terms of the Apache License Version 2.0. See the corresponding LICENSE files installed in the bin folder and the binary releases.


Author: docd27
Source code: https://github.com/docd27/rollup-plugin-glsl-optimize
License: MIT license

#javascript #Rollup 

Standard-version: Automate Versioning and CHANGELOG Generation

Standard Version

standard-version is deprecated. If you're a GitHub user, I recommend release-please as an alternative. I encourage folks to fork this repository and, if a fork gets popular, I will link to it in this README.

A utility for versioning using semver and CHANGELOG generation powered by Conventional Commits.

Having problems? Want to contribute? Join us on the node-tooling community Slack.

How It Works:

  1. Follow the Conventional Commits Specification in your repository.
  2. When you're ready to release, run standard-version.

standard-version will then do the following:

  1. Retrieve the current version of your repository by looking at packageFiles, falling back to the last git tag.
  2. Bump the version in bumpFiles based on your commits.
  3. Generate a changelog based on your commits (uses conventional-changelog under the hood).
  4. Create a new commit including your bumpFiles and the updated CHANGELOG.
  5. Create a new tag with the new version number.

bumpFiles, packageFiles and updaters

standard-version uses a few key concepts for handling version bumping in your project.

  • packageFiles – User-defined files where versions can be read from and be "bumped".
    • Examples: package.json, manifest.json
    • In most cases (including the default), packageFiles are a subset of bumpFiles.
  • bumpFiles – User-defined files where versions should be "bumped", but not explicitly read from.
    • Examples: package-lock.json, npm-shrinkwrap.json
  • updaters – Simple modules used for reading packageFiles and writing to bumpFiles.

By default, standard-version assumes you're working in a Node.js-based project... because of this, for the majority of projects you might never need to interact with these options.

That said, if you find yourself asking How can I use standard-version for additional metadata files, languages or version files? – these configuration options will help!

Installing standard-version

As a local npm run script

Install and add to devDependencies:

npm i --save-dev standard-version

Add an npm run script to your package.json:

{
  "scripts": {
    "release": "standard-version"
  }
}

Now you can use npm run release in place of npm version.

This has the benefit of making your repo/package more portable, so that other developers can cut releases without having to globally install standard-version on their machine.

As global bin

Install globally (add to your PATH):

npm i -g standard-version

Now you can use standard-version in place of npm version.

This has the benefit of allowing you to use standard-version on any repo/package without adding a dev dependency to each one.

Using npx

As of npm@5.2.0, npx is installed alongside npm. Using npx you can use standard-version without having to keep a package.json file by running: npx standard-version.

This method is especially useful when using standard-version in non-JavaScript projects.

Configuration

You can configure standard-version either by:

  1. Placing a standard-version stanza in your package.json (assuming your project is JavaScript).
  2. Creating a .versionrc, .versionrc.json or .versionrc.js.
  • If you are using a .versionrc.js your default export must be a configuration object, or a function returning a configuration object.

Any of the command line parameters accepted by standard-version can instead be provided via configuration. Please refer to the conventional-changelog-config-spec for details on available configuration options.
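
For example, a minimal .versionrc.js exporting a function (this is only a sketch; the skip option used here is documented further below):

// .versionrc.js
module.exports = () => ({
  skip: {
    changelog: true
  }
});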

Customizing CHANGELOG Generation

By default (as of 6.0.0), standard-version uses the conventionalcommits preset.

This preset:

  • Adheres closely to the conventionalcommits.org specification.
  • Is highly configurable, following the configuration specification maintained here.
    • We've documented these config settings as a recommendation to other tooling makers.

There are a variety of dials and knobs you can turn related to CHANGELOG generation.

As an example, suppose you're using GitLab rather than GitHub. You might then modify the following variables:

  • commitUrlFormat: the URL format of commit SHAs detected in commit messages.
  • compareUrlFormat: the URL format used to compare two tags.
  • issueUrlFormat: the URL format used to link to issues.

This makes these URLs match GitLab's format rather than GitHub's.
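
A possible .versionrc for a GitLab-hosted project might look like the following sketch (the URL templates are assumptions based on GitLab's typical routes, not taken verbatim from the config spec):

{
  "commitUrlFormat": "{{host}}/{{owner}}/{{repository}}/-/commit/{{hash}}",
  "compareUrlFormat": "{{host}}/{{owner}}/{{repository}}/-/compare/{{previousTag}}...{{currentTag}}",
  "issueUrlFormat": "{{host}}/{{owner}}/{{repository}}/-/issues/{{id}}"
}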

CLI Usage

NOTE: To pass nested configurations to the CLI without defining them in the package.json use dot notation as the parameters e.g. --skip.changelog.

First Release

To generate your changelog for your first release, simply do:

# npm run script
npm run release -- --first-release
# global bin
standard-version --first-release
# npx
npx standard-version --first-release

This will tag a release without bumping the version in your bumpFiles.

When you are ready, push the git tag and npm publish your first release. \o/

Cutting Releases

If you typically use npm version to cut a new release, do this instead:

# npm run script
npm run release
# or global bin
standard-version

As long as your git commit messages are conventional and accurate, you no longer need to specify the semver type - and you get CHANGELOG generation for free! \o/

After you cut a release, you can push the new git tag and npm publish (or npm publish --tag next) when you're ready.

Release as a Pre-Release

Use the flag --prerelease to generate pre-releases:

Suppose the last version of your code is 1.0.0, and your code to be committed has patch-level changes. Run:

# npm run script
npm run release -- --prerelease

This will tag your version as: 1.0.1-0.

If you want to name the pre-release, you specify the name via --prerelease <name>.

For example, suppose your pre-release should contain the alpha prefix:

# npm run script
npm run release -- --prerelease alpha

This will tag the version as: 1.0.1-alpha.0

Release as a Target Type Imperatively (npm version-like)

To forgo the automated version bump use --release-as with the argument major, minor or patch.

Suppose the last version of your code is 1.0.0 and you've only landed fix: commits, but you would like your next release to be a minor. Simply run the following:

# npm run script
npm run release -- --release-as minor
# Or
npm run release -- --release-as 1.1.0

You will get version 1.1.0 rather than what would be the auto-generated version 1.0.1.

NOTE: you can combine --release-as and --prerelease to generate a release. This is useful when publishing experimental feature(s).
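
For example, to cut an alpha pre-release that is forced to be a minor bump (given a last version of 1.0.0, this would produce something like 1.1.0-alpha.0):

# npm run script
npm run release -- --release-as minor --prerelease alpha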

Prevent Git Hooks

If you use git hooks, like pre-commit, to test your code before committing, you can prevent hooks from being verified during the commit step by passing the --no-verify option:

# npm run script
npm run release -- --no-verify
# or global bin
standard-version --no-verify

Signing Commits and Tags

If you have your GPG key set up, add the --sign or -s flag to your standard-version command.

Lifecycle Scripts

standard-version supports lifecycle scripts. These allow you to execute your own supplementary commands during the release. The following hooks are available and execute in the order documented:

  • prerelease: executed before anything happens. If the prerelease script returns a non-zero exit code, versioning will be aborted, but it has no other effect on the process.
  • prebump/postbump: executed before and after the version is bumped. If the prebump script returns a version number, it will be used rather than the version calculated by standard-version.
  • prechangelog/postchangelog: executes before and after the CHANGELOG is generated.
  • precommit/postcommit: called before and after the commit step.
  • pretag/posttag: called before and after the tagging step.

Simply add the following to your package.json to configure lifecycle scripts:

{
  "standard-version": {
    "scripts": {
      "prebump": "echo 9.9.9"
    }
  }
}

As an example, to switch from tracking your items on GitHub to your project's Jira, use a postchangelog script to replace the URL fragment containing 'https://github.com/`myproject`/issues/' with a link to your Jira - assuming you have already installed replace:

{
  "standard-version": {
    "scripts": {
      "postchangelog": "replace 'https://github.com/myproject/issues/' 'https://myjira/browse/' CHANGELOG.md"
    }
  }
}

Skipping Lifecycle Steps

You can skip any of the lifecycle steps (bump, changelog, commit, tag), by adding the following to your package.json:

{
  "standard-version": {
    "skip": {
      "changelog": true
    }
  }
}

Committing Generated Artifacts in the Release Commit

If you want to commit generated artifacts in the release commit, you can use the --commit-all or -a flag. You will need to stage the artifacts you want to commit, so your release command could look like this:

{
  "standard-version": {
    "scripts": {
      "prerelease": "webpack -p --bail && git add <file(s) to commit>"
    }
  }
}
{
  "scripts": {
    "release": "standard-version -a"
  }
}

Dry Run Mode

Running standard-version with the flag --dry-run allows you to see what commands would be run, without committing to git or updating files.

# npm run script
npm run release -- --dry-run
# or global bin
standard-version --dry-run

Prefix Tags

Tags are prefixed with v by default. If you would like to prefix your tags with something else, you can do so with the -t flag.

standard-version -t @scope/package\@

This will prefix your tags to look something like @scope/package@2.0.0

If you do not want to have any tag prefix you can use the -t flag and provide it with an empty string as value.

Note: simply -t or --tag-prefix without any value will fall back to the default 'v'
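
For example, to use no prefix at all (producing tags like 2.0.0 instead of v2.0.0):

standard-version -t ""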

CLI Help

# npm run script
npm run release -- --help
# or global bin
standard-version --help

Code Usage

const standardVersion = require('standard-version')

// Options are the same as command line, except camelCase
// standardVersion returns a Promise
standardVersion({
  noVerify: true,
  infile: 'docs/CHANGELOG.md',
  silent: true
}).then(() => {
  // standard-version is done
}).catch(err => {
    console.error(`standard-version failed with message: ${err.message}`)
})

TIP: Use the silent option to prevent standard-version from printing to the console.

FAQ

How is standard-version different from semantic-release?

semantic-release is described as:

semantic-release automates the whole package release workflow including: determining the next version number, generating the release notes and publishing the package.

While both are based on the same foundation of structured commit messages, standard-version takes a different approach by handling versioning, changelog generation, and git tagging for you without automatic pushing (to GitHub) or publishing (to an npm registry). Use of standard-version only affects your local git repo - it doesn't affect remote resources at all. After you run standard-version, you can review your release state, correct mistakes and follow the release strategy that makes the most sense for your codebase.

We think they are both fantastic tools, and we encourage folks to use semantic-release instead of standard-version if it makes sense for their use-case.

Should I always squash commits when merging PRs?

The instructions to squash commits when merging pull requests assume that one PR equals, at most, one feature or fix.

If you have multiple features or fixes landing in a single PR and each commit uses a structured message, then you can do a standard merge when accepting the PR. This will preserve the commit history from your branch after the merge.

Although this will allow each commit to be included as separate entries in your CHANGELOG, the entries will not be able to reference the PR that pulled the changes in because the preserved commit messages do not include the PR number.

For this reason, we recommend keeping the scope of each PR to one general feature or fix. In practice, this allows you to use unstructured commit messages when committing each little change and then squash them into a single commit with a structured message (referencing the PR number) once they have been reviewed and accepted.

Can I use standard-version for additional metadata files, languages or version files?

As of version 7.1.0 you can configure multiple bumpFiles and packageFiles.

  1. Specify a custom bumpFile "filename"; this is the path to the file you want to "bump".
  2. Specify the bumpFile "updater"; this is how the file will be bumped.
     a. If you're using a common type, you can use one of standard-version's built-in updaters by specifying a type.
     b. If you're using a less-common version file, you can create your own updater.
// .versionrc
{
  "bumpFiles": [
    {
      "filename": "MY_VERSION_TRACKER.txt",
      // The `plain-text` updater assumes the file contents represents the version.
      "type": "plain-text"
    },
    {
      "filename": "a/deep/package/dot/json/file/package.json",
      // The `json` updater assumes the version is available under a `version` key in the provided JSON document.
      "type": "json"
    },
    {
      "filename": "VERSION_TRACKER.json",
      //  See "Custom `updater`s" for more details.
      "updater": "standard-version-updater.js"
    }
  ]
}

If using .versionrc.js as your configuration file, the updater may also be set as an object, rather than a path:

// .versionrc.js
const tracker = {
  filename: 'VERSION_TRACKER.json',
  updater: require('./path/to/custom-version-updater')
}

module.exports = {
  bumpFiles: [tracker],
  packageFiles: [tracker]
}

Custom updaters

An updater is expected to be a JavaScript module with at least two methods exposed: readVersion and writeVersion.

readVersion(contents: string): string

This method is used to read the version from the provided file contents.

The return value is expected to be a semantic version string.

writeVersion(contents: string, version: string): string

This method is used to write the version to the provided contents.

The return value will be written directly to the provided file, overwriting its previous contents.


Let's assume our VERSION_TRACKER.json has the following contents:

{
  "tracker": {
    "package": {
      "version": "1.0.0"
    }
  }
}

An acceptable standard-version-updater.js would be:

// standard-version-updater.js
const stringifyPackage = require('stringify-package')
const detectIndent = require('detect-indent')
const detectNewline = require('detect-newline')

module.exports.readVersion = function (contents) {
  return JSON.parse(contents).tracker.package.version;
}

module.exports.writeVersion = function (contents, version) {
  const json = JSON.parse(contents)
  let indent = detectIndent(contents).indent
  let newline = detectNewline(contents)
  json.tracker.package.version = version
  return stringifyPackage(json, indent, newline)
}

Author: Conventional-changelog
Source Code: https://github.com/conventional-changelog/standard-version 
License: ISC license

#node #javascript #version #git #cli 

Lenora Hauck

How To Install Git on Ubuntu 16.04 LTS

Git is one of the most popular tools used as a distributed version control system (VCS). Git is commonly used for source code management (SCM) and has become more widely used than older VCSs like SVN. In this article, we’ll show you how to install Git on your Ubuntu 16.04 dedicated server.

Installing Git on Ubuntu 16.04 LTS

We have also created a convenient video tutorial that outlines how to install Git on an Ubuntu 16.04 server.

Now, let’s get started on that install…

Preflight Check

  • You should be running a server on any Ubuntu LTS release VPS.
  • You will need to log in via SSH as the root user.

First, as always, we should start out by running general OS and package updates. On Ubuntu we’ll do this by running:

apt-get update

After you have run the general updates on the server you can get started with installing Git.

  1. Install Git

     apt-get install git-core

     You may be asked to confirm the download and installation; simply enter y to confirm. It’s that simple - Git should be installed and ready to use!

  2. Confirm the Git installation

     With the main installation done, check that the executable file is set up and accessible. The best way to do this is simply to run Git with the version command.

     git --version
     git version 2.7.4

  3. Configure Git’s settings (for the root user)

     It’s a good idea to set up your user for Git now, to prevent any commit errors later. We’ll set up the user testuser with the e-mail address testuser@example.com.

     git config --global user.name "testuser"
     git config --global user.email "testuser@example.com"

Note:

It’s important to know that Git configs work on a user-by-user basis. For example, if you have a ‘david’ Linux user who will be working with Git, then David should run the same commands from his own user account. By doing this, the commits made by the ‘david’ Linux user will be recorded under his details in Git.
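
For example, for the hypothetical ‘david’ user mentioned above (the name and e-mail address are placeholders):

su - david
git config --global user.name "david"
git config --global user.email "david@example.com"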

#distributed version control #git #linux #scm #source code management #tools #ubuntu #ubuntu 16.04 #vcs #version control #version control system