Nigel Uys


10 Favorite Libraries for Standard CLI in Go

In today's post we will learn about 10 Favorite Libraries for Standard CLI in Go. 

What is Command Line Interface (CLI)?

CLI is a command line program that accepts text input to execute operating system functions.

In the 1960s, when computer terminals were the only way to interact with computers, the command line was the only interface.

In the 1970s and 1980s, command line input was commonly used by Unix systems and PC systems like MS-DOS and Apple DOS.

Today, with graphical user interfaces (GUI), most users never use command-line interfaces (CLI).

However, CLI is still used by software developers and system administrators to configure computers, install software, and access features that are not available in the graphical interface.
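The essence of a CLI hasn't changed: a program reads text arguments and writes text output. As a minimal illustration, here is a sketch using only Go's standard library; the greet helper and its behaviour are invented for this example:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// greet turns the command-line arguments into the program's output.
// The helper name and behaviour are invented for this illustration.
func greet(args []string) string {
	if len(args) == 0 {
		return "usage: hello <name>"
	}
	return "Hello, " + strings.Join(args, " ") + "!"
}

func main() {
	// os.Args[0] is the program path; everything after it is user input.
	fmt.Println(greet(os.Args[1:]))
}
```

Every library below builds on this same foundation, adding flag parsing, subcommands, help output and completion on top.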

Table of contents:

  • Acmd - Simple, useful and opinionated CLI package in Go.
  • Argparse - Command line argument parser inspired by Python's argparse module.
  • Argv - Go library to split command line string as arguments array using the bash syntax.
  • Carapace - Command argument completion generator for spf13/cobra.
  • Carapace-bin - Multi-shell multi-command argument completer.
  • Carapace-spec - Define simple completions using a spec file.
  • CLI - Feature-rich and easy to use command-line package based on golang struct tags.
  • CLI - Simple and complete API for building command line interfaces in Go.
  • Climax - Alternative CLI with "human face", in spirit of Go command.
  • Clîr - A Simple and Clear CLI library. Dependency free.

1 - Acmd: Simple, useful and opinionated CLI package in Go.

Popular CLI libraries (or rather frameworks) have large and unclear APIs. In most cases you just want to define commands for your CLI application and run them without additional work. This package does exactly that, providing a small API, good defaults and clear code.


Go version 1.17+

go get


cmds := []acmd.Command{
	{
		Name:        "now",
		Description: "prints current time",
		ExecFunc: func(ctx context.Context, args []string) error {
			now := time.Now()
			fmt.Printf("now: %s\n", now.Format("15:04:05"))
			return nil
		},
	},
	{
		Name:        "status",
		Description: "prints status of the system",
		ExecFunc: func(ctx context.Context, args []string) error {
			// do something with ctx :)
			return nil
		},
	},
}

// all the acmd.Config fields are optional
r := acmd.RunnerOf(cmds, acmd.Config{
	AppName:        "acmd-example",
	AppDescription: "Example of acmd package",
	Version:        "the best v0.x.y",
	// Context - if nil `signal.Notify` will be used
	// Args - if nil `os.Args[1:]` will be used
	// Usage - if nil default print will be used
})

if err := r.Run(); err != nil {
	panic(err)
}

Also see examples: examples_test.go.

View on Github

2 - Argparse: Command line argument parser inspired by Python's argparse module.

Let's be honest -- Go's standard command-line argument parser, flag, terribly sucks. It cannot come anywhere close to Python's argparse module. This is why this project exists.

The goal of this project is to bring the ease of use and flexibility of argparse to Go, which is where the name of this package comes from.
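For comparison, this is roughly what a small print tool looks like with the standard library's flag package. Note that flag has no built-in notion of required options, so the check is hand-written; parsePrint is a name made up for this sketch, not part of argparse:

```go
package main

import (
	"errors"
	"flag"
	"fmt"
	"os"
)

// parsePrint parses a required -s flag from args using only the standard
// flag package. The name parsePrint is made up for this sketch.
func parsePrint(args []string) (string, error) {
	fs := flag.NewFlagSet("print", flag.ContinueOnError)
	s := fs.String("s", "", "String to print")
	if err := fs.Parse(args); err != nil {
		return "", err
	}
	// flag has no Required option: the check must be hand-written,
	// which is exactly the kind of boilerplate argparse removes.
	if *s == "" {
		return "", errors.New("option -s is required")
	}
	return *s, nil
}

func main() {
	out, err := parsePrint(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println(out)
}
```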


To install and start using argparse simply do:

$ go get -u -v

You are good to go to write your first command line tool! See the Usage and Examples sections for information on how you can use it.


To start using argparse in Go, see the installation instructions above. From there you can start writing your first program. Please check out the examples in the examples/ directory to see how to use it in various ways.

Here is a basic example of the print command (from the examples/print/ directory):

package main

import (
	"fmt"
	"os"

	"github.com/akamensky/argparse"
)

func main() {
	// Create new parser object
	parser := argparse.NewParser("print", "Prints provided string to stdout")
	// Create string flag
	s := parser.String("s", "string", &argparse.Options{Required: true, Help: "String to print"})
	// Parse input
	err := parser.Parse(os.Args)
	if err != nil {
		// In case of error print error and print usage
		// This can also be done by passing -h or --help flags
		fmt.Print(parser.Usage(err))
	}
	// Finally print the collected string
	fmt.Println(*s)
}

Basic options

Create your parser instance and pass it the program name and description. If the program name is empty, it will be taken from os.Args[0] (which is okay in most cases). The description can be as long as you wish and will be used in the --help output.

parser := argparse.NewParser("progname", "Description of my awesome program. It can be as long as I wish it to be")

String will allow you to get a string from arguments, such as $ progname --string "String content"

var myString *string = parser.String("s", "string", ...)

View on Github

3 - Argv: Go library to split command line string as arguments array using the bash syntax.

Argv is a library for Go to split command line string into arguments array.
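As a rough illustration of the idea (not Argv's actual implementation), here is a naive splitter built only on the standard library; unlike Argv it handles neither quoting nor backquote substitution:

```go
package main

import (
	"fmt"
	"strings"
)

// splitPipeline naively splits a command line on '|' into pipeline stages
// and each stage on whitespace. Unlike Argv it handles no quoting and no
// backquote substitution; it only illustrates the shape of the result.
func splitPipeline(line string) [][]string {
	var stages [][]string
	for _, stage := range strings.Split(line, "|") {
		if fields := strings.Fields(stage); len(fields) > 0 {
			stages = append(stages, fields)
		}
	}
	return stages
}

func main() {
	fmt.Println(splitPipeline(" ls -a | wc -l "))
}
```

Argv's value is precisely in the cases this sketch gets wrong: quoted arguments containing spaces or pipes, and backquoted subcommands.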


func TestArgv(t *testing.T) {
	args, err := Argv(" ls   `echo /`   |  wc  -l ", func(backquoted string) (string, error) {
		return backquoted, nil
	}, nil)
	if err != nil {
		t.Fatal(err)
	}
	expects := [][]string{
		[]string{"ls", "echo /"},
		[]string{"wc", "-l"},
	}
	if !reflect.DeepEqual(args, expects) {
		t.Fatalf("unexpected args: %v", args)
	}
}

View on Github

4 - Carapace: Command argument completion generator for spf13/cobra.

Command argument completion generator for cobra. You can read more about it here: A pragmatic approach to shell completion.


Calling carapace.Gen on the root command is sufficient to enable completion script generation using the hidden command.

import (
	"github.com/rsteube/carapace"
)

carapace.Gen(rootCmd)

Standalone Mode

Carapace can also be used to provide completion for arbitrary commands, similar to aws_completer. See rsteube/carapace-bin for examples. There is also a binary to parse flags from GNU help pages at caraparse.


An example implementation can be found in the example folder.

cd example
go build .

# bash
source <(example _carapace bash)

# elvish
paths=[$@paths (pwd)]
eval (example _carapace elvish | slurp)

# fish
set PATH $PATH (pwd) 
example _carapace fish | source

# nushell
example _carapace nushell

# oil
source <(example _carapace oil)

# powershell
Set-PSReadLineOption -Colors @{ "Selection" = "`e[7m" }
Set-PSReadlineKeyHandler -Key Tab -Function MenuComplete
$env:PATH += ":$pwd"
example _carapace powershell | out-string | Invoke-Expression

# tcsh
set autolist
eval `example _carapace tcsh`

# xonsh
$COMPLETION_QUERY_LIMIT = 500 # increase limit
exec($(example _carapace xonsh))

# zsh
source <(example _carapace zsh)

example <TAB>

or use docker-compose:

docker-compose pull
docker-compose run --rm build
docker-compose run --rm [bash|elvish|fish|ion|nushell|oil|powershell|tcsh|xonsh|zsh]

example <TAB>

View on Github

5 - Carapace-bin: Multi-shell multi-command argument completer.

Carapace-bin provides argument completions for many CLI commands (see the full list here) and works across many POSIX and non-POSIX shells. This multi-shell, multi-command argument completer is based on rsteube/carapace. You can read more about this tool here: A pragmatic approach to shell completion.


A major part of the completers has been generated from help pages, so there will be some quirks here and there. Completion also depends on what rsteube/carapace is capable of so far.

Getting Started

Ensure carapace is added to PATH (Installation). Then register the completers (Setup):

# bash (~/.bashrc)
source <(carapace _carapace)

# elvish (~/.elvish/rc.elv)
eval (carapace _carapace|slurp)

# fish (~/.config/fish/
mkdir -p ~/.config/fish/completions
carapace --list | awk '{print $1}' | xargs -I{} touch ~/.config/fish/completions/{}.fish # disable auto-loaded completions (#185)
carapace _carapace | source

# nushell (~/.config/nushell/
carapace _carapace nushell # update manually according to output

# oil (~/.config/oil/oshrc)
source <(carapace _carapace)

# powershell (~/.config/powershell/Microsoft.PowerShell_profile.ps1)
Set-PSReadLineOption -Colors @{ "Selection" = "`e[7m" }
Set-PSReadlineKeyHandler -Key Tab -Function MenuComplete
carapace _carapace | Out-String | Invoke-Expression

# tcsh (~/.tcshrc)
set autolist
eval `carapace _carapace`

# xonsh (~/.config/xonsh/rc.xsh)
exec($(carapace _carapace))

# zsh (~/.zshrc)
source <(carapace _carapace)

View on Github

6 - Carapace-spec: Define simple completions using a spec file.

Define simple completions using a spec file (based on carapace).

The carapace-spec binary can be used to complete spec files, but carapace-bin is recommended as it supports a range of custom macros.

name: mycmd
description: my command
flags:
  --optarg?: optarg flag
  -r, --repeatable*: repeatable flag
  -v=: flag with value
persistentflags:
  --help: bool flag
completion:
  flag:
    optarg: ["one", "two\twith description", "three\twith style\tblue"]
    v: ["$files"]
commands:
- name: sub
  description: subcommand
  completion:
    positional:
      - ["$list(,)", "1", "2", "3"]
      - ["$directories"]

View on Github

7 - CLI: Feature-rich and easy to use command-line package based on golang struct tags.

Key features

  • Lightweight and easy to use.
  • Defines flags by tag, e.g. flag name (short and/or long), description, default value, password, prompt and so on.
  • Type safety.
  • Output looks very nice.
  • Supports custom Validators.
  • Supports slices and maps as flags.
  • Supports any type as a flag field which implements the cli.Decoder interface.
  • Supports any type as a flag field which uses FlagParser.
  • Suggestions for commands (e.g. hl => help, "veron" => "version").
  • Supports default values for flags, even expressions involving env variables (e.g. dft:"$HOME/dev").
  • Supports an editor like the git commit command (see examples 21 and 22).

Example 1: Hello

back to examples

// main.go
// This is a HelloWorld-like example

package main

import (
	"os"

	"github.com/mkideal/cli"
)

type argT struct {
	Name string `cli:"name" usage:"tell me your name"`
}

func main() {
	os.Exit(cli.Run(new(argT), func(ctx *cli.Context) error {
		argv := ctx.Argv().(*argT)
		ctx.String("Hello, %s!\n", argv.Name)
		return nil
	}))
}

$ go build -o hello
$ ./hello --name Clipher
Hello, Clipher!

Example 2: Flag

back to examples

// main.go
// This example shows basic usage of flags

package main

import (
	"os"

	"github.com/mkideal/cli"
)

type argT struct {
	cli.Helper
	Port int  `cli:"p,port" usage:"short and long format flags both are supported"`
	X    bool `cli:"x" usage:"boolean type"`
	Y    bool `cli:"y" usage:"boolean type, too"`
}

func main() {
	os.Exit(cli.Run(new(argT), func(ctx *cli.Context) error {
		argv := ctx.Argv().(*argT)
		ctx.String("port=%d, x=%v, y=%v\n", argv.Port, argv.X, argv.Y)
		return nil
	}))
}

$ go build -o app
$ ./app -h

  -h, --help     display help information
  -p, --port     short and long format flags both are supported
  -x             boolean type
  -y             boolean type, too
$ ./app -p=8080 -x
port=8080, x=true, y=false
$ ./app -p 8080 -x=true
port=8080, x=true, y=false
$ ./app -p8080 -y true
port=8080, x=false, y=true
$ ./app --port=8080 -xy
port=8080, x=true, y=true
$ ./app --port 8080 -yx
port=8080, x=true, y=true

View on Github

8 - CLI: Simple and complete API for building command line interfaces in Go.

Module cli provides a simple, fast and complete API for building command line applications in Go. In contrast to other libraries the emphasis is put on the definition and validation of positional arguments, handling of options from all levels in a single block as well as a minimalistic set of dependencies.

The core of the module is the command, option and argument parsing logic. After successful parsing, the command action is evaluated, passing a slice of (validated) positional arguments and a map of (validated) options. No more, no less.


co := cli.NewCommand("checkout", "checkout a branch or revision").
  WithArg(cli.NewArg("revision", "branch or revision to checkout")).
  WithOption(cli.NewOption("branch", "Create branch if missing").WithChar('b').WithType(cli.TypeBool)).
  WithOption(cli.NewOption("upstream", "Set upstream for the branch").WithChar('u').WithType(cli.TypeBool)).
  WithAction(func(args []string, options map[string]string) int {
    // do something
    return 0
  })

add := cli.NewCommand("add", "add a remote").
  WithArg(cli.NewArg("remote", "remote to add"))

rmt := cli.NewCommand("remote", "Work with git remotes").
  WithCommand(add)

app := cli.New("git tool").
  WithOption(cli.NewOption("verbose", "Verbose execution").WithChar('v').WithType(cli.TypeBool)).
  WithCommand(co).
  WithCommand(rmt)
  // no action attached, just print usage when executed

os.Exit(app.Run(os.Args, os.Stdout))

View on Github

9 - Climax: Alternative CLI with "human face", in spirit of Go command.

Climax is a handy alternative CLI (command-line interface) for Go apps. Its output looks pretty much exactly like that of the default go command, and it incorporates some of its fancy features. For instance, Climax supports so-called topics (a sort of wiki entry for the CLI). You can also define annotated use cases of a command that will be displayed in that command's help section.

Why create another CLI?

I didn't like the existing solutions (e.g. codegangsta/cli, spf13/cobra), either for their bloated codebases (I dislike huge, complex libraries) or for their poor output style / API. This project is just another view on the subject; it has a slightly different API than, let's say, Cobra, which I find much more convenient.

A sample application output, Climax produces:

Camus is a modern content writing suite.

Usage:

	camus command [arguments]

The commands are:

	init        starts a new project
	new         creates flavored book parts

Use "camus help [command]" for more information about a command.

Additional help topics:

	writing     markdown language cheatsheet
	metadata    intro to yaml-based metadata
	realtime    effective real-time writing

Use "camus help [topic]" for more information about a topic.

View on Github

10 - Clîr: A Simple and Clear CLI library. Dependency free.


  • Nested Subcommands
  • Uses the standard library flag package
  • Auto-generated help
  • Custom banners
  • Hidden Subcommands
  • Default Subcommand
  • Dependency free


package main

import (
	"fmt"

	"github.com/leaanthony/clir"
)

func main() {

	// Create new cli
	cli := clir.NewCli("Flags", "A simple example", "v0.0.1")

	// Name
	name := "Anonymous"
	cli.StringFlag("name", "Your name", &name)

	// Define action for the command
	cli.Action(func() error {
		fmt.Printf("Hello %s!\n", name)
		return nil
	})

	if err := cli.Run(); err != nil {
		fmt.Printf("Error encountered: %v\n", err)
	}
}


Generated Help

$ flags --help
Flags v0.0.1 - A simple example

Flags:

  -help
        Get help on the 'flags' command.
  -name string
        Your name

View on Github

Thank you for following this article.

Related videos:

Build Your First Command Line Tool in Go

#go #golang #cli 

Hunter Krajcik


Build_cli: A Builder That Generates an ArgsParser From A Class

Parse command line arguments directly into an annotation class using the Dart Build System.


Annotate a class with @CliOptions() from package:build_cli_annotations.

import 'package:build_cli_annotations/build_cli_annotations.dart';

part 'example.g.dart';

@CliOptions()
class Options {
  @CliOption(abbr: 'n', help: 'Required. The name to use in the greeting.')
  final String name;

  final bool nameWasParsed;

  late bool yell;

  @CliOption(defaultsTo: Language.en, abbr: 'l')
  late Language displayLanguage;

  @CliOption(negatable: false, help: 'Prints usage information.')
  late bool help;

  Options(this.name, {this.nameWasParsed = false});
}

enum Language { en, es }

Configure and run the Dart Build System and a set of helpers is created to parse the corresponding command line arguments and populate your class.

void main(List<String> args) {
  var options = parseOptions(args);
  if (!options.nameWasParsed) {
    throw new ArgumentError('You must set `name`.');
  }
  // ...
}


Add three packages to pubspec.yaml:

dependencies:
  build_cli_annotations: ^1.0.0

dev_dependencies:
  build_cli: ^1.0.0
  build_runner: ^1.0.0
  • build_cli_annotations is a separate package containing the annotations you add to classes and members to tell build_cli what to do.
    • If the code you're annotating is in a published directory – lib, bin – put it in the dependencies section.
  • build_cli contains the logic to generate the code.
    • It should almost always be put in dev_dependencies.
  • build_runner contains the logic to run a build and generate code.
    • It should almost always be put in dev_dependencies.


Uses package:args under the covers.

More examples:

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add build_cli

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

  build_cli: ^2.2.0

Alternatively, your editor might support dart pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:build_cli/build_cli.dart';


import 'dart:io';

import 'package:build_cli_annotations/build_cli_annotations.dart';

part 'example.g.dart';

/// Annotate your option class with [CliOptions].
@CliOptions()
class Options {
  /// Customize options and flags by annotating fields with [CliOption].
  @CliOption(abbr: 'n', help: 'Required. The name to use in the greeting.')
  final String name;

  /// Name a field `[name]WasParsed` without a [CliOption] annotation and it
  /// will be populated with `ArgResult.wasParsed('name')`.
  final bool nameWasParsed;

  /// [bool] fields are turned into flags.
  /// Fields without the [CliOption] annotation are picked up with simple
  /// defaults.
  late bool yell;

  /// Field names are also "kebab cased" automatically.
  /// This becomes `--display-language`.
  @CliOption(defaultsTo: Language.en, abbr: 'l')
  late Language displayLanguage;

  @CliOption(negatable: false, help: 'Prints usage information.')
  late bool help;

  /// Populates final and non-null fields as long as there are matching
  /// constructor parameters.
  Options(this.name, {this.nameWasParsed = false});
}

/// Enums are a great way to specify options with a fixed set of allowed
/// values.
enum Language { en, es }

void main(List<String> args) {
  Options options;
  try {
    options = parseOptions(args);
    if (!options.nameWasParsed) {
      throw const FormatException('You must provide a name.');
    }
  } on FormatException catch (e) {
    print(e.message);
    _printUsage();
    exitCode = 64;
    return;
  }

  if (options.help) {
    _printUsage();
    return;
  }

  final buffer = StringBuffer();

  switch (options.displayLanguage) {
    case Language.en:
      buffer.write('Hello, ');
      break;
    case Language.es:
      buffer.write('¡Hola, ');
      break;
  }
  buffer.write(options.name);
  buffer.write('!');

  if (options.yell) {
    print(buffer.toString().toUpperCase());
  } else {
    print(buffer);
  }
}

void _printUsage() {
  print('Usage: example/example.dart [arguments]');
}

Download Details:

Author: kevmoo
Source Code: 
License: MIT license

#flutter #dart #cli 

Reid Rohan


Pkg: Package Your Node.js Project into an Executable

Disclaimer: pkg was created for use within containers and is not intended for use in serverless environments. For those using Vercel, this means that there is no requirement to use pkg in your projects as the benefits it provides are not applicable to the platform.

This command line interface enables you to package your Node.js project into an executable that can be run even on devices without Node.js installed.

Use Cases

  • Make a commercial version of your application without sources
  • Make a demo/evaluation/trial version of your app without sources
  • Instantly make executables for other platforms (cross-compilation)
  • Make some kind of self-extracting archive or installer
  • No need to install Node.js and npm to run the packaged application
  • No need to download hundreds of files via npm install to deploy your application. Deploy it as a single file
  • Put your assets inside the executable to make it even more portable
  • Test your app against new Node.js version without installing it


npm install -g pkg

After installing it, run pkg --help without arguments to see the list of options:

pkg [options] <input>


    -h, --help           output usage information
    -v, --version        output pkg version
    -t, --targets        comma-separated list of targets (see examples)
    -c, --config         package.json or any json file with top-level config
    --options            bake v8 options into executable to run with them on
    -o, --output         output file name or template for several files
    --out-path           path to save output one or more executables
    -d, --debug          show more information during packaging process [off]
    -b, --build          don't download prebuilt base binaries, build them
    --public             speed up and disclose the sources of top-level project
    --public-packages    force specified packages to be considered public
    --no-bytecode        skip bytecode generation and include source files as plain js
    --no-native-build    skip native addons build
    --no-dict            comma-separated list of packages names to ignore dictionaries. Use --no-dict * to disable all dictionaries
    -C, --compress       [default=None] compression algorithm = Brotli or GZip


  – Makes executables for Linux, macOS and Windows
    $ pkg index.js
  – Takes package.json from cwd and follows 'bin' entry
    $ pkg .
  – Makes executable for particular target machine
    $ pkg -t node14-win-arm64 index.js
  – Makes executables for target machines of your choice
    $ pkg -t node12-linux,node14-linux,node14-win index.js
  – Bakes '--expose-gc' and '--max-heap-size=34' into executable
    $ pkg --options "expose-gc,max-heap-size=34" index.js
  – Consider packageA and packageB to be public
    $ pkg --public-packages "packageA,packageB" index.js
  – Consider all packages to be public
    $ pkg --public-packages "*" index.js
  – Bakes '--expose-gc' into executable
    $ pkg --options expose-gc index.js
  – reduce size of the data packed inside the executable with GZip
    $ pkg --compress GZip index.js

The entrypoint of your project is a mandatory CLI argument. It may be:

  • Path to entry file. Suppose it is /path/app.js, then packaged app will work the same way as node /path/app.js
  • Path to package.json. Pkg will follow bin property of the specified package.json and use it as entry file.
  • Path to directory. Pkg will look for package.json in the specified directory. See above.


pkg can generate executables for several target machines at a time. You can specify a comma-separated list of targets via --targets option. A canonical target consists of 3 elements, separated by dashes, for example node12-macos-x64 or node14-linux-arm64:

  • nodeRange (node8), node10, node12, node14, node16 or latest
  • platform alpine, linux, linuxstatic, win, macos, (freebsd)
  • arch x64, arm64, (armv6, armv7)

Elements shown in parentheses are unsupported, but you may try to compile them yourself.

You may omit any element (and specify just node14 for example). The omitted elements will be taken from current platform or system-wide Node.js installation (its version and arch). There is also an alias host, that means that all 3 elements are taken from current platform/Node.js. By default targets are linux,macos,win for current Node.js version and arch.

If you want to generate executable for different architectures, note that by default pkg has to run the executable of the target arch to generate bytecodes:

  • Linux: configure binfmt with QEMU.
  • macOS: possible to build x64 on arm64 with Rosetta 2 but not opposite.
  • Windows: possible to build x64 on arm64 with x64 emulation but not opposite.
  • or, disable bytecode generation with --no-bytecode --public-packages "*" --public.

macos-arm64 is experimental. Be careful about the mandatory code signing requirement. The final executable has to be signed (ad-hoc signature is sufficient) with codesign utility of macOS (or ldid utility on Linux). Otherwise, the executable will be killed by kernel and the end-user has no way to permit it to run at all. pkg tries to ad-hoc sign the final executable. If necessary, you can replace this signature with your own trusted Apple Developer ID.

To be able to generate executables for all supported architectures and platforms, run pkg on a Linux host with binfmt (QEMU emulation) configured and ldid installed.


During the packaging process, pkg parses your sources, detects calls to require, traverses the dependencies of your project and includes them in the executable. In most cases you don't need to specify anything manually.

However, your code may have require(variable) calls (a so-called non-literal argument to require) or use non-JavaScript files (for example views, css, images etc.).

require('./build/' + cmd + '.js');
path.join(__dirname, 'views/' + viewName);

Such cases are not handled by pkg. So you must specify the files - scripts and assets - manually in pkg property of your package.json file.

  "pkg": {
    "scripts": "build/**/*.js",
    "assets": "views/**/*",
    "targets": [ "node14-linux-arm64" ],
    "outputPath": "dist"

The above example will include every .js file in build/ and everything in views/, build only for node14-linux-arm64, and place the executable inside dist/.

You may also specify arrays of globs:

    "assets": [ "assets/**/*", "images/**/*" ]

Just be sure to call pkg package.json or pkg . to make use of package.json configuration.


scripts is a glob or list of globs. Files specified as scripts will be compiled using v8::ScriptCompiler and placed into executable without sources. They must conform to the JS standards of those Node.js versions you target (see Targets), i.e. be already transpiled.


assets is a glob or list of globs. Files specified as assets will be packaged into the executable as raw content, without modifications. JavaScript files may also be specified as assets; their sources will not be stripped, which improves execution performance of the files and simplifies debugging.

See also Detecting assets in source code and Snapshot filesystem.


A Node.js application can be called with runtime options (belonging to Node.js or V8). To list them, type node --help or node --v8-options.

You can "bake" these runtime options into packaged application. The app will always run with the options turned on. Just remove -- from option name.

You can specify multiple options by joining them in a single string, comma (,) separated:

pkg app.js --options expose-gc
pkg app.js --options max_old_space_size=4096
pkg app.js --options max-old-space-size=1024,tls-min-v1.0,expose-gc


You may specify --output if you create only one executable or --out-path to place executables for multiple targets.


Pass --debug to pkg to get a log of packaging process. If you have issues with some particular file (seems not packaged into executable), it may be useful to look through the log.

Bytecode (reproducibility)

By default, your source code is precompiled to v8 bytecode before being written to the output file. To disable this feature, pass --no-bytecode to pkg.

Why would you want to do this?

If you need a reproducible build process where your executable hashes (e.g. md5, sha1, sha256, etc.) are the same value between builds. Because compiling bytecode is not deterministic (see here or here) it results in executables with differing hashed values. Disabling bytecode compilation allows a given input to always have the same output.

Why would you NOT want to do this?

While compiling to bytecode does not make your source code 100% secure, it does add a small layer of security/privacy/obscurity to your source code. Turning off bytecode compilation causes the raw source code to be written directly to the executable file. If you're on *nix machine and would like an example, run pkg with the --no-bytecode flag, and use the GNU strings tool on the output. You then should be able to grep your source code.

Other considerations

Specifying --no-bytecode will fail if there are any packages in your project that aren't explicitly marked as public by the license in their package.json. By default, pkg will check the license of each package and make sure that stuff that isn't meant for the public will only be included as bytecode.

If you do require building pkg binaries for other architectures and/or depend on a package with a broken license in its package.json, you can override this behaviour by either explicitly whitelisting packages to be public using --public-packages "packageA,packageB" or setting all packages to public using --public-packages "*"


pkg has so called "base binaries" - they are actually same node executables but with some patches applied. They are used as a base for every executable pkg creates. pkg downloads precompiled base binaries before packaging your application. If you prefer to compile base binaries from source instead of downloading them, you may pass --build option to pkg. First ensure your computer meets the requirements to compile original Node.js:

See pkg-fetch for more info.


Pass --compress Brotli or --compress GZip to pkg to further compress the content of the files stored in the executable.

This option can reduce the size of the embedded file system by up to 60%.

The startup time of the application might be reduced slightly.

-C can be used as a shortcut for --compress.


PKG_CACHE_PATH — used to specify a custom path for the node binaries cache folder. Default is ~/.pkg-cache.
PKG_IGNORE_TAG — allows ignoring the additional folder created in PKG_CACHE_PATH matching the pkg-fetch version.
MAKE_JOB_COUNT — allows configuring the number of processes used for compiling.


# 1 - Using export
export PKG_CACHE_PATH=/my/cache
pkg app.js

# 2 - Passing it before the script
PKG_CACHE_PATH=/my/cache pkg app.js

Usage of packaged app

Command line call to packaged app ./app a b is equivalent to node app.js a b

Snapshot filesystem

During the packaging process, pkg collects project files and places them into the executable. This is called a snapshot. At run time the packaged application has access to the snapshot filesystem, where all those files reside.

Packaged files have /snapshot/ prefix in their paths (or C:\snapshot\ in Windows). If you used pkg /path/app.js command line, then __filename value will be likely /snapshot/path/app.js at run time. __dirname will be /snapshot/path as well. Here is the comparison table of path-related values:

value             | with node       | packaged
process.cwd()     | /project        | /deploy
process.execPath  | /usr/bin/nodejs | /deploy/app-x64

(suppose the app is called app-x64 and run in /deploy)

Hence, in order to make use of a file collected at packaging time (require a javascript file or serve an asset) you should take __filename, __dirname, process.pkg.defaultEntrypoint or require.main.filename as a base for your path calculations. For javascript files you can just require or require.resolve because they use current __dirname by default. For assets use path.join(__dirname, '../path/to/asset'). Learn more about path.join in Detecting assets in source code.

On the other hand, in order to access real file system at run time (pick up a user's external javascript plugin, json configuration or even get a list of user's directory) you should take process.cwd() or path.dirname(process.execPath).

Detecting assets in source code

When pkg encounters path.join(__dirname, '../path/to/asset'), it automatically packages the file specified as an asset. See Assets. Pay attention that path.join must have two arguments and the last one must be a string literal.

This way you may even avoid creating pkg config for your project.

Native addons

The use of native addons (.node files) is supported. When pkg encounters a .node file in a require call, it will package it like an asset. In some cases (like with the bindings package), the module path is generated dynamically and pkg won't be able to detect it. In this case, you should add the .node file directly to the assets field in package.json.

The way Node.js requires a native addon is different from a classic JS file: it needs a file on disk to load, but pkg only generates one file. To circumvent this, pkg will create a temporary file on disk. These files stay on disk after the process has exited and are used again on the next process launch.

When a package that contains a native module is installed, the native module is compiled against the current system-wide Node.js version. Then, when you compile your project with pkg, pay attention to the --target option. You should specify the same Node.js version as your system-wide Node.js to make the compiled executable compatible with .node files.

Note that fully static Node binaries are not capable of loading native bindings, so you may not use Node bindings with linuxstatic.


const { exec } = require('pkg')

exec(args) takes an array of command line arguments and returns a promise. For example:

await exec(['app.js', '--target', 'host', '--output', 'app.exe']);
// do something with app.exe, run, test, upload, deploy, etc


Error: ENOENT: no such file or directory, uv_chdir

This error can be caused by deleting the directory the application is run from. Or, generally, deleting process.cwd() directory when the application is running.


This error can be caused by using the NODE_OPTIONS variable to force Node to run with debug mode enabled. Debugging options are disallowed, as pkg executables are usually used in production environments. If you do need to use the inspector, you can build a debuggable Node.js yourself.

Error: require(...).internalModuleStat is not a function

This error can be caused by using the NODE_OPTIONS variable with bootstrap or node options that conflict with pkg. Some IDEs, such as VS Code, may add this environment variable automatically.

You could check on Unix systems (Linux/macOS) in bash:

$ printenv | grep NODE
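The same check can be sketched in JavaScript at application startup (hasNodeOptions is a hypothetical helper; the warning text is illustrative):

```javascript
// True when NODE_OPTIONS carries a non-empty value.
function hasNodeOptions(env) {
  return typeof env.NODE_OPTIONS === 'string' && env.NODE_OPTIONS.trim() !== '';
}

if (hasNodeOptions(process.env)) {
  console.warn(`NODE_OPTIONS is set: ${process.env.NODE_OPTIONS}`);
}
```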


Exploring the virtual file system embedded in debug mode

When you build your executable with the --debug flag, pkg adds the ability to display the content of the virtual file system and the symlink table on the console when the application starts, provided that the environment variable DEBUG_PKG is set. This feature can be useful to inspect whether symlinks are correctly handled and to check that all the files required by your application are properly incorporated into the final executable.

$ pkg --debug app.js -o output
$ DEBUG_PKG=1 output


C:\> pkg --debug app.js -o output.exe
C:\> set DEBUG_PKG=1
C:\> output.exe

Note: make sure not to use --debug flag in production.

Download Details:

Author: Vercel
Source Code: 
License: MIT license

#javascript #nodejs #cli 

Pkg: Package Your Node.js Project into an Executable
Mike  Kozey

Mike Kozey


Open_mustang_cli: A Package Should Be installed As Global Binary

Mustang CLI


  • Run the following command to install or update the cli
  dart pub global activate open_mustang_cli



  omcli # prints help

Create the screen and model files

  # use routes/booking to create screen files inside sub-directory routes
  omcli -s booking

Create a model file

  omcli -m vehicle

Generate framework source files

  # Run this inside the root directory of a Flutter project
  # -w enables watch mode. Use -d for one time generation
  omcli -w 

Clean generated framework source files

  # Run this inside the root directory of a Flutter project
  omcli -d 

Config file (Advanced)

Source templates that this tool generates can be customized using a config file.

  • Create file name mustang.yaml in the root of the project directory
  • Config file format
  serializer: package:mypackage/mypackage_exports.dart
      - package:my_widgets/widgets.dart
    progress_widget: MyProgressIndicatorScreen()
    error_widget: MyErrorScreen()

Use this package as an executable

Install it

You can install the package from the command line:

dart pub global activate open_mustang_cli

Use it

The package has the following executables:

$ omcli

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add open_mustang_cli

With Flutter:

 $ flutter pub add open_mustang_cli

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

  open_mustang_cli: ^1.0.17

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:open_mustang_cli/open_mustang_cli.dart';


void main() {
}

Download Details:

Author: Getwrench
Source Code: 
License: BSD-3-Clause license

#flutter #dart #cli 

Open_mustang_cli: A Package Should Be installed As Global Binary

CZ-cli: The Commitizen Command Line Utility

Commitizen for contributors

When you commit with Commitizen, you'll be prompted to fill out any required commit fields at commit time. No more waiting until later for a git commit hook to run and reject your commit (though that can still be helpful). No more digging through to find what the preferred format is. Get instant feedback on your commit message formatting and be prompted for required fields.      

Installing the command line tool

Commitizen is currently tested against Node.js 12, 14, & 16, although it may work in older versions of Node.js. You should also have npm 6 or greater.

Installation is as simple as running the following command (if you see an EACCES error, reading up on fixing npm permissions may help):

npm install -g commitizen

Using the command line tool

If your repo is Commitizen friendly:

Simply use git cz or just cz instead of git commit when committing. You can also use git-cz, which is an alias for cz.

Alternatively, if you are using npm 5.2+ you can use npx instead of installing globally:

npx cz

or as an npm script:

  "scripts": {
    "commit": "cz"
  }

When you're working in a Commitizen-friendly repository, you'll be prompted to fill in any required fields, and your commit messages will be formatted according to the standards defined by project maintainers.

Add and commit with Commitizen

If your repo is NOT Commitizen friendly:

If you're not working in a Commitizen-friendly repository, then git cz will work just the same as git commit, but npx cz will use the streamich/git-cz adapter. To fix this, you need to first make your repo Commitizen friendly.

Making your repo Commitizen friendly

For this example, we'll be setting up our repo to use AngularJS's commit message convention, also known as conventional-changelog.

First, install the Commitizen CLI tools:

npm install commitizen -g

Next, initialize your project to use the cz-conventional-changelog adapter by typing:

commitizen init cz-conventional-changelog --save-dev --save-exact

Or if you are using Yarn:

commitizen init cz-conventional-changelog --yarn --dev --exact

Note that if you want to force install over the top of an old adapter, you can apply the --force argument. For more information on this, just run commitizen help.

The above command does three things for you:

  1. Installs the cz-conventional-changelog adapter npm module
  2. Saves it to package.json's dependencies or devDependencies
  3. Adds the config.commitizen key to the root of your package.json file as shown here:
  "config": {
    "commitizen": {
      "path": "cz-conventional-changelog"
    }
  }

Alternatively, Commitizen configs may be added to a .czrc file:

{
  "path": "cz-conventional-changelog"
}

This just tells Commitizen which adapter we actually want our contributors to use when they try to commit to this repo.

commitizen.path is resolved via require.resolve and supports:

  • npm modules
  • directories relative to process.cwd() containing an index.js file
  • file base names relative to process.cwd() with .js extension
  • full relative file names
  • absolute paths

Please note that in the previous version of Commitizen we used czConfig. czConfig has been deprecated, and you should migrate to the new format before Commitizen 3.0.0.

Optional: Install and run Commitizen locally

Installing and running Commitizen locally allows you to make sure that developers are running the exact same version of Commitizen on every machine.

Install Commitizen with npm install --save-dev commitizen.

On npm 5.2+ you can use npx to initialize the conventional changelog adapter:

npx commitizen init cz-conventional-changelog --save-dev --save-exact

For previous versions of npm (< 5.2) you can execute ./node_modules/.bin/commitizen or ./node_modules/.bin/cz in order to actually use the commands.

You can then initialize the conventional changelog adapter using: ./node_modules/.bin/commitizen init cz-conventional-changelog --save-dev --save-exact

And you can then add some nice npm scripts in your package.json file pointing to the local version of Commitizen:

  "scripts": {
    "commit": "cz"

This will be more convenient for your users because then if they want to do a commit, all they need to do is run npm run commit and they will get the prompts needed to start a commit!

NOTE: If you are using precommit hooks thanks to something like husky, you will need to name your script something other than "commit" (e.g. "cm": "cz"). The reason is because npm scripts has a "feature" where it automatically runs scripts with the name prexxx where xxx is the name of another script. In essence, npm and husky will run "precommit" scripts twice if you name the script "commit", and the workaround is to prevent the npm-triggered precommit script.
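For example, the renamed script could look like this in package.json ("cm" is just one possible name):

```json
{
  "scripts": {
    "cm": "cz"
  }
}
```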

Optional: Running Commitizen on git commit

This example shows how to incorporate Commitizen into the existing git commit workflow by using git hooks and the --hook command-line option. This is useful for project maintainers who wish to ensure the proper commit format is enforced on contributions from those unfamiliar with Commitizen.

Once either of these methods is implemented, users running git commit will be presented with an interactive Commitizen session that helps them write useful commit messages.

NOTE: This example assumes that the project has been set up to use Commitizen locally.

Traditional git hooks

Update .git/hooks/prepare-commit-msg with the following code:

exec < /dev/tty && node_modules/.bin/cz --hook || true


For husky users, add the following configuration to the project's package.json file:

"husky": {
  "hooks": {
    "prepare-commit-msg": "exec < /dev/tty && npx cz --hook || true"
  }
}

Why exec < /dev/tty? By default, git hooks are not interactive. This command allows the user to use their terminal to interact with Commitizen during the hook.

Congratulations! Your repo is Commitizen friendly. Time to flaunt it!

Add the "Commitizen friendly" badge to your README using the following markdown:

[![Commitizen friendly](](

Your badge will look like this:

Commitizen friendly

It may also make sense to change your README.md or CONTRIBUTING.md files to include or link to the Commitizen project so that your new contributors may learn more about installing and using Commitizen.

Conventional commit messages as a global utility

Install commitizen globally, if you have not already.

npm install -g commitizen

Install your preferred commitizen adapter globally (for example cz-conventional-changelog).

npm install -g cz-conventional-changelog

Create a .czrc file in your home directory, with path referring to the preferred, globally-installed, commitizen adapter

echo '{ "path": "cz-conventional-changelog" }' > ~/.czrc

You are all set! Now cd into any git repository and use git cz instead of git commit, and you will find the commitizen prompt.

Pro tip: You can use all the git commit options with git cz. For example: git cz -a.

If your repository is a Node.js project, making it Commitizen friendly is super easy.

If your repository is already Commitizen friendly, the local commitizen adapter will be used, instead of globally installed one.

Commitizen for multi-repo projects

As a project maintainer of many projects, you may want to standardize on a single commit message format for all of them. You can create your own node module which acts as a front-end for Commitizen.

1. Create your own entry point script

// my-cli.js

#!/usr/bin/env node
"use strict";

const path = require('path');
const bootstrap = require('commitizen/dist/cli/git-cz').bootstrap;

bootstrap({
  cliPath: path.join(__dirname, '../../node_modules/commitizen'),
  // this is new
  config: {
    "path": "cz-conventional-changelog"
  }
});

2. Add the script to your package.json file

// package.json

{
  "name": "company-commit",
  "bin": "./my-cli.js",
  "dependencies": {
    "commitizen": "^2.7.6",
    "cz-conventional-changelog": "^1.1.5"
  }
}

3. Publish it to npm and use it!

npm install --save-dev company-commit



We know that every project and build process has different requirements, so we've tried to keep Commitizen open for extension. You can do this by choosing from any of the pre-built adapters or even by building your own. Here are some of the great adapters available to you:

To create an adapter, just fork one of these great adapters and modify it to suit your needs. We pass you an instance of Inquirer.js, but you can capture input using whatever means necessary. Just call the commit callback with a string and we'll be happy. Publish it to npm, and you'll be all set!

Retrying failed commits

As of version 2.7.1, you may attempt to retry the last commit using the git cz --retry command. This can be helpful when you have tests set up to run via a git precommit hook. In this scenario, you may have attempted a Commitizen commit, painstakingly filled out all of the commitizen fields, but your tests fail. In previous Commitizen versions, after fixing your tests, you would be forced to fill out all of the fields again. Enter the retry command. Commitizen will retry the last commit that you attempted in this repo without you needing to fill out the fields again.

Please note that the retry cache may be cleared when upgrading Commitizen versions, upgrading adapters, or if you delete the commitizen.json file in your home or temp directory. Additionally, the commit cache uses the filesystem path of the repo, so if you move a repo or change its path, you will not be able to retry a commit. This is an edge case but might be confusing if you have scenarios where you are moving folders that contain repos.

It is important to note that if you are running cz from an npm script (let's say it is called commit) you will need to do one of the following:

  • Pass -- --retry as an argument for your script. i.e: npm run commit -- --retry
  • Use npx to find and call the cz executable directly. i.e: npx cz --retry

Note that the second option does not require you to pass -- before the args, but the first does.

Commitizen for project maintainers

As a project maintainer, making your repo Commitizen friendly allows you to select pre-existing commit message conventions or to create your own custom commit message convention. When a contributor to your repo uses Commitizen, they will be prompted for the correct fields at commit time.

Go further

Commitizen is great on its own, but it shines when you use it with some other amazing open source tools. Kent C. Dodds shows you how to accomplish this in his series, How to Write an Open Source JavaScript Library. Many of the concepts can be applied to non-JavaScript projects as well.


About Commitizen

Commitizen is an open source project that helps contributors be good open source citizens. It accomplishes this by prompting them to follow commit message conventions at commit time. It also empowers project maintainers to create or use predefined commit message conventions in their repos to better communicate their expectations to potential contributors.

Commitizen or Commit Hooks

Both! Commitizen is not meant to be a replacement for git commit hooks. Rather, it is meant to work side-by-side with them to ensure a consistent and positive experience for your contributors. Commitizen treats the commit command as a declarative action. The contributor is declaring that they wish to contribute to your project. It is up to you as the maintainer to define what rules they should be following.

We accomplish this by letting you define which adapter you'd like to use in your project. Adapters just allow multiple projects to share the same commit message conventions. A good example of an adapter is the cz-conventional-changelog adapter.

Related projects

Authors and Contributors

@JimTheDev (Jim Cummins, author) @kentcdodds @accraze @kytwb @Den-dp

Special thanks to @stevelacy, whose gulp-git project makes commitizen possible.

Download Details:

Author: Commitizen
Source Code: 
License: MIT license

#javascript #node #git #cli 

CZ-cli: The Commitizen Command Line Utility

Remove Unwanted Files & Directories From Your Node_modules Folder


Remove unwanted files and directories from your node_modules folder 

This documentation is for ModClean 2.x which requires Node v6.9+, if you need to support older versions, use ModClean 1.3.0 instead.

ModClean is a utility that finds and removes unnecessary files and folders from your node_modules directory based on predefined and custom glob patterns. This utility comes with both a CLI and a programmatic API to provide customization for your environment. ModClean is used and tested in an Enterprise environment on a daily basis.


There are a few different reasons why you would want to use ModClean:

  • Committing Modules. In some environments (especially Enterprise), it's required to commit the node_modules directory with your application into version control. This is due to compatibility, vetting and vulnerability scanning rules for open source software. This can lead to issues with project size, checking out/pulling changes, and the infamous 255-character path limit if you're unlucky enough to be on Windows or SVN.
  • Wasted space on your server. Why waste space on your server with files not needed by you or the modules?
  • Packaged applications. If you're required to package your application, you can reduce the size of the package quickly by removing unneeded files.
  • Compiled applications. Other tools like, NW.js and Electron make it easy to create cross-platform desktop apps, but depending on the modules, your app can become huge. Reduce down the size of the compiled application before shipping and make it faster for users to download.
  • Save space on your machine. Depending on the amount of global modules you have installed, you can reduce their space by removing those gremlin files.
  • and much more!

The :cake: is a lie, but the Benchmarks are not.


New! In ModClean 2.0.0, patterns are now provided by plugins instead of a static patterns.json file as part of the module. By default, ModClean comes with modclean-patterns-default installed, providing the same patterns as before. You now have the ability to create your own patterns plugins and use multiple plugins to clean your modules. This allows flexibility with both the programmatic API and CLI.

ModClean scans the node_modules directory of your choosing, finding all files and folders that match the defined patterns and deleting them. Both the CLI and the programmatic API provide all the options needed to customize this process to your requirements. Depending on the number of modules your app requires, files can be reduced anywhere from hundreds to thousands and disk space can be reduced considerably.

(File and disk space reduction can also be different between the version of NPM and Operating System)

IMPORTANT: This module has been heavily tested in an enterprise environment on large enterprise applications. The provided patterns in modclean-patterns-default have worked very well when cleaning up useless files in many popular modules. There are hundreds of thousands of modules in NPM and I cannot simply cover them all. If you are using ModClean for the first time on your application, you should create a copy of the application so you can ensure it still runs properly after running ModClean. The patterns are set in a way to ensure no crucial module files are removed, although there could be one-off cases where a module could be affected, which is why I stress that testing and backups are important. If you find any files that should be removed, please create a pull request to modclean-patterns-default or create your own patterns plugin to share with the community.

Removal Benchmark

So how well does this module work? If we npm install sails and run ModClean on it, here are the results:

All tests ran on macOS 10.12.3 with Node v6.9.1 and NPM v4.0.5

Using Default Safe Patterns

modclean -n default:safe or modclean

                  Total Files   Total Folders   Total Size
  Before ModClean      16,179           1,941     71.24 MB
  After ModClean       12,192           1,503     59.35 MB
  Reduced               3,987             438     11.88 MB

Using Safe and Caution Patterns

modclean -n default:safe,default:caution

                  Total Files   Total Folders   Total Size
  Before ModClean      16,179           1,941     71.24 MB
  After ModClean       11,941           1,473     55.28 MB
  Reduced               4,238             468     15.95 MB

Using Safe, Caution and Danger Patterns

modclean --patterns="default:*"

                  Total Files   Total Folders   Total Size
  Before ModClean      16,179           1,941     71.24 MB
  After ModClean       11,684           1,444     51.76 MB
  Reduced               4,495             497     19.47 MB

That makes a huge difference in the amount of files and disk space.

View additional benchmarks on the Wiki: Benchmarks. If you would like to run some of your own benchmarks, you can use modclean-benchmark.


Install locally

npm install modclean --save

Install globally (CLI)

npm install modclean -g

Read the CLI Documentation

Read the API Documentation

Read the Custom Patterns Plugin Documentation


If you find any bugs with either ModClean or the CLI utility, please feel free to open an issue. Feature requests may also be posted in the issues.

Download Details:

Author: ModClean
Source Code: 
License: MIT license

#javascript #node #cli #clean 

Remove Unwanted Files & Directories From Your Node_modules Folder
Mike  Kozey

Mike Kozey


Accemus: A CLI tool for Fetching Substitution Plans From DSB/DSBMobile


Tool for fetching substitution plans from DSB/DSBMobile and parsing Untis HTML, as well as debugging dsbuntis.

More information:

  • List your substitution plans in the current dsbuntis JSON format:

accemus {{187801}} {{public}}

  • List your substitution plans and print all HTTP requests made while doing so:

accemus --log-requests {{id}} {{password}}

  • Only log into DSB and print the session:

accemus --login-only {{id}} {{password}}

  • Get the Timetable JSON of your substitution plans from DSB:

accemus --timetable-json {{id}} {{password}}

  • Try to get substitutions from another DSB server and, if it fails, print full stack traces for debugging:

accemus --endpoint={{}} --preview-endpoint={{}} --stack-traces {{id}} {{password}}


dart pub global activate accemus

Use this package as an executable

Install it

You can install the package from the command line:

dart pub global activate accemus

Use it

The package has the following executables:

$ accemus

Download Details:

Author: Ampless
Source Code: 
License: MPL-2.0 license

#flutter #dart #cli 

Accemus: A CLI tool for Fetching Substitution Plans From DSB/DSBMobile

Tool To Generate Different Types Of React Components From The Terminal

(Introduction article v1) 🛠WIP v2

How much time do you spend copying and pasting the component folder to create a new one?
This is a tool to generate different types of React components from the terminal.

Available extension 

What can you do with this tool?



$ npm install -g create-component-app


$ cd ~/my-projects
$ create-component-app

Create your components guided from terminal with a lot of choices

  • Create different kind of components:
    • stateless
    • class
    • pure
    • custom
  • Set name of the new component
  • Integrate connect function of redux
  • Include an index file
  • Set a different component extension
    • js
    • jsx
  • Set a different style extension
    • css
    • scss
    • sass
    • less
  • Include a storybook file
  • Include a test file (with enzyme)
  • Set the destination path of the new component

You can create a configuration file in your current project directory

Create-component-app uses cosmiconfig for configuration file support. This means you can configure cca via:

  • A .ccarc file, written in YAML or JSON, with optional extensions: .yaml/.yml/.json.
  • A cca.config.js file that exports an object.
  • A "cca" key in your package.json file.

The configuration file will be resolved starting from the root of your project, and searching up the file tree until a config file is (or isn't) found.

Basic Configuration

An example configuration file can be found here: .ccarc.example, you can use this file by copying it to the root of your project.

Currently supported options are:

  • type: Default type of the component ["stateless", "class", "pure"]
  • templatesDirPath: Default path to get the templates from the custom templates folder
  • path: Default path to create the component file and folder
  • jsExtension: Default extension for your javascript files ["js", "jsx"]
  • cssExtension: Default extension for your css files ["css", "scss", "sass", "less", false]. Set to false if you don't want a style file
  • includeTests: Default flag to include a test file in the folder [true, false]
  • includeStories: Default flag to include a storybook file in the folder [true, false]
  • indexFile: Default flag to create an index file in the folder [false, true]
  • connected: Default flag to integrate redux connect in the index file [false, true]
  • componentMethods: Only for "class" and "pure"; insert methods inside the component (e.g. ["componentDidMount", "shouldComponentUpdate", "onClick"]). render and constructor will always be included.
  • fileNames: Choose the specific filename for your component's files (COMPONENT_NAME will be replaced)
  • fileNames.testFileName: Specify the file name of your test file
  • fileNames.componentFileName: Specify the component file name
  • fileNames.styleFileName: Specify the style file name. IMPORTANT: include cssExtension.

You can also pass a config file

  1. Create a JSON file, e.g. config.json
  2. Pass its path via the config param:
$ create-component-app --config path/to/your/config.json

Passing a config file via the CLI overrides the configuration file loaded by cosmiconfig

You can pass params from the command line

$ create-component-app --path path/destination

Passing a param via the CLI overrides the configuration file loaded by cosmiconfig

You can use your own custom templates

Simple steps to create your own templates docs/custom-templates

You can use templates from the community

Now, the first question you receive is "Do you want to choose a template?". If you answer yes, you will see the list of templates from the community.


  • (Optional) Add to the settings templatesDirPath - a custom path to the user custom templates folder.
  • (Optional) Add to the settings templates - a list of used templates (with a default) to filter the list
  • (Optional) The user can choose between the available templates or use create-component-app -t templateName


Now, the community can offer their templates! How?

Check the issue list to contribute on some activities or to suggest new features! The library is open to everybody; contribute and improve your skills.

create-component-app is maintained under the Semantic Versioning guidelines.

Use npm run watch while coding.


Download Details:

Author: CVarisco
Source Code: 
License: MIT license

#javascript #create #component #react #cli 

Tool To Generate Different Types Of React Components From The Terminal
Mike  Kozey

Mike Kozey


Dcli_scripts: A Collection Of Cli Scripts That Do Odd Jobs for Me

An eclectic collection of CLI scripts that help me manage my dev environment.

Some of my favourites:


  • dport: finds what process has a tcp port open. Example: dport 80
  • clean: cleans out stale docker and git files and highlights large directories. Example: clean
  • dmailhog: installs and starts/stops mailhog. Example: dmailhog | dmailhog --shutdown
  • dmysql: backup/restore and connect to a mysql cli, pulling settings (username/password) from a per-database local settings file. Example: dmysql mydb backup | dmysql mydb restore <path>
  • dwhich: an improved which command that also highlights invalid paths
  • gitgc: runs garbage collection on all your git projects
  • hog: finds system resource hogs
  • ipaddr: shows the ip address of your local machine
  • docker_push: builds a docker file and pushes it to docker.hub
  • kill_tomcat: kills any java tomcat instances
  • pub_get_all: recursively runs dart pub get
  • hex_dump: dumps the contents of a file in hex and ascii
  • find_text: finds a file that contains the given text


Recursively searches for a file with a matching glob pattern


dfind '*.dart'


Prints out the name of the process that is listening on the passed tcp port.


dport 80


Allows you to store the username/password for a mysql database in a configuration file and then run a number of mysql commands against that database without continuously re-entering the username/password.


To create a configuration file for a given database run:

dmysql config <dbname>
 host: <enter host>
 port: <enter port>
 user: <enter user>
 password: <enter password>

Once you have created a config you can run any of the following commands.


Backup the database via:

dmysql backup <path to backup file>


Restore a database

WARNING: this will delete your existing schema.

dmysql restore <path to backup file>


Connect to the mysql cli

dmysql cli <database name>


Lists each file that matches the passed text. The search is run recursively from the current directory.


Search all dart files for a line that contains 'final String'.

find_text 'final String' '*.dart'


The dcli_scripts package also includes some handy apis.


Designed to build and publish a docker image which contains a Dart project built with dcli.

The api assumes that you are cloning a git repo into your docker image and that you need to rebuild your image each time the git repo changes.

The api allows you to rebuild your docker image from the clone step rather than having to rebuild the entire docker image.

Your docker file should have the following line just before the package's git clone line.

RUN mkdir -p /BUILD_TOKEN/

We will run a:

  • dcli pack
  • git add *
  • git commit -m 'release'
  • git push

You need to provide the path to your dockerfile via [pathToDockerFile].

The docker tag will be generated from your pubspec.yaml and the [repository] argument in the form:


So if you pass in 'noojee' as the repository and your package is dcli then you might get:


If you pass [clean] = true then the image will be rebuilt from scratch.

If you pass [fresh] = true then the Docker image will be rebuilt from the line that contains BUILD_TOKEN.

We search for the BUILD_TOKEN line in your docker file and update the token UUID. This will cause the docker image to be rebuilt from the BUILD_TOKEN line. This can be used if you need to re-clone a git repo (or any similar action).

By default the image will be pushed to docker hub unless you pass [push] = false.

By default we ask you to confirm the build process. Pass [confirm] = false to skip the question.

If you pass [pack] = true then the 'dcli pack' command will be run and any changes committed to your git repo before the build starts. If you pass [pack] = true then [fresh] will automatically be set to true to force a fresh git clone.

Here is an example Dockerfile that builds for an arm64 target ( I use this for raspberry pi testing.)

# used to build the dart exes in a docker arm image
# trying to build dart execs on a pi is just too slow
# hence we do the build in a docker image on our
# development box.

# docker image instructions came from.

# FROM balenalib/raspberrypi4-64-ubuntu:latest
FROM balenalib/raspberrypi4-64-ubuntu-openjdk:latest
# replace this with your application

# install build tools
# && apt install --no-install-recommends -y openjdk-8-jdk-headless maven git \

RUN apt update \
    && apt install --no-install-recommends -y \
    wget \
    git \
    maven \
    unzip \
    && rm -rf /var/lib/apt/lists/*

RUN wget
RUN unzip

# add dart to the path.
ENV PATH="$PATH:/dart-sdk/bin"

RUN mkdir -p /BUILD_TOKEN/
RUN git clone

WORKDIR IrrigationForPi/build_tools

RUN dart pub get
RUN dart bin/pig_build.dart --current --no-tools --no-full

Use this package as an executable

Install it

You can install the package from the command line:

dart pub global activate dcli_scripts

Use it

The package has the following executables:

$ add_gnome_launcher
$ artifactory
$ bobthefish
$ certbot_renew
$ clean
$ dcompress
$ dcopydir
$ denv
$ dfind
$ dmailhog
$ dmysql
$ dnsflush
$ docker_dcli
$ docker_push
$ downit
$ dpath
$ dport
$ dreplace
$ dsetver
$ dsort
$ dvirtualbox
$ dwc
$ dwhich
$ dzfs_clean
$ eclipse_launcher
$ find_old_dart_packages
$ find_text
$ gitadd_team_to_repository
$ gitcreation_date
$ gitgc
$ gitsyncfork
$ gituncommited
$ gitupdate_remote
$ hexdump
$ hog
$ install_flutter
$ ipaddr
$ kill_tomcat
$ pub_get_all

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add dcli_scripts

With Flutter:

 $ flutter pub add dcli_scripts

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

  dcli_scripts: ^1.3.0

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:dcli_scripts/dcli_scripts.dart';

Download Details:

Author: onepub-dev
Source Code: 
License: MIT license

#flutter #dart #cli #script 


Ponzu: Headless CMS with Automatic JSON API


Ponzu is a powerful and efficient open-source HTTP server framework and CMS. It provides automatic, free, and secure HTTP/2 over TLS (certificates obtained via Let's Encrypt), a useful CMS and scaffolding to generate content editors, and a fast HTTP API on which to build modern applications.

Ponzu is released under the BSD-3-Clause license (see LICENSE). (c) Boss Sauce Creative, LLC


With the rise in popularity of web/mobile apps connected to JSON HTTP APIs, better tools to support the development of content servers and management systems are necessary. Ponzu fills the void where you want to reach for WordPress to get a great CMS, or Rails for rapid development, but need a fast JSON response in a high-concurrency environment.

Because you want to turn this:

$ ponzu gen content song title:"string" artist:"string" rating:"int" opinion:"string":richtext spotify_url:"string"

Into this:

Generated content/song.go

What's inside

  • Automatic & Free SSL/TLS1
  • HTTP/2 and Server Push
  • Rapid development with CLI-controlled code generators
  • User-friendly, extensible CMS and administration dashboard
  • Simple deployment - single binary + assets, embedded DB (BoltDB)
  • Fast, helpful framework while maintaining control

1 TLS:

  • Development: self-signed certificates auto-generated
  • Production: auto-renewing certificates fetched from Let's Encrypt


For more detailed documentation, check out the docs


$ go get -u


Go 1.8+

Since HTTP/2 Server Push is used, Go 1.8+ is required. However, it is not required of clients connecting to a Ponzu server to make HTTP/2 requests.


$ ponzu command [flags] <params>



new

Creates a project directory of the name supplied as a parameter immediately following the 'new' option in the $GOPATH/src directory. Note: 'new' depends on the program 'git' and possibly a network connection. If there is no local repository to clone from at the local machine's $GOPATH, 'new' will attempt to clone the '' package over the network.


$ ponzu new
> New ponzu project created at $GOPATH/src/

Errors will be reported, but successful commands return nothing.

generate, gen, g

Generate boilerplate code for various Ponzu components, such as content.


            generator      struct fields and built-in types...
             |              |
             v              v    
$ ponzu gen content review title:"string" body:"string":richtext rating:"int"
                     ^                                   ^
                     |                                   |
                    struct type                         (optional) input view specifier

The command above will generate the file content/review.go with boilerplate methods, as well as a struct definition and corresponding field tags like:

type Review struct {
    Title  string `json:"title"`
    Body   string `json:"body"`
    Rating int    `json:"rating"`
}

The generate command will intelligently parse more sophisticated field names such as 'field_name' and convert it to 'FieldName' and vice versa, only where appropriate as per common Go idioms. Errors will be reported, but successful generate commands return nothing.
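The snake_case to CamelCase conversion can be illustrated with a short Go helper. This is not Ponzu's actual generator code, just a sketch of the naming rule (the real generator also knows common Go initialisms such as ID and URL):

```go
package main

import (
	"fmt"
	"strings"
)

// fieldName converts a snake_case CLI field name into an exported Go
// struct field name, e.g. "field_name" -> "FieldName".
func fieldName(s string) string {
	parts := strings.Split(s, "_")
	for i, p := range parts {
		if p == "" {
			continue
		}
		parts[i] = strings.ToUpper(p[:1]) + p[1:]
	}
	return strings.Join(parts, "")
}

func main() {
	fmt.Println(fieldName("field_name")) // FieldName
	fmt.Println(fieldName("rating"))     // Rating
}
```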

Input View Specifiers (optional)

The CLI can optionally parse a third parameter on the fields provided to generate the type of HTML view an editor field is presented within. If no third parameter is added, a plain text HTML input will be generated. In the example above, the argument shown as body:"string":richtext would show the Richtext input instead of a plain text HTML input (as shown in the screenshot). The following input view specifiers are implemented:

CLI parameter    Generates
custom           generates a pre-styled empty div to fill with HTML
hidden           editor.Input() + uses type=hidden
input, text      editor.Input()


build

From within your Ponzu project directory, running build will copy and move the necessary files from your workspace into the vendored directory, and will build/compile the project to then be run.

Optional flags:

  • --gocmd sets the binary used when executing go build within ponzu build step


$ ponzu build
$ ponzu build --gocmd=go1.8rc1 # useful for testing

Errors will be reported, but successful build commands return nothing.


run

Starts the HTTP server for the JSON API, Admin System, or both. The segments, separated by a comma, describe which services to start, either 'admin' (Admin System / CMS backend) or 'api' (JSON API), and, optionally, if the server should utilize TLS encryption, served over HTTPS, which is automatically managed using Let's Encrypt.

Optional flags:

  • --port sets the port on which the server listens for HTTP requests [defaults to 8080]
  • --https-port sets the port on which the server listens for HTTPS requests [defaults to 443]
  • --https enables auto HTTPS management via Let's Encrypt (port is always 443)
  • --dev-https generates self-signed SSL certificates for development-only (port is 10443)


$ ponzu run
$ ponzu run --port=8080 --https admin,api
$ ponzu run admin
$ ponzu run --port=8888 api
$ ponzu run --dev-https

Defaults to $ ponzu run --port=8080 admin,api (running Admin & API on port 8080, without TLS)

Note: Admin and API cannot run on separate processes unless you use a copy of the database, since the first process to open it receives a lock. If you intend to run the Admin and API on separate processes, you must call them with the 'ponzu' command independently.


upgrade

Will back up your own custom project code (like content, add-ons, uploads, etc.) so we can safely re-clone Ponzu from the latest version you have or from the network if necessary. Before running $ ponzu upgrade, you should update the ponzu package by running $ go get -u


$ ponzu upgrade

add, a

Downloads an add-on to $GOPATH/src and copies it to the Ponzu project's ./addons directory. Must be called from within a Ponzu project directory.


$ ponzu add

Errors will be reported, but successful add commands return nothing.

version, v

Prints the version of Ponzu your project is using. Must be called from within a Ponzu project directory. By passing the --cli flag, the version command will print the version of the Ponzu CLI you have installed.


$ ponzu version
> Ponzu v0.8.2
$ ponzu version --cli
> Ponzu v0.9.2


  1. Checkout branch ponzu-dev
  2. Make code changes
  3. Test changes to ponzu-dev branch
    • make a commit to ponzu-dev
    • to manually test, you will need to use a new copy (ponzu new path/to/code), but pass the --dev flag so that ponzu generates a new copy from the ponzu-dev branch, not master by default (i.e. $ ponzu new --dev /path/to/code)
    • build and run with $ ponzu build and $ ponzu run
  4. To add back to master:
    • first push to origin ponzu-dev
    • create a pull request
    • will then be merged into master

A typical contribution workflow might look like:

# clone the repository and checkout ponzu-dev
$ git clone path/to/local/ponzu # (or your fork)
$ git checkout ponzu-dev

# install ponzu with go get or from your own local path
$ go get
# or
$ cd /path/to/local/ponzu 
$ go install ./...

# edit files, add features, etc
$ git add -A
$ git commit -m 'edited files, added features, etc'

# now you need to test the feature.. make a new ponzu project, but pass --dev flag
$ ponzu new --dev /path/to/new/project # will create $GOPATH/src/path/to/new/project

# build & run ponzu from the new project directory
$ cd /path/to/new/project
$ ponzu build && ponzu run

# push to your origin:ponzu-dev branch and create a PR at ponzu-cms/ponzu
$ git push origin ponzu-dev
# ... go to and create a PR

Note: if you intend to work on your own fork and contribute from it, you will need to also pass --fork=path/to/your/fork (using OS-standard filepath structure), where path/to/your/fork must be within $GOPATH/src, and you are working from a branch called ponzu-dev.

For example:

# ($GOPATH/src is implied in the fork path, do not add it yourself)
$ ponzu new --dev /path/to/new/project



The Go gopher was designed by Renee French. The design is licensed under the Creative Commons 3.0 Attribution license. Read this article for more details:

The Go gopher vector illustration by Hugo Arganda @argandas

"Gotoro", the sushi chef, is a modification of Hugo Arganda's illustration by Steve Manuel

Watch the video introduction

Download Details:

Author: Ponzu-cms
Source Code: 
License: BSD-3-Clause license

#go #golang #cms #api #cli 

Elian Harber


Cli/cli: GitHub’s Official Command Line tool

GitHub CLI

gh is GitHub on the command line. It brings pull requests, issues, and other GitHub concepts to the terminal next to where you are already working with git and your code.

screenshot of gh pr status

GitHub CLI is available for repositories hosted on GitHub.com and GitHub Enterprise Server 2.20+, and to install on macOS, Windows, and Linux.


See the manual for setup and usage instructions.


If anything feels off, or if you feel that some functionality is missing, please check out the contributing page. There you will find instructions for sharing your feedback, building the tool locally, and submitting pull requests to the project.



gh is available via Homebrew, MacPorts, Conda, Spack, and as a downloadable binary from the releases page.


Homebrew:
  Install: brew install gh
  Upgrade: brew upgrade gh


MacPorts:
  Install: sudo port install gh
  Upgrade: sudo port selfupdate && sudo port upgrade gh


Conda:
  Install: conda install gh --channel conda-forge
  Upgrade: conda update gh --channel conda-forge

Additional Conda installation options available on the gh-feedstock page.


Spack:
  Install: spack install gh
  Upgrade: spack uninstall gh && spack install gh

Linux & BSD

gh is available via Homebrew, Conda, Spack, and as downloadable binaries from the releases page.

For instructions on specific distributions and package managers, see Linux & BSD installation.


gh is available via WinGet, scoop, Chocolatey, Conda, and as downloadable MSI.


WinGet:
  Install: winget install --id GitHub.cli
  Upgrade: winget upgrade --id GitHub.cli


scoop:
  Install: scoop install gh
  Upgrade: scoop update gh


Chocolatey:
  Install: choco install gh
  Upgrade: choco upgrade gh

Signed MSI

MSI installers are available for download on the releases page.

GitHub Actions

GitHub CLI comes pre-installed in all GitHub-Hosted Runners.

Other platforms

Download packaged binaries from the releases page.

Build from source

See here on how to build GitHub CLI from source.

Comparison with hub

For many years, hub was the unofficial GitHub CLI tool. gh is a new project that helps us explore what an official GitHub CLI tool can look like with a fundamentally different design. While both tools bring GitHub to the terminal, hub behaves as a proxy to git, and gh is a standalone tool. Check out our more detailed explanation to learn more.

Download Details:

Author: cli
Source Code: 
License: MIT license

#go #golang #cli #github 

Elian Harber


Glamour: Stylesheet-based Markdown Rendering for Your CLI Apps


Stylesheet-based markdown rendering for your CLI apps.

Glamour dark style example

glamour lets you render markdown documents & templates on ANSI compatible terminals. You can create your own stylesheet or simply use one of the stylish defaults.


import "github.com/charmbracelet/glamour"

in := `# Hello World

This is a simple example of Markdown rendering with Glamour!
Check out the [other examples]( too.
`

out, err := glamour.Render(in, "dark")

Hello World example

Custom Renderer

import "github.com/charmbracelet/glamour"

r, _ := glamour.NewTermRenderer(
    // detect background color and pick either the default dark or light theme
    glamour.WithAutoStyle(),
    // wrap output at specific width
    glamour.WithWordWrap(80),
)

out, err := r.Render(in)


You can find all available default styles in our gallery. Want to create your own style? Learn how!

There are a few options for using a custom style:

  1. Call glamour.Render(inputText, "desiredStyle")
  2. Set the GLAMOUR_STYLE environment variable to your desired default style or a file location for a style and call glamour.RenderWithEnvironmentConfig(inputText)
  3. Set the GLAMOUR_STYLE environment variable and pass glamour.WithEnvironmentConfig() to your custom renderer

Glamourous Projects

Check out these projects, which use glamour:

  • Glow, a markdown renderer for the command-line.
  • GitHub CLI, GitHub’s official command line tool.
  • GLab, an open source GitLab command line tool.
  • Meteor, an easy-to-use, plugin-driven metadata collection framework.

Download Details:

Author: Charmbracelet
Source Code: 
License: MIT license

#go #golang #markdown #cli 

Elian Harber


FM: A Terminal Based File Manager


Keep those files organized

About The Project

A terminal based file manager

default screenshot



curl -sfL | sh


go install


Install through the Arch User Repository with your favorite AUR helper. There are currently two possible packages:

  • fm-git: Builds the package from the main branch
paru -S fm-git
  • fm-bin: Uses the github release package
paru -S fm-bin


  • Double pane layout
  • File icons
  • Layout adjusts to terminal resize
  • Syntax highlighting for source code with customizable themes using styles from chroma (dracula, monokai etc.)
  • Render pretty markdown
  • Mouse support
  • Themes (default, gruvbox, nord)
  • Render PNG, JPG and JPEG as strings
  • Colors adapt to terminal background, for syntax highlighting to work properly on light/dark terminals, set the appropriate themes in the config file
  • Open selected file in editor set in EDITOR environment variable
  • Copy selected directory items path to the clipboard
  • Read PDF files









  • fm will start fm in the current directory
  • fm update will update fm to the latest version
  • fm --start-dir=/some/start/dir will start fm in the specified directory
  • fm --selection-path=/tmp/tmpfile will write the selected items path to the selection path when pressing E and exit fm


h or left    Paginate to the left
j or down    Move down in the file tree or scroll pane down
k or up      Move up in the file tree or scroll pane up
l or right   Paginate to the right
G            Jump to bottom of file tree or pane
g            Jump to top of file tree or pane
~            Go to home directory
R            Go to the root directory
.            Toggle hidden files and directories
q            Exit if command bar is not open
tab          Toggle between panes
esc          Blur filetree input
z            Create a zip file of the currently selected directory item
u            Unzip a zip file
c            Create a copy of a file or directory
x            Delete the currently selected file or directory
n            Create a new file in the current directory
N            Create a new directory in the current directory
r            Rename the currently selected file or directory
m            Move the currently selected file or directory
e            Open in editor set in EDITOR environment variable
y            Copy selected directory items path to the clipboard
/            Filter the current directory with a term
?            Toggle filetree full help menu
ctrl+r       Reload config


A config file will be generated when you first run fm. Depending on your operating system it can be found in one of the following locations:

  • macOS: ~/Library/Application\ Support/fm/config.yml
  • Linux: ~/.config/fm/config.yml
  • Windows: C:\Users\me\AppData\Roaming\fm\config.yml

It will include the following default settings:

  borderless: false
  enable_logging: false
  pretty_markdown: true
  show_icons: true
  start_dir: .
  app_theme: default
    dark: dracula
    light: pygments

Local Development

Follow the instructions below to get setup for local development

Clone the repo

git clone



Build a binary

make build


Built With

Download Details:

Author: knipferrc
Source Code: 
License: MIT license

#go #golang #cli #terminal 

Elian Harber


DNS53: Dynamic DNS within Amazon Route53


Dynamic DNS within Amazon Route 53. Expose your EC2 quickly, easily and privately within a Route 53 Private Hosted Zone (PHZ).

Easily collaborate with a colleague by exposing your EC2 within a team VPC. You could even hook up a locally running application to a local k3d cluster using an ExternalName service during development. Once your EC2 is exposed, control how it is accessed through your EC2 security groups.

Written in Go, dns53 is incredibly small and easy to install.



Check out the latest documentation

Download Details:

Author: Purpleclay
Source Code: 
License: MIT license

#go #golang #dns #cli #aws 

Elian Harber


Clidle: Play Wordle Over SSH


Wordle, now over SSH.

Try it:

ssh -p 3000

Or, run it locally:

go install

How to play

You have 6 attempts to guess the correct word. Each guess must be a valid 5-letter word.

After submitting a guess, the letters will turn green, yellow, or gray.

  • Green: The letter is correct, and is in the correct position.
  • Yellow: The letter is present in the solution, but is in the wrong position.
  • Gray: The letter is not present in the solution.
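The colouring rules above are the standard two-pass Wordle algorithm: mark exact matches first, then hand out yellows from the letters that remain, so a guess never earns more yellows than the solution has copies of a letter. A minimal Go sketch (not Clidle's actual code; score is an invented name):

```go
package main

import "fmt"

// score colours a 5-letter guess against the solution:
// 'G' = correct spot, 'Y' = present but misplaced, '-' = absent.
func score(solution, guess string) string {
	result := []byte("-----")
	remaining := map[byte]int{}
	// first pass: exact matches
	for i := 0; i < 5; i++ {
		if guess[i] == solution[i] {
			result[i] = 'G'
		} else {
			remaining[solution[i]]++
		}
	}
	// second pass: misplaced letters, bounded by what's left
	for i := 0; i < 5; i++ {
		if result[i] != 'G' && remaining[guess[i]] > 0 {
			result[i] = 'Y'
			remaining[guess[i]]--
		}
	}
	return string(result)
}

func main() {
	fmt.Println(score("crane", "cocoa")) // G---Y
}
```

Note the counting in the second pass: guessing a letter twice when the solution contains it once yields only one yellow.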


Your final score is based on how many guesses it took to arrive at the solution:


Download Details:

Author: Ajeetdsouza
Source Code: 
License: MIT license

#go #golang #cli #ssh 
