Lawrence Lesch


Knip: Find Unused Files, Dependencies & Exports in Your JS/TS Project

✂️ Knip

Knip finds unused files, dependencies and exports in your JavaScript and TypeScript projects. Less code and fewer dependencies lead to improved performance, less maintenance and easier refactoring.

export const myVar = true;

ESLint handles files in isolation, so it does not know whether myVar is actually used somewhere else. Knip lints the project as a whole, and finds unused exports, files and dependencies.

It's only human to forget removing things that you no longer use. But how do you find out? Where to even start finding things that can be removed?

The dots don't connect themselves. This is where Knip comes in:

  •  Finds unused files, dependencies and exports
  •  Finds used dependencies not listed in package.json
  •  Finds duplicate exports
  •  Finds unused members of classes and enums
  •  Built-in support for monorepos/workspaces
  •  Growing list of built-in plugins
  •  Checks npm scripts for used and unlisted dependencies
  •  Supports JavaScript (without tsconfig.json, or TypeScript allowJs: true)
  •  Features multiple reporters and supports custom reporters
  •  Run Knip as part of your CI environment to detect issues and prevent regressions

Knip shines in both small and large projects. It's a fresh take on keeping your projects clean & tidy!

“An orange cow with scissors, Van Gogh style” - generated with OpenAI

Migrating to v1.0.0

When coming from version v0.13.3 or before, please see migration to v1.

Announcement: Knip v2

The next major release is upcoming. Please see for the full story. Use npm install knip@next to try it out if you're curious! No changes in configuration necessary. Find the updated documentation at


Are you seeing false positives? Please report them by opening an issue in this repo. Bonus points for linking to a public repository using Knip, or even opening a pull request with a directory and example files in test/fixtures. Correctness and bug fixes have priority over performance and new features.

Also see the FAQ.


Installation

npm install -D knip

Knip supports LTS versions of Node.js, and currently requires at least Node.js v16.17 or v18.6. Knip is cutting edge!


Knip has good defaults and you can run it without any configuration, but larger projects especially get more out of Knip with a configuration file (or a knip property in package.json). Let's name this file knip.json with these contents (you may want to adjust them right away for your project):

  "$schema": "",
  "entry": ["src/index.ts"],
  "project": ["src/**/*.ts"]

The entry files target the starting point(s) to resolve the rest of the imported code. The project files should contain all files to match against the files resolved from the entry files, including potentially unused files.

Use knip.ts with TypeScript if you prefer:

import type { KnipConfig } from 'knip';

const config: KnipConfig = {
  entry: ['src/index.ts'],
  project: ['src/**/*.ts'],
};

export default config;

If you have workspaces, please see workspaces & monorepos.

Then run the checks with npx knip. Or first add this script to package.json:

  "scripts": {
    "knip": "knip"

Use npm run knip to analyze the project and output unused files, dependencies and exports. Knip works just fine with yarn or pnpm as well.

Command-line options

$ npx knip --help
✂️  Find unused files, dependencies and exports in your JavaScript and TypeScript projects

Usage: knip [options]

  -c, --config [file]      Configuration file path (default: [.]knip.json[c], knip.js, knip.ts or package.json#knip)
  -t, --tsConfig [file]    TypeScript configuration path (default: tsconfig.json)
  --production             Analyze only production source files (e.g. no tests, devDependencies, exported types)
  --strict                 Consider only direct dependencies of workspace (not devDependencies, not other workspaces)
  --workspace              Analyze a single workspace (default: analyze all configured workspaces)
  --include-entry-exports  Include unused exports in entry files (without `@public`)
  --ignore                 Ignore files matching this glob pattern, can be repeated
  --no-gitignore           Don't use .gitignore
  --include                Report only provided issue type(s), can be comma-separated or repeated (1)
  --exclude                Exclude provided issue type(s) from report, can be comma-separated or repeated (1)
  --dependencies           Shortcut for --include dependencies,unlisted
  --exports                Shortcut for --include exports,nsExports,classMembers,types,nsTypes,enumMembers,duplicates
  --no-progress            Don't show dynamic progress updates
  --reporter               Select reporter: symbols, compact, codeowners, json (default: symbols)
  --reporter-options       Pass extra options to the reporter (as JSON string, see example)
  --no-exit-code           Always exit with code zero (0)
  --max-issues             Maximum number of issues before non-zero exit code (default: 0)
  --debug                  Show debug output
  --debug-file-filter      Filter for files in debug output (regex as string)
  --performance            Measure running time of expensive functions and display stats table
  -h, --help               Print this help text
  -V, --version            Print version

(1) Issue types: files, dependencies, unlisted, exports, nsExports, classMembers, types, nsTypes, enumMembers, duplicates


$ knip
$ knip --production
$ knip --workspace packages/client --include files,dependencies
$ knip -c ./config/knip.json --reporter compact
$ knip --reporter codeowners --reporter-options '{"path":".github/CODEOWNERS"}'
$ knip --debug --debug-file-filter '(specific|particular)-module'

More documentation and bug reports:


Here's an example run using the default reporter:

example output of dependencies

This example shows more output related to unused and unlisted dependencies:

example output of dependencies

Reading the report

The report contains the following types of issues:

  • Unused files: did not find references to this file
  • Unused dependencies: did not find references to this dependency
  • Unlisted or unresolved dependencies: used dependencies, but not listed in package.json (1)
  • Unused exports: did not find references to this exported variable
  • Unused exports in namespaces: did not find direct references to this exported variable (2)
  • Unused exported types: did not find references to this exported type
  • Unused exported types in namespaces: did not find direct references to this exported type (2)
  • Unused exported enum members: did not find references to this member of the exported enum
  • Unused exported class members: did not find references to this member of the exported class
  • Duplicate exports: the same thing is exported more than once

When an issue type has zero issues, it is not shown.

(1) This includes imports that could not be resolved.

(2) The variable or type is not referenced directly, and has become a member of a namespace. Knip can't find a reference to it, so you can probably remove it.

Output filters

You can --include or --exclude any of the types to slice & dice the report to your needs. Alternatively, they can be added to the configuration (e.g. "exclude": ["dependencies"]).

Knip finds issues of type files, dependencies, unlisted and duplicates very fast. Finding unused exports requires deeper analysis (exports, nsExports, classMembers, types, nsTypes, enumMembers).

Use --include to report only specific issue types (the following example commands do the same):

knip --include files --include dependencies
knip --include files,dependencies

Use --exclude to ignore reports you're not interested in:

knip --include files --exclude classMembers,enumMembers

Use --dependencies or --exports as shortcuts to combine groups of related types.
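Conceptually, the --include/--exclude filtering above amounts to simple set operations over the issue types. Here is a sketch of that logic (a hypothetical helper for illustration, not Knip's implementation):

```typescript
// All issue types from the list above.
const ALL_TYPES = [
  "files", "dependencies", "unlisted", "exports", "nsExports",
  "classMembers", "types", "nsTypes", "enumMembers", "duplicates",
] as const;
type IssueType = (typeof ALL_TYPES)[number];

// Hypothetical helper mirroring the CLI semantics: start from the
// included types (or all types when none are given), then drop excluded ones.
const selectTypes = (include: IssueType[], exclude: IssueType[]): IssueType[] => {
  const base = include.length > 0 ? include : [...ALL_TYPES];
  return base.filter((type) => !exclude.includes(type));
};
```

For example, selectTypes(["files"], []) keeps only the files issue type, matching knip --include files.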

Still not happy with the results? Getting too much output/false positives? The FAQ may be useful. Feel free to open an issue and I'm happy to look into it. Also see the next section on how to ignore certain false positives:


There are a few ways to tell Knip to ignore certain packages, binaries, dependencies and workspaces. Some examples:

  "ignore": ["**/*.d.ts", "**/fixtures"],
  "ignoreBinaries": ["zip", "docker-compose"],
  "ignoreDependencies": ["hidden-package"],
  "ignoreWorkspaces": ["packages/deno-lib"]

Now what?

This is the fun part! Knip, knip, knip ✂️

As always, make sure to back up files or use Git before deleting files or making changes. Run tests to verify results.

  • Unused files can be removed.
  • Unused dependencies can be removed from package.json.
  • Unlisted dependencies should be added to package.json.
  • Unused exports and types: remove the export keyword in front of unused exports. Then you can see whether the variable or type is used within the same file. If this is not the case, it can be removed.
  • Duplicate exports can be removed so they're exported only once.
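The unused-exports step is as small as it sounds. A hypothetical file, before and after:

```typescript
// Hypothetical example for the "unused exports" advice above.
// Before: `helper` was exported but never imported anywhere:
//   export const helper = () => 42;
// After removing `export`, the compiler (or your editor) shows whether
// `helper` is still used within this file; if not, delete the declaration.
const helper = () => 42;

console.log(helper()); // still available to local callers
```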

🔁 Repeat the process to reveal new unused files and exports. Sometimes it's so liberating to remove things!

Workspaces & Monorepos

Workspaces and monorepos are handled out-of-the-box by Knip. Every workspace that is part of the Knip configuration will be part of the analysis. Here's an example:

  "ignoreWorkspaces": ["packages/ignore-me"],
  "workspaces": {
    ".": {
      "entry": "src/index.ts",
      "project": "src/**/*.ts"
    "packages/*": {
      "entry": "{index,cli}.ts",
      "project": "**/*.ts"
    "packages/my-lib": {
      "entry": "main.js"

Note that if you have a root workspace, it must be under workspaces and have the "." key like in the example.

Knip supports workspaces as defined in three possible locations:

  • In the workspaces array in package.json.
  • In the workspaces.packages array in package.json.
  • In the packages array in pnpm-workspace.yaml.

Every directory with a match in workspaces of knip.json is part of the analysis.

Extra "workspaces" not configured as a workspace in the root package.json can be configured as well, Knip is happy to analyze unused dependencies and exports from any directory with a package.json.

Here's some example output when running Knip in a workspace:

example output in workspaces


Knip contains a growing list of built-in plugins.

Plugins are automatically activated, each based on simple heuristics: most plugins check whether one of a few (dev) dependencies is listed in package.json. Once enabled, a plugin adds a set of configuration and/or entry files for Knip to analyze. These defaults can be overridden.

Most plugins use one or both of the following file types:

  • config - custom dependency resolvers are applied to the config files
  • entry - files to include with the analysis of the rest of the source code

See each plugin's documentation for its default values.


Plugins may include config files. They are parsed by the plugin's custom dependency resolver. Here are some examples to get an idea of how they work and why they are needed:

  • The eslint plugin tells Knip that the "prettier" entry in the array of plugins means that the eslint-plugin-prettier dependency should be installed. Or that the "airbnb" entry in extends requires the eslint-config-airbnb dependency.
  • The storybook plugin understands that core.builder: 'webpack5' in main.js means that the @storybook/builder-webpack5 and @storybook/manager-webpack5 dependencies are required.
  • Static configuration files such as JSON and YAML always require a custom dependency resolver.

Custom dependency resolvers return all referenced dependencies for the configuration files they are given. Knip handles the rest to find which of those dependencies are unused or missing.
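The ESLint example above can be pictured as a function from parsed config to implied package names. This is only a sketch of the idea, not Knip's actual plugin API:

```typescript
// Hypothetical resolver for an ESLint-style config file, illustrating the
// idea: map "extends"/"plugins" entries to the package names they imply.
type EslintLikeConfig = { extends?: string[]; plugins?: string[] };

const resolveEslintDependencies = (config: EslintLikeConfig): string[] => {
  const deps: string[] = [];
  for (const name of config.extends ?? []) {
    // "airbnb" implies eslint-config-airbnb; full names pass through as-is
    deps.push(name.startsWith("eslint-config-") ? name : `eslint-config-${name}`);
  }
  for (const name of config.plugins ?? []) {
    // "prettier" implies eslint-plugin-prettier
    deps.push(name.startsWith("eslint-plugin-") ? name : `eslint-plugin-${name}`);
  }
  return deps;
};
```

Knip would then compare the returned names against package.json to report unused or unlisted dependencies.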


Other configuration files use require or import statements to use dependencies, so they can be analyzed like the rest of the source files. These configuration files are also considered entry files.

For plugins related to test files, it's good to know that the following glob patterns are always included by default (see TEST_FILE_PATTERNS in constants.ts):

  • **/*.{test,spec}.{js,jsx,ts,tsx,mjs,cjs}
  • **/__tests__/**/*.{js,jsx,ts,tsx,mjs,cjs}
  • test/**/*.{js,jsx,ts,tsx,mjs,cjs}

Disable a plugin

In case a plugin causes issues, it can be disabled by using false as its value (e.g. "webpack": false).

Create a new plugin

Getting false positives because a plugin is missing? Want to help out? Feel free to add your own plugin! Here's how to get started:

npm run create-plugin -- --name [myplugin]

Production Mode

The default mode for Knip is holistic and targets all project code, including configuration files and tests. Test files usually import production files. This prevents the production files or their exports from being reported as unused, while sometimes both of them can be removed. This is why Knip has a "production mode".

To tell Knip what is production code, add an exclamation mark (!) behind each pattern that is meant for production, and use the --production flag. Here's an example:

  "entry": ["src/index.ts!", "build/script.js"],
  "project": ["src/**/*.ts!", "build/*.js"]

Here's what's included in production mode analysis:

  • Only entry and project patterns suffixed with !.
  • Only entry patterns from plugins exported as PRODUCTION_ENTRY_FILE_PATTERNS (such as Next.js and Gatsby).
  • Only the postinstall and start scripts (e.g. not the test or other npm scripts in package.json).
  • Only exports, nsExports and classMembers are included in the report (types, nsTypes, enumMembers are ignored).


Additionally, the --strict flag can be used to:

  • Consider only dependencies (not devDependencies) when finding unused or unlisted dependencies.
  • Consider only non-type imports (i.e. ignore import type {}).
  • Assume each workspace is self-contained: workspaces have their own dependencies and do not rely on packages of ancestor workspaces.


Plugins also have this distinction. For instance, Next.js entry files for pages (pages/**/*.tsx) and Remix routes (app/routes/**/*.tsx) are production code, while Jest and Playwright entry files (e.g. *.spec.ts) are not. All of this is handled automatically by Knip and its plugins. You only need to point Knip to additional files or custom file locations. The more plugins Knip has, the more projects can be analyzed out of the box!


Tools like TypeScript, Webpack and Babel support import aliases in various ways. Knip automatically includes compilerOptions.paths from the TypeScript configuration, but does not (yet) automatically find other types of import aliases. They can be configured manually:

  "$schema": "",
  "paths": {
    "@lib": ["./lib/index.ts"],
    "@lib/*": ["./lib/*"]

Each workspace can also have its own paths configured. Note that Knip paths follow the TypeScript semantics:

  • Path values are arrays of relative paths.
  • Paths without an * are exact matches.


Knip provides the following built-in reporters: symbols (the default), compact, codeowners and json.

The compact reporter shows the sorted files first, and then a list of symbols:

example output of dependencies

Custom Reporters

When the provided built-in reporters are not sufficient, a custom reporter can be implemented.

Pass --reporter ./my-reporter, with the default export of that module having this interface:

type Reporter = (options: ReporterOptions) => void;

type ReporterOptions = {
  report: Report;
  issues: Issues;
  cwd: string;
  workingDir: string;
  isProduction: boolean;
  options: string;
};
The data can then be used to write issues to stdout, a JSON or CSV file, or sent to a service.

Find more details and ideas in custom reporters.
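As a concrete sketch, here is a tiny reporter that prints the number of issues per type. The types below are simplified stand-ins for knip's Report/Issues shapes (an assumption for illustration only):

```typescript
// Simplified stand-ins for knip's types — an assumption for illustration.
type Issue = { filePath: string; symbol?: string };
type MiniReporterOptions = { issues: Record<string, Issue[]>; cwd: string };

// A real reporter would be the default export of the module passed to
// --reporter and would return void; this sketch also returns the lines
// so it is easy to test.
const countsReporter = (options: MiniReporterOptions): string[] => {
  const lines = Object.entries(options.issues).map(
    ([type, list]) => `${type}: ${list.length}`,
  );
  for (const line of lines) console.log(line);
  return lines;
};

// Example invocation with fake data:
const out = countsReporter({
  issues: { files: [{ filePath: "src/old.ts" }], dependencies: [] },
  cwd: ".",
});
```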

Libraries and "unused" exports

Libraries and applications are identical when it comes to files and dependencies: whatever is unused should be removed. Yet libraries usually have exports meant to be used by other libraries or applications. Such public variables and types in libraries can be marked with the JSDoc @public tag:

/**
 * Merge two objects.
 * @public
 */
export const merge = function () {};

Knip does not report public exports and types as unused.


FAQ

Really, another unused file/dependency/export finder?

There are already some great packages available if you want to find unused dependencies OR unused exports.

I love the Unix philosophy ("do one thing well"). But in this case I believe it's efficient to handle multiple concerns in a single tool. When building a dependency graph of the project, an abstract syntax tree for each file, and traversing all of this, why not collect the various issues in one go?

Why so much configuration?

The structure and configuration of projects and their dependencies vary wildly, and no matter how well-balanced, defaults only get you so far. Some implementations and some tools out there have smart or unconventional ways to import code, making things more complicated. That's why Knip tends to require more configuration in larger projects, based on how many dependencies are used and how much the configuration in the project diverges from the defaults.

One important goal of Knip is to minimize the amount of configuration necessary. When false positives are reported and you think there are feasible ways to infer things automatically, reducing the amount of configuration, please open an issue.

How do I handle too many output/false positives?

Too many unused files

When the list of unused files is too long, this means the gap between the set of entry and the set of project files needs tweaking. The gap can be narrowed down by increasing the entry files or reducing the project files, for instance by ignoring specific folders that are not related to the source code imported by the entry files.

Too many unused dependencies

Dependencies that are only imported in unused files are also marked as unused. So a long list of unused files would be good to remedy first.

When unused dependencies are related to dependencies having a Knip plugin, maybe the config and/or entry files for that dependency are at custom locations. The default values are listed in the plugin's documentation and can be overridden to match the custom location(s).

When the dependencies don't have a Knip plugin yet, please file an issue or create a new plugin.

Too many unused exports

When the project is a library and the exports are meant to be used by consumers of the library, there are two options:

  1. By default, unused exports of entry files are not reported. You could re-export from an existing entry file, or add the containing file to the entry array in the configuration.
  2. The exported values or types can be marked using the JSDoc @public tag.

How to start using Knip in CI while having too many issues to sort out?

Eventually this type of QA only really works when it's tied to an automated workflow. But with too many issues to resolve, this might not be feasible right away, especially in existing, larger codebases. Here are a few options that may help:

  • Use --no-exit-code for exit code 0 in CI.
  • Use --include (or --exclude) to report only the issue types that have little or no errors.
  • Use a separate --dependencies and/or --exports Knip command.
  • Use ignore (for files and directories) and ignoreDependencies to filter out some problematic areas.
  • Limit the number of workspaces configured to analyze in knip.json.

All of this is hiding problems, so please make sure to plan for fixing them and/or open issues here for false positives.


This is an ongoing comparison with similar tools, based on their documentation (please report any mistakes). The compared capabilities are: unused files, unused dependencies, unlisted dependencies, unused exports, unused class members, unused enum members, duplicate exports, namespace searches, custom reporters, JavaScript support, configurable entry files, workspace/monorepo support, and the availability of an ESLint plugin.

Migrating from other tools


Coming from depcheck? The following commands are similar:

knip --dependencies


Coming from unimported? The following commands are similar:

knip --production --dependencies --include files

Also see production mode.


Coming from ts-unused-exports? The following commands are similar:

knip --include exports,types,nsExports,nsTypes
knip --exports  # Adds unused enum and class members


Coming from ts-prune? The following commands are similar:

knip --include exports,types
knip --exports  # Adds unused exports/types in namespaces and unused enum/class members

TypeScript language services

TypeScript language services could play a major role in most of the "unused" areas, as they have an overview of the project as a whole. This powers things in VS Code like "Find references" or the "Module "./some" declares 'Thing' locally, but it is not exported" message. I think features like "duplicate exports" or "custom dependency resolvers" are userland territory, much like code linters.


Knip is Dutch for a "cut". A Dutch expression is "to be geknipt for something", which means to be perfectly suited for the job. I'm motivated to make knip perfectly suited for the job of cutting projects to perfection! ✂️

Download Details:

Author: Webpro
Source Code: 
License: ISC license

#typescript #lint #dependency #analysis #maintenance

Nat Grady


Poetry: Python Packaging and Dependency Management Made Easy

Poetry: Python packaging and dependency management made easy

Poetry helps you declare, manage and install dependencies of Python projects, ensuring you have the right stack everywhere.

Poetry Install

Poetry replaces requirements.txt, setup.cfg, and Pipfile with a simple pyproject.toml based project format.

name = "my-package"
version = "0.1.0"
description = "The description of the package"

license = "MIT"

authors = [
    "Sébastien Eustace <>"

repository = ""
homepage = ""

# README file(s) are used as the package description
readme = ["", "LICENSE"]

# Keywords (translated to tags on the package index)
keywords = ["packaging", "poetry"]

# Compatible Python versions
python = ">=3.8"
# Standard dependency with semver constraints
aiohttp = "^3.8.1"
# Dependency with extras
requests = { version = "^2.28", extras = ["security"] }
# Version-specific dependencies with prereleases allowed
tomli = { version = "^2.0.1", python = "<3.11", allow-prereleases = true }
# Git dependencies
cleo = { git = "", branch = "master" }
# Optional dependencies (installed by extras)
pendulum = { version = "^2.1.2", optional = true }

# Dependency groups are supported for organizing your dependencies
pytest = "^7.1.2"
pytest-cov = "^3.0"

# ...and can be installed only when explicitly requested
optional = true
Sphinx = "^5.1.1"

# Python-style entrypoints and scripts are easily expressed
my-script = "my_package:main"


Poetry supports multiple installation methods, including a simple install script. For full installation instructions, including advanced usage of the script, alternate install methods, and CI best practices, see the full installation documentation.


Documentation for the current version of Poetry (as well as the development branch and recently out of support versions) is available from the official website.


Poetry is a large, complex project always in need of contributors. For those new to the project, a list of suggested issues to work on in Poetry and poetry-core is available. The full contributing documentation also provides helpful guidance.



Official Website


Issue Tracker


Related Projects

  • poetry-core: PEP 517 build-system for Poetry projects, and dependency-free core functionality of the Poetry frontend
  • poetry-plugin-export: Export Poetry projects/lock files to foreign formats like requirements.txt
  • poetry-plugin-bundle: Install Poetry projects/lock files to external formats like virtual environments
  • The official Poetry installation script
  • website: The official Poetry website and blog

Download Details:

Author: Python-poetry
Source Code: 
License: MIT license

#python #package #managed #dependency 

Lawrence Lesch


TSlib: Runtime Library for TypeScript Helpers


This is a runtime library for TypeScript that contains all of the TypeScript helper functions.

This library is primarily used by the --importHelpers flag in TypeScript. Without --importHelpers, a module that uses helper functions like __extends and __assign has them inlined in the emitted file:

var __assign = (this && this.__assign) || Object.assign || function(t) {
    for (var s, i = 1, n = arguments.length; i < n; i++) {
        s = arguments[i];
        for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p))
            t[p] = s[p];
    }
    return t;
};
exports.x = {};
exports.y = __assign({}, exports.x);

will instead be emitted as something like the following:

var tslib_1 = require("tslib");
exports.x = {};
exports.y = tslib_1.__assign({}, exports.x);

Because this can avoid duplicate declarations of things like __extends, __assign, etc., this means delivering users smaller files on average, as well as less runtime overhead. For optimized bundles with TypeScript, you should absolutely consider using tslib and --importHelpers.
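To make the helper's behavior concrete, here is a minimal equivalent in plain code (an illustration only; tslib ships the real, spec-compliant helper):

```typescript
// Minimal equivalent of the __assign helper shown above (illustration only).
const assignLike = (
  target: Record<string, unknown>,
  ...sources: Record<string, unknown>[]
): Record<string, unknown> => {
  for (const s of sources) {
    for (const p in s) {
      // Copy own enumerable properties onto the target, left to right.
      if (Object.prototype.hasOwnProperty.call(s, p)) target[p] = s[p];
    }
  }
  return target;
};

const x = { a: 1 };
const y = assignLike({}, x, { b: 2 }); // same result as Object.assign({}, x, { b: 2 })
```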


For the latest stable version, run:


# TypeScript 3.9.2 or later
npm install tslib

# TypeScript 3.8.4 or earlier
npm install tslib@^1

# TypeScript 2.3.2 or earlier
npm install tslib@1.6.1


# TypeScript 3.9.2 or later
yarn add tslib

# TypeScript 3.8.4 or earlier
yarn add tslib@^1

# TypeScript 2.3.2 or earlier
yarn add tslib@1.6.1


# TypeScript 3.9.2 or later
bower install tslib

# TypeScript 3.8.4 or earlier
bower install tslib@^1

# TypeScript 2.3.2 or earlier
bower install tslib@1.6.1


# TypeScript 3.9.2 or later
jspm install tslib

# TypeScript 3.8.4 or earlier
jspm install tslib@^1

# TypeScript 2.3.2 or earlier
jspm install tslib@1.6.1


Set the importHelpers compiler option on the command line:

tsc --importHelpers file.ts

or in your tsconfig.json:

    "compilerOptions": {
        "importHelpers": true

For bower and JSPM users

You will need to add a paths mapping for tslib. For Bower users:

    "compilerOptions": {
        "module": "amd",
        "importHelpers": true,
        "baseUrl": "./",
        "paths": {
            "tslib" : ["bower_components/tslib/tslib.d.ts"]

For JSPM users:

    "compilerOptions": {
        "module": "system",
        "importHelpers": true,
        "baseUrl": "./",
        "paths": {
            "tslib" : ["jspm_packages/npm/tslib@2.x.y/tslib.d.ts"]


  • Choose your new version number
  • Set it in package.json and bower.json
  • Create a tag: git tag [version]
  • Push the tag: git push --tags
  • Create a release in GitHub
  • Run the publish to npm workflow



There are many ways to contribute to TypeScript.


Download Details:

Author: Microsoft
Source Code: 
License: 0BSD license

#typescript #optimize #import #dependency 

Rupert Beatty


Rome: Makes It Easy to Build A List Of Frameworks


Rome makes it easy to build a list of frameworks for consumption outside of Xcode, e.g. for a Swift script.


$ gem install cocoapods-rome


In the examples below, the target 'caesar' could either be an existing target of a project managed by CocoaPods for which you'd like to run a Swift script, or it could be fictitious, for example if you wish to run this on a standalone Podfile and get the frameworks you need to add to your Xcode project manually.


Write a simple Podfile, like this:


platform :osx, '10.10'

plugin 'cocoapods-rome'

target 'caesar' do
  pod 'Alamofire'
end

platform :ios, '8.0'

plugin 'cocoapods-rome', {
    :pre_compile => Proc.new { |installer|
        installer.pods_project.targets.each do |target|
            target.build_configurations.each do |config|
                config.build_settings['SWIFT_VERSION'] = '4.0'
            end
        end
    },
    dsym: false,
    configuration: 'Release'
}

target 'caesar' do
  pod 'Alamofire'
end

then run this:

pod install

and you will end up with dynamic frameworks:

$ tree Rome/
└── Alamofire.framework

Advanced Usage

For your production builds, when you want dSYMs created and stored:

platform :osx, '10.10'

plugin 'cocoapods-rome', {
  dsym: true,
  configuration: 'Release'
}

target 'caesar' do
  pod 'Alamofire'
end

Resulting in:

$ tree dSYM/
├── iphoneos
│   └── Alamofire.framework.dSYM
│       └── Contents
│           ├── Info.plist
│           └── Resources
│               └── DWARF
│                   └── Alamofire
└── iphonesimulator
    └── Alamofire.framework.dSYM
        └── Contents
            ├── Info.plist
            └── Resources
                └── DWARF
                    └── Alamofire


The plugin allows you to provide hooks that will be called during the installation process.


pre_compile

This hook allows you to make any last changes to the generated Xcode project before the compilation of frameworks begins.

It receives the Pod::Installer as its only argument.


post_compile

This hook allows you to run code after the compilation of the frameworks has finished and they have been moved to the Rome folder.

It receives the Pod::Installer as its only argument.


Customising the Swift version of all pods

platform :osx, '10.10'

plugin 'cocoapods-rome',
    :pre_compile => Proc.new { |installer|
        installer.pods_project.targets.each do |target|
            target.build_configurations.each do |config|
                config.build_settings['SWIFT_VERSION'] = '4.0'
            end
        end
    },
    :post_compile => Proc.new { |installer|
        puts "Rome finished building all the frameworks"
    }

target 'caesar' do
    pod 'Alamofire'
end

Download Details:

Author: CocoaPods
Source Code: 
License: MIT license

#swift #dependency #managed #objective-c 

Rupert Beatty


Objective-C and Swift Dependency Visualizer

Objective-C And Swift Dependencies Visualizer

This is a tool that can use .o (object) files to generate a dependency graph.
All visualisations are done with the d3.js library, which is just awesome!
This tool was made just for fun, but the images can show how big your project is, how many classes it has, and how they are linked to each other.

Image example

Easiest way - For those who don't like to read docs

This will clone the project and run it on the most recently modified project:

git clone ;
cd objc-dependency-visualizer ;
./generate-objc-dependencies-to-json.rb -d -s "" > origin.js ;
open index.html

Easiest way for Swift projects

git clone ;
cd objc-dependency-visualizer ;
./generate-objc-dependencies-to-json.rb -w -s "" > origin.js ;
open index.html

More specific examples

Examples are here

Tell the world about the awesomeness of your project structure

Share image to the Twitter with #objcdependencyvisualizer hashtag

Hard way - or "I want to read what I'm doing!"

Here's a detailed description of what's going on under the hood.

Download Details:

Author: PaulTaykalo
Source Code: 
License: MIT license

#swift #dependency #javascript #objective-c #graph 


Composer: Dependency Manager for PHP

Composer - Dependency Management for PHP

Composer helps you declare, manage, and install dependencies of PHP projects.

See for more information and documentation.

Installation / Usage

Download and install Composer by following the official instructions.

For usage, see the documentation.


Find public packages on

For private package hosting take a look at Private Packagist.


Follow @packagist or @seldaek on Twitter for announcements, or check the #composerphp hashtag.

For support, Stack Overflow offers a good collection of Composer related questions, or you can use the GitHub discussions.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project and its community you agree to abide by those terms.


Latest Composer

PHP 7.2.5 or above for the latest version.

Composer 2.2 LTS (Long Term Support)

PHP versions 5.3.2 - 8.1 are still supported via the LTS releases of Composer (2.2.x). If you run the installer or the self-update command the appropriate Composer version for your PHP should be automatically selected.


See also the list of contributors who participated in this project.

Security Reports

Please send any sensitive issues privately. Thanks!

Download Details:

Author: Composer
Source Code: 
License: MIT license

#php #composer #dependency 

Composer: Dependency Manager for PHP
Elian  Harber

Elian Harber


Wire: Compile-time Dependency injection for Go

Wire: Automated Initialization in Go

Wire is a code generation tool that automates connecting components using dependency injection. Dependencies between components are represented in Wire as function parameters, encouraging explicit initialization instead of global variables. Because Wire operates without runtime state or reflection, code written to be used with Wire is useful even for hand-written initialization.
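In plain Go, the style Wire works with looks like this. The types below are hypothetical, not part of Wire; the point is that InitGreeter has exactly the shape of initializer Wire generates from your constructors:

```go
package main

import "fmt"

// Message is a leaf dependency with no requirements of its own.
type Message string

// Greeter expresses its dependency on a Message as a constructor parameter.
type Greeter struct{ Msg Message }

func NewMessage() Message          { return Message("Hi there!") }
func NewGreeter(m Message) Greeter { return Greeter{Msg: m} }

// InitGreeter is a hand-written initializer. Wire's contribution is to
// generate functions shaped like this one from the constructors above,
// so you never maintain the wiring by hand.
func InitGreeter() Greeter {
	return NewGreeter(NewMessage())
}

func main() {
	g := InitGreeter()
	fmt.Println(g.Msg) // Hi there!
}
```

Because the generated code is ordinary Go with no reflection, InitGreeter remains useful even if you later drop Wire and keep the initializer as-is.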

For an overview, see the introductory blog post.


Install Wire by running:

go install

and ensuring that $GOPATH/bin is added to your $PATH.


Project status

As of version v0.3.0, Wire is beta and is considered feature complete. It works well for the tasks it was designed to perform, and we prefer to keep it as simple as possible.

We'll not be accepting new features at this time, but will gladly accept bug reports and fixes.


For questions, please use GitHub Discussions.

This project is covered by the Go Code of Conduct.

Download Details:

Author: Google
Source Code: 
License: Apache-2.0 license

#go #golang #dependency #injection 

Wire: Compile-time Dependency injection for Go

10 Best PHP Libraries for Extras Related to Dependency Management

In today's post we will learn about 10 Best PHP Libraries for Extras Related to Dependency Management.

What is Dependency Management?

Projects in professional services industries often require cooperation between individuals or services with specialist skills in order to complete the work and deliver it to a client. Dependency management is a technique, a set of actions performed to properly plan, manage, and conduct a project across the different shared services or specialists whose availability must be ensured for the work to complete successfully.

Table of contents:

  • Composed - A library to parse your project's Composer environment at runtime.
  • Composer Merge Plugin - A composer plugin to merge several composer.json files.
  • Composer Normalize - A plugin for normalising composer.json files.
  • Composer Patches - A plugin for Composer to apply patches.
  • Composer Require Checker - CLI tool to analyze composer dependencies and verify that no unknown symbols are used in the sources of a package.
  • Composer Unused - A CLI Tool to scan for unused composer packages.
  • Prestissimo - A composer plugin which enables parallel install process.
  • Satis - A static Composer repository generator.
  • Tooly - A library to manage PHAR files in project using Composer.

1 - Composed:

A library to parse your project's Composer environment at runtime.

This library provides a set of utility functions designed to help you parse your project's Composer configuration, and those of its dependencies, at runtime.


The API combines functional and object-oriented approaches.

Locate the vendor directory

(Chicken and egg...)

$absoluteVendorPath = Composed\VENDOR_DIR;

Locate the project's base directory

$absoluteProjectPath = Composed\BASE_DIR;

Get the authors of a specific package

You can fetch data from the composer.json file of a specific package.

$authors = Composed\package_config('phpunit/phpunit', 'authors');

assert($authors === [
    [
        'name' => "Sebastian Bergmann",
        'email' => "",
        'role' => "lead",
    ],
]);

Get licenses of all installed packages

You can fetch data from all composer.json files in your project in one go.

$licenses = Composed\package_configs('license');

assert($licenses === [
    'joshdifabio/composed' => "MIT",
    'doctrine/instantiator' => "MIT",
    'phpunit/php-code-coverage' => "BSD-3-Clause",
]);

Get the absolute path to a file in a package

$path = Composed\package('phpunit/phpunit')->getPath('composer.json');

Get all packages installed on your project

foreach (Composed\packages() as $packageName => $package) {
    $pathToPackageConfig = $package->getPath('composer.json');
    // ...
}

Get data from your project's Composer config

You can also fetch data from the composer.json file located in your project root.

$projectAuthors = Composed\project_config('authors');

assert($projectAuthors === [
    [
        'name' => 'Josh Di Fabio',
        'email' => '',
    ],
]);


Install Composed using composer.

composer require joshdifabio/composed

View on Github

2 - Composer Merge Plugin:

A composer plugin to merge several composer.json files.

Merge multiple composer.json files at Composer runtime.

Composer Merge Plugin is intended to allow easier dependency management for applications which ship a composer.json file and expect some deployments to install additional Composer managed libraries. It does this by allowing the application's top level composer.json file to provide a list of optional additional configuration files. When Composer is run it will parse these files and merge their configuration settings into the base configuration. This combined configuration will then be used when downloading additional libraries and generating the autoloader.
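The merge rule for a require section can be sketched in Go. This is a simplification (the plugin merges many sections and understands Composer version constraints), but it shows the default behavior: with replace set to false, the base file's constraint wins on conflict:

```go
package main

import "fmt"

// mergeRequire merges the "require" maps of a base composer.json and an
// included file. With replace=false (the plugin's default) the base
// constraint wins on conflict; with replace=true the included file wins.
func mergeRequire(base, include map[string]string, replace bool) map[string]string {
	out := make(map[string]string, len(base)+len(include))
	for pkg, constraint := range base {
		out[pkg] = constraint
	}
	for pkg, constraint := range include {
		if _, exists := out[pkg]; !exists || replace {
			out[pkg] = constraint
		}
	}
	return out
}

func main() {
	base := map[string]string{"monolog/monolog": "^2.0"}
	extra := map[string]string{"monolog/monolog": "^3.0", "guzzlehttp/guzzle": "^7.0"}
	// guzzle is added; monolog keeps the base's ^2.0 constraint.
	fmt.Println(mergeRequire(base, extra, false))
}
```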

Composer Merge Plugin was created to help with installation of MediaWiki which has core library requirements as well as optional libraries and extensions which may be managed via Composer.


Composer Merge Plugin 1.4.x (and older) requires Composer 1.x.

Composer Merge Plugin 2.0.x (and newer) is compatible with both Composer 2.x and 1.x.

$ composer require wikimedia/composer-merge-plugin


{
    "require": {
        "wikimedia/composer-merge-plugin": "dev-master"
    },
    "extra": {
        "merge-plugin": {
            "include": [],
            "require": [],
            "recurse": true,
            "replace": false,
            "ignore-duplicates": false,
            "merge-dev": true,
            "merge-extra": false,
            "merge-extra-deep": false,
            "merge-replace": true,
            "merge-scripts": false
        }
    }
}

Updating sub-level composer.json files

In order for Composer Merge Plugin to install dependencies from updated or newly created sub-level composer.json files in your project you need to run the command:

$ composer update

This will instruct Composer to recalculate the file hash for the top-level composer.json thus triggering Composer Merge Plugin to look for the sub-level configuration files and update your dependencies.

View on Github

3 - Composer Normalize:

A plugin for normalising composer.json files.


When it comes to formatting composer.json, you have the following options:

  • you can stop caring
  • you can format it manually (and request changes when contributors format it differently)
  • you can use ergebnis/composer-normalize

ergebnis/composer-normalize normalizes composer.json, so you don't have to.
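The core of normalization is deterministic output: parse, then re-emit with a fixed key order and consistent indentation. A toy version of the idea in Go (the real plugin additionally applies schema-aware ordering and normalizes version constraints):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// normalize re-encodes a composer.json document so keys come out in a
// deterministic (sorted) order with consistent indentation.
func normalize(raw []byte) ([]byte, error) {
	var doc map[string]interface{}
	if err := json.Unmarshal(raw, &doc); err != nil {
		return nil, err
	}
	// encoding/json writes map keys in sorted order.
	return json.MarshalIndent(doc, "", "  ")
}

func main() {
	in := []byte(`{"require":{"php":">=8.0"},"name":"acme/app"}`)
	out, err := normalize(in)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // "name" now precedes "require"
}
```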

💡 If you want to find out more, take a look at the examples and read this blog post.




composer require --dev ergebnis/composer-normalize

to install ergebnis/composer-normalize as a composer plugin.


composer config allow-plugins.ergebnis/composer-normalize true

to allow ergebnis/composer-normalize to run as a composer plugin.

💡 The allow-plugins setting has been added to composer/composer as an extra layer of security.

For reference, see


Head over to and download the latest composer-normalize.phar.


chmod +x composer-normalize.phar

to make the downloaded composer-normalize.phar executable.



phive install ergebnis/composer-normalize

to install ergebnis/composer-normalize with PHIVE.




composer normalize

to normalize composer.json in the working directory.





View on Github

4 - Composer Patches:

A plugin for Composer to apply patches.

Simple patches plugin for Composer. Applies a patch from a local or remote file to any package required with composer.

Support notes

  • If you need PHP 5.3, 5.4, or 5.5 support, you should probably use a 1.x release.
  • 1.x is mostly unsupported, but bugfixes and security fixes will still be accepted. 1.7.0 will be the last minor release in the 1.x series.
  • Beginning in 2.x, the automated tests will not allow us to use language features that will cause syntax errors in PHP 5.6 and later. The unit/acceptance tests do not run on anything earlier than PHP 7.1, so while pull requests will be accepted for those versions, support is on a best-effort basis.


Before you begin, make sure that patch is installed on your system.

Example composer.json:

{
  "require": {
    "cweagans/composer-patches": "~1.0",
    "drupal/core-recommended": "^8.8"
  },
  "config": {
    "preferred-install": "source"
  },
  "extra": {
    "patches": {
      "drupal/core": {
        "Add startup configuration for PHP server": ""
      }
    }
  }
}

Using an external patch file

Instead of a patches key in your root composer.json, use a patches-file key.

{
  "require": {
    "cweagans/composer-patches": "~1.0",
    "drupal/core-recommended": "^8.8"
  },
  "config": {
    "preferred-install": "source"
  },
  "extra": {
    "patches-file": "local/path/to/your/composer.patches.json"
  }
}

Then your composer.patches.json should look like this:

{
  "patches": {
    "vendor/project": {
      "Patch title": ""
    }
  }
}

Allowing patches to be applied from dependencies

If you want your project to accept patches from dependencies, you must have the following in your composer file:

{
  "require": {
    "cweagans/composer-patches": "^1.5.0"
  },
  "extra": {
    "enable-patching": true
  }
}

View on Github

5 - Composer Require Checker:

CLI tool to analyze composer dependencies and verify that no unknown symbols are used in the sources of a package.

A CLI tool to analyze composer dependencies and verify that no unknown symbols are used in the sources of a package. This will prevent you from using "soft" dependencies that are not defined within your composer.json require section.

What's it about?

"Soft" (or transitive) dependencies are code that you did not explicitly define to be there but use it nonetheless. The opposite is a "hard" (or direct) dependency.

Your code most certainly uses external dependencies. Imagine that you found a library to access a remote API. You require thatvendor/api-lib for your software and use it in your code. This library is a hard dependency.

Then you see that another remote API is available, but no library exists. The use case is simple, so you look around and find that guzzlehttp/guzzle (or any other HTTP client library) is already installed, and you use it right away to fetch some info. Guzzle just became a soft dependency.

Then someday, when you update your dependencies, your access to the second API breaks. Why? Turns out that the reason guzzlehttp/guzzle was installed is that it is a dependency of thatvendor/api-lib you included, and their developers decided to update from an earlier major version to the latest and greatest, simply stating in their changelog: "Version 3.1.0 uses the latest major version of Guzzle - no breaking changes expected."

And you think: What about my broken code?

ComposerRequireChecker parses your code and your composer.json-file to see whether your code uses symbols that are not declared as a required library, i.e. that are soft dependencies. If you rely on components that are already installed but didn't explicitly request them, this tool will complain about them and you should require them explicitly, making them hard dependencies. This will prevent unexpected updates.

In the situation above you wouldn't get the latest update of thatvendor/api-lib, but your code would continue to work if you also required guzzlehttp/guzzle before the update.

The tool will also check for usage of PHP functions that are only available if an extension is installed, and will complain if that extension isn't explicitly required.
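The core check can be sketched in Go. This is a simplification: real symbol extraction means parsing the PHP sources, which is elided here, and the symbol names are illustrative:

```go
package main

import "fmt"

// unknownSymbols returns every symbol used in the sources that no
// declared dependency provides -- the "soft dependency" smell the
// checker reports.
func unknownSymbols(used []string, provided map[string]bool) []string {
	var missing []string
	for _, sym := range used {
		if !provided[sym] {
			missing = append(missing, sym)
		}
	}
	return missing
}

func main() {
	// Symbols found in the sources (would come from parsing the PHP AST).
	used := []string{`GuzzleHttp\Client`, `ThatVendor\ApiLib\Client`}
	// Symbols provided by packages listed in composer.json's require section.
	provided := map[string]bool{`ThatVendor\ApiLib\Client`: true}
	// Guzzle is used but undeclared, so it gets reported.
	fmt.Println(unknownSymbols(used, provided))
}
```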

Installation / Usage

ComposerRequireChecker is not supposed to be installed as part of your project dependencies.

PHAR file [preferred]

Please check the releases for available PHAR files. Download the latest release and run it like this:

php composer-require-checker.phar check /path/to/your/project/composer.json


If you already use PHIVE to install and manage your project’s tooling, then you should be able to simply install ComposerRequireChecker like this:

phive install composer-require-checker

Composer - global command

This package can be easily globally installed by using Composer:

composer global require maglnet/composer-require-checker

If you haven't already set up your Composer installation to support global requirements, please refer to the Composer CLI documentation for the global command. If this is already done, run it like this:

composer-require-checker check /path/to/your/project/composer.json

A note about Xdebug

If your PHP is including Xdebug when running ComposerRequireChecker, you may experience additional issues like exceeding the Xdebug-related max-nesting-level - and on top, Xdebug slows PHP down.

It is recommended to run ComposerRequireChecker without Xdebug.

If you cannot provide a PHP instance without Xdebug yourself, try setting an environment variable like this for just the command: XDEBUG_MODE=off php composer-require-checker.

View on Github

6 - Composer Unused:

A CLI Tool to scan for unused composer packages.


When working in a big repository, you sometimes lose track of your required Composer packages. There may be so many packages you can't be sure if they are actually used or not.

Unfortunately, the composer why command only gives you the information about why a package is installed in dependency to another package.

How do we check whether the provided symbols of a package are used in our code?

composer unused to the rescue!
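Conceptually, the check inverts the require-checker above: instead of used symbols with no declaring package, it looks for declared packages with no used symbols. A rough sketch in Go (package and symbol names are illustrative; the real tool derives each package's provided symbols from its autoload configuration):

```go
package main

import "fmt"

// unusedPackages reports declared packages none of whose symbols appear
// in the scanned sources.
func unusedPackages(provides map[string][]string, usedSymbols map[string]bool) []string {
	var unused []string
	for pkg, symbols := range provides {
		used := false
		for _, s := range symbols {
			if usedSymbols[s] {
				used = true
				break
			}
		}
		if !used {
			unused = append(unused, pkg)
		}
	}
	return unused
}

func main() {
	provides := map[string][]string{
		"monolog/monolog": {`Monolog\Logger`},
		"symfony/console": {`Symfony\Component\Console\Application`},
	}
	used := map[string]bool{`Monolog\Logger`: true}
	// symfony/console is declared but never referenced.
	fmt.Println(unusedPackages(provides, used))
}
```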


⚠️ This tool heavily depends on certain versions of its dependencies. A local installation of this tool is not recommended, as it might not work as intended or might not install correctly. We recommend downloading the .phar archive or using PHIVE to install it instead.

PHAR (PHP Archive) (recommended)

Install via phive or grab the latest composer-unused.phar from the latest release:

phive install composer-unused
curl -OL


You can also install composer-unused as a local development dependency:

composer require --dev icanhazstring/composer-unused


Depending on how you installed composer-unused, the command might differ.

Note: Packages must be installed via composer install or composer update prior to running composer-unused.


The phar archive can be run directly in your project:

php composer-unused.phar


Having composer-unused as a local dependency you can run it using the shipped binary:


Exclude folders and packages

Sometimes you don't want to scan a certain directory or ignore a Composer package while scanning. In these cases, you can provide the --excludeDir or the --excludePackage option. These options accept multiple values as shown next:

php composer-unused.phar --excludeDir=config --excludePackage=symfony/console
php composer-unused.phar \
    --excludeDir=bin \
    --excludeDir=config \
    --excludePackage=symfony/assets

Make sure the package is named exactly as in your composer.json

View on Github

7 - Prestissimo:

A composer plugin which enables parallel install process.


  • composer >=1.0.0 <2.0
  • PHP >=5.3 (>=5.5 suggested, because of curl_share_init)
  • ext-curl

Install, Updating & Upgrading

$ composer global require hirak/prestissimo


$ composer global remove hirak/prestissimo

Benchmark Example

288s -> 26s

$ composer create-project laravel/laravel laravel1 --no-progress --profile --prefer-dist
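The speedup comes from downloading packages concurrently rather than one at a time. A minimal Go sketch of that idea (fetch here is a stand-in for the real HTTP download):

```go
package main

import (
	"fmt"
	"sync"
)

// fetchAll downloads all packages concurrently, one goroutine per
// package, and preserves the input order in the results.
func fetchAll(pkgs []string, fetch func(string) string) []string {
	results := make([]string, len(pkgs))
	var wg sync.WaitGroup
	for i, p := range pkgs {
		wg.Add(1)
		go func(i int, p string) {
			defer wg.Done()
			results[i] = fetch(p) // downloads overlap instead of queueing
		}(i, p)
	}
	wg.Wait()
	return results
}

func main() {
	pkgs := []string{"laravel/framework", "symfony/console"}
	got := fetchAll(pkgs, func(p string) string { return "downloaded " + p })
	fmt.Println(got)
}
```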



prestissimo ^0.3.x

Recognize composer's options. You don't need to set any special configuration.

Composer authentication

To avoid Composer asking for authentication it is recommended to follow the procedure on composer's authentication.

You could also use an auth.json file with an OAuth access token, placed at the same level as your composer.json file:

{
    "github-oauth": {
        "github.com": "<oauth-token>"
    }
}

View on Github

8 - Satis:

A static Composer repository generator.

Run from source

Satis requires a recent PHP version; it does not run with unsupported PHP versions. Check the composer.json file for details.

  • Install satis: composer create-project composer/satis:dev-main
  • Build a repository: php bin/satis build <configuration-file> <output-directory>

Read the more detailed instructions in the documentation.

Run as Docker container

Pull the image:

docker pull composer/satis

Run the image (with Composer cache from host):

docker run --rm --init -it \
  --user $(id -u):$(id -g) \
  --volume $(pwd):/build \
  --volume "${COMPOSER_HOME:-$HOME/.composer}:/composer" \
  composer/satis build <configuration-file> <output-directory>

If you want to run the image without implicitly running Satis, you have to override the entrypoint specified in the Dockerfile:

--entrypoint /bin/sh


If you choose to archive packages as part of your build, over time you can be left with useless files. With the purge command, you can delete these files.

php bin/satis purge <configuration-file> <output-dir>

Note: don't do this unless you are certain your projects no longer reference any of these archives in their composer.lock files.

View on Github

9 - Tooly:

A library to manage PHAR files in project using Composer.

With tooly composer-script you can version needed PHAR files in your project's composer.json without adding them directly to a VCS:

  • to save disk space in the VCS repository
  • to be sure that all developers in your project get the required toolchain
  • to prepare a CI/CD System
  • (optional) to automatically check the GPG signature verification for each tool

Every PHAR file will be saved in the composer binary directory.


A real example can be found here.


  • PHP >= 5.6
  • Composer


To use the script execute the following command:

composer require --dev tm/tooly-composer-script

Then add the script in the composer.json under "scripts" with the event names you want to trigger. For example:

"scripts": {
    "post-install-cmd": "Tooly\\ScriptHandler::installPharTools",
    "post-update-cmd": "Tooly\\ScriptHandler::installPharTools"
}

Look here for more information about composer events.

Sample usage

The composer.json schema has an "extra" section which is used by the script. It's described here.

In this part you can add your needed phar tools under the key "tools".

"extra": {
    "tools": {
        "phpunit": {
            "url": "",
            "sign-url": ""
        },
        "phpcpd": {
            "url": "",
            "only-dev": true,
            "rename": true
        },
        "security-checker": {
            "url": "",
            "force-replace": true
        }
    }
}


url (required)

After you add the name of the tool as a key, you need only one further parameter: the "url". The url can be a link to a specific version, such as x.y.z, or a link to the latest version of the phar.

rename (optional, default false)

Rename the downloaded tool to the name that is used as key.

sign-url (optional, default none)

If this parameter is set, tooly checks whether the PHAR file from url has a valid signature by comparing it against the signature from sign-url.

This option is useful if you want to be sure that the tool is from the expected author.

Note: For the check you need a further requirement and a GPG binary in your $PATH variable.

You can add the requirement with this command: composer require tm/gpg-verifier

This check often fails if you don't have the tool author's public key in your GPG keychain.

View on Github

Thank you for following this article.

Related videos:

The Most Popular PHP Frameworks to Use in 2022

#php #dependency #management 

10 Best PHP Libraries for Extras Related to Dependency Management

10 Popular Go Libraries for Package & Dependency Management

In today's post we will learn about 10 Popular Go Libraries for Package & Dependency Management.

What is dependency management?

Dependency management is the process of managing all of these interrelated tasks and resources to ensure that your overall project completes successfully, on time, and on budget. When there are dependencies that need to be managed between projects, it's referred to as project interdependency management.

Table of contents:

  • Glide - Manage your golang vendor and vendored packages with ease. Inspired by tools like Maven, Bundler, and Pip.
  • Godep - dependency tool for go, godep helps build packages reproducibly by fixing their dependencies.
  • Gom - Go Manager - bundle for go.
  • Goop - Simple dependency manager for Go (golang), inspired by Bundler.
  • Gop - Build and manage your Go applications out of GOPATH.
  • Gopm - Go Package Manager.
  • MVN-golang - Plugin that provides way for auto-loading of Golang SDK, dependency management and start build environment in Maven project infrastructure.
  • Gpm - Barebones dependency manager for Go.
  • Johnny-deps - Minimal dependency version using Git.
  • Modgv - Converts 'go mod graph' output into Graphviz's DOT language.

1 - Glide:

Manage your golang vendor and vendored packages with ease. Inspired by tools like Maven, Bundler, and Pip.

Are you used to tools such as Cargo, npm, Composer, Nuget, Pip, Maven, Bundler, or other modern package managers? If so, Glide is the comparable Go tool.

Manage your vendor and vendored packages with ease. Glide is a tool for managing the vendor directory within a Go package. This feature, first introduced in Go 1.5, allows each package to have a vendor directory containing dependent packages for the project. These vendor packages can be installed by a tool (e.g. glide), similar to go get or they can be vendored and distributed with the package.

Go Modules

The Go community is now using Go Modules to handle dependencies. Please consider using that instead of Glide. Glide is now mostly unmaintained.


  • Ease dependency management
  • Support versioning packages including Semantic Versioning 2.0.0 support. Any constraint the package can parse can be used.
  • Support aliasing packages (e.g. for working with github forks)
  • Remove the need for munging import statements
  • Work with all of the go tools
  • Support the VCS tools that Go supports:
    • git
    • bzr
    • hg
    • svn
  • Support custom local and global plugins (see docs/
  • Repository caching and data caching for improved performance.
  • Flatten dependencies resolving version differences and avoiding the inclusion of a package multiple times.
  • Manage and install dependencies on-demand or vendored in your version control system.

How It Works

Glide scans the source code of your application or library to determine the needed dependencies. To determine the versions and locations (such as aliases for forks) Glide reads a glide.yaml file with the rules. With this information Glide retrieves needed dependencies.

When a dependent package is encountered its imports are scanned to determine dependencies of dependencies (transitive dependencies). If the dependent project contains a glide.yaml file that information is used to help determine the dependency rules when fetching from a location or version to use. Configuration from Godep, GB, GOM, and GPM is also imported.

The dependencies are exported to the vendor/ directory where the go tools can find and use them. A glide.lock file is generated containing all the dependencies, including transitive ones.

The glide init command can be used to set up a new project, glide update regenerates the dependency versions using scanning and rules, and glide install will install the versions listed in the glide.lock file, skipping scanning, unless the glide.lock file is not found, in which case it will perform an update.
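The install-versus-update decision can be sketched in Go (function and variable names are mine, not Glide's):

```go
package main

import "fmt"

// pinnedOrResolved captures glide install's decision: if a glide.lock is
// present, use its pinned versions as-is; otherwise fall back to a full
// update (scan the code and resolve versions from glide.yaml rules).
func pinnedOrResolved(lock map[string]string, resolve func() map[string]string) map[string]string {
	if lock != nil {
		return lock
	}
	return resolve()
}

func main() {
	resolve := func() map[string]string {
		return map[string]string{"github.com/Masterminds/semver": "v1.5.0"}
	}

	// With a lock file, the pinned version wins; resolve is never called.
	lock := map[string]string{"github.com/Masterminds/semver": "v1.4.2"}
	fmt.Println(pinnedOrResolved(lock, resolve)["github.com/Masterminds/semver"]) // v1.4.2

	// Without one, glide performs a full update.
	fmt.Println(pinnedOrResolved(nil, resolve)["github.com/Masterminds/semver"]) // v1.5.0
}
```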

A project is structured like this:

- $GOPATH/src/myProject (Your project)
  |-- glide.yaml
  |-- glide.lock
  |-- main.go (Your main go code can live here)
  |-- mySubpackage (You can create your own subpackages, too)
  |    |
  |    |-- foo.go
  |-- vendor
            |-- Masterminds
                  |-- ... etc.

Take a look at the Glide source code to see this philosophy in action.


The easiest way to install the latest release on Mac or Linux is with the following script:

curl | sh

On Mac OS X you can also install the latest release via Homebrew:

$ brew install glide

On Ubuntu Precise (12.04), Trusty (14.04), Wily (15.10) or Xenial (16.04) you can install from our PPA:

sudo add-apt-repository ppa:masterminds/glide && sudo apt-get update
sudo apt-get install glide

On Ubuntu Zesty (17.04) the package is called golang-glide.

Binary packages are available for Mac, Linux and Windows.

For a development version it is also possible to go get

To build from source you can:

  1. Clone this repository into $GOPATH/src/ and change directory into it
  2. If you are using Go 1.5 ensure the environment variable GO15VENDOREXPERIMENT is set, for example by running export GO15VENDOREXPERIMENT=1. In Go 1.6 it is enabled by default and in Go 1.7 it is always enabled without the ability to turn it off.
  3. Run make build

This will leave you with ./glide, which you can put in your $PATH if you'd like. (You can also take a look at make install to install for you.)

The Glide repo has now been configured to use glide to manage itself, too.

View on Github

2 - Godep:

A dependency tool for Go; godep helps build packages reproducibly by fixing their dependencies.

Golang Dep

The Go community now has the dep project to manage dependencies. Please consider trying to migrate from Godep to dep. If there is an issue preventing you from migrating please file an issue with dep so the problem can be corrected. Godep will continue to be supported for some time but is considered to be in a state of support rather than active feature development.


go get

How to use godep with a new project

Assuming you've got everything working already, so you can build your project with go install and test it with go test, it's one command to start using:

godep save

This will save a list of dependencies to the file Godeps/Godeps.json and copy their source code into vendor/ (or Godeps/_workspace/ when using older versions of Go). Godep does not copy:

  • files from source repositories that are not tracked in version control.
  • *_test.go files.
  • testdata directories.
  • files outside of the go packages.

Godep does not process the imports of .go files with either the ignore or appengine build tags.

Test files and testdata directories can be saved by adding -t.

Read over the contents of vendor/ and make sure it looks reasonable. Then commit the Godeps/ and vendor/ directories to version control.
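The saved metadata can be pictured as a small Go struct (the field set is simplified from the real Godeps.json format, and the paths and revision below are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Godeps mirrors the shape of Godeps/Godeps.json: the project's import
// path plus a pinned revision for every saved dependency.
type Godeps struct {
	ImportPath string
	GoVersion  string
	Deps       []Dependency
}

// Dependency pins one package to the VCS revision its source was copied at.
type Dependency struct {
	ImportPath string
	Rev        string
}

func main() {
	g := Godeps{
		ImportPath: "example.com/myproject",
		GoVersion:  "go1.6",
		Deps: []Dependency{
			{ImportPath: "example.com/somevendor/somelib", Rev: "0123abcd"},
		},
	}
	out, err := json.MarshalIndent(g, "", "\t")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```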

The deprecated -r flag

For older versions of Go, the -r flag tells save to automatically rewrite package import paths. This allows your code to refer directly to the copied dependencies in Godeps/_workspace. So, a package C that depends on package D will actually import C/Godeps/_workspace/src/D. This makes C's repo self-contained and causes go get to build C with the right version of all dependencies.

If you don't use -r, when using older version of Go, then in order to use the fixed dependencies and get reproducible builds, you must make sure that every time you run a Go-related command, you wrap it in one of these two ways:

  • If the command you are running is just go, run it as godep go ..., e.g. godep go install -v ./...
  • When using a different command, set your $GOPATH using godep path as described below.

-r isn't necessary with go1.6+ and isn't allowed.

View on Github

3 - Gom: 

Go Manager - bundle for go.


The go get command is useful, but we want to fix the problem of package versions drifting with every update. Are you going to run go get -tags=1.1 ... or go get -tag=0.3 for each of them? We want to freeze package versions. Ruby's Bundler is awesome.


go get


gom '', :tag => 'go1'
gom '', :commit => 'ecb144fb1f2848a24ebfdadf8e64380406d87206'
gom ''
gom '', :goos => 'windows'

# Execute only in the "test" environment.
group :test do
    gom ''
end

# Execute only for the "custom_group" group.
group :custom_group do
    gom ''
end

By default gom install installs all packages except those in the listed groups. You can install packages from groups based on the environment using flags (development, test & production): gom -test install

Custom groups may be specified using the -groups flag: gom -test -groups=custom_group,special install


Create _vendor directory and bundle packages into it

gom install

Build on current directory with _vendor packages

gom build

Run tests on current directory with _vendor packages

gom test

Generate .travis.yml that uses gom test

gom gen travis-yml

You can always change the name relative to the current $GOPATH directory using an environment variable: GOM_VENDOR_NAME

$ # to use a regular $GOPATH/src folder you should specify GOM_VENDOR_NAME equal '.'
$ GOM_VENDOR_NAME=. gom <command>

View on Github

4 - Goop: 

Simple dependency manager for Go (golang), inspired by Bundler.

A dependency manager for Go (golang), inspired by Bundler. It is different from other dependency managers in that it does not force you to mess with your GOPATH.

Getting Started

Install Goop: go get

Create Goopfile. Revision reference (e.g. Git SHA hash) is optional, but recommended. Prefix hash with #. (This is to futureproof the file format.)

Example: #14f550f51af52180c2eefed15e5fd18d63c0a64a #v1.0.1 // comment ! // override repo url

Run goop install. This will install packages inside a subdirectory called .vendor and create Goopfile.lock, recording exact versions used for each package and its dependencies. Subsequent goop install runs will ignore Goopfile and install the versions specified in Goopfile.lock. You should check this file in to your source version control. It's a good idea to add .vendor to your version control system's ignore settings (e.g. .gitignore).

Run commands using goop exec (e.g. goop exec make). This will execute your command in an environment that has correct GOPATH and PATH set.

Go commands can be run without the exec keyword (e.g. goop go test).

Other commands

Run goop update to ignore an existing Goopfile.lock, and update to latest versions of packages (as specified in Goopfile).

Running eval $(goop env) will modify GOPATH and PATH in current shell session, allowing you to run commands without goop exec.


Goop currently only supports Git and Mercurial. This should be fine for 99% of the cases, but you are more than welcome to make a pull request that adds support for Subversion and Bazaar.

View on Github

5 - Gop:

Build and manage your Go applications out of GOPATH.

GOP is a project management tool for building your Go applications outside of the global GOPATH. In fact, gop keeps both the global GOPATH and a per-project GOPATH, but that means your project will not be go-getable. Of course, GOP itself is go-getable. GOP copies all dependencies from the global GOPATH to your project's src/vendor directory, and all of the application's sources also live in the src directory.

A normal process using gop is below:

git clone
cd aaa
gop ensure -g
gop build
gop test


  • GOPATH compatible
  • Multiple build targets support
  • Put your projects out of global GOPATH


Please ensure you have installed the go command; GOP will invoke it when building or testing.

go get

Directory structure

Every project should have a GOPATH directory structure with a gop.yml in the root directory. This is an example project's directory tree.

<project root>
├── gop.yml
├── bin
├── doc
└── src
    ├── main
    │   └── main.go
    ├── models
    │   └── models.go
    ├── routes
    │   └── routes.go
    └── vendor
            ├── go-xorm
            │   ├── builder
            │   ├── core
            │   └── xorm
            └── lunny
                ├── log
                └── tango

View on Github

6 - Gopm:

Go Package Manager.

Gopm (Go Package Manager) is a package management and build tool for Go.


  • Go development environment: >= go1.2


Install from source code

go get -u

The executable will be produced under $GOPATH/bin in your file system; for global use, we recommend adding this path to your PATH environment variable.


  • No requirement to install any version control system tool such as git or hg in order to download packages.
  • Download, install or build your packages with specific revisions.
  • When building programs with gopm build or gopm install, everything happens in its own GOPATH and does not disturb anything you've done (unless you tell it to).
  • Can put your Go projects anywhere you want (through .gopmfile).


   Gopm - Go Package Manager

   Gopm [global options] command [command options] [arguments...]

   list		list all dependencies of current project
   gen		generate a gopmfile for current Go project
   get		fetch remote package(s) and dependencies
   bin		download and link dependencies and build binary
   config	configure gopm settings
   run		link dependencies and go run
   test		link dependencies and go test
   build	link dependencies and go build
   install	link dependencies and go install
   clean	clean all temporary files
   update	check and update gopm resources including itself
   help, h	Shows a list of commands or help for one command

   --noterm, -n		disable color output
   --strict, -s		strict mode
   --debug, -d		debug mode
   --help, -h		show help
   --version, -v	print the version

View on Github

7 - MVN-golang: 

Plugin that provides way for auto-loading of Golang SDK, dependency management and start build environment in Maven project infrastructure.

GO start!

Taste Go in just two commands!

mvn archetype:generate -B -DarchetypeGroupId=com.igormaznitsa -DarchetypeArtifactId=mvn-golang-hello -DarchetypeVersion=2.3.10 -DgroupId=com.go.test -DartifactId=gohello -Dversion=1.0-SNAPSHOT
mvn -f ./gohello/pom.xml package

The first command in the snippet above generates a Maven project with some test files, and the second command builds the project. You can also take a look at the example Hello world project using the plugin.

If you want to generate a multi-module project, you can use this snippet:

mvn archetype:generate -B -DarchetypeGroupId=com.igormaznitsa -DarchetypeArtifactId=mvn-golang-hello-multi -DarchetypeVersion=2.3.10 -DgroupId=com.go.test -DartifactId=gohello-multi -Dversion=1.0-SNAPSHOT


The plug-in wraps the Golang tool-chain and allows you to use the strong Maven-based infrastructure to build Golang projects. It can also automatically download the needed Golang SDK from the main server and pin packages to a specific branch, tag or revision. Because such a Golang project is formed as a regular Maven project, it is possible to work with it in any Java IDE that supports Maven.

How it works

On start, the plug-in performs the steps below:

  • analyzes the current platform to generate the needed distribution name (it can also be defined directly through properties)
  • checks whether the needed Golang SDK is already cached; if not, the SDK is downloaded and unpacked from the main Golang SDK site
  • executes the needed Go tool bin/go with the defined command, with the source folder set as the current folder
  • archives all project folders visible to Maven (source folder, test folder, resource folders and test resource folders) into a ZIP file and saves it as a Maven artifact in the local (or remote) Maven repository. The generated ZIP artifact also contains dependency information that the plugin can recognize. Unlike Java, Golang produces platform-dependent artifacts, so it doesn't make sense to share them through Maven Central; only resources and sources are shared.

How to build

Because it is maven plugin, to build the plugin just use

mvn clean install -Pplugin

To save time, the examples are excluded from the main build process and activated through a special profile:

mvn clean install -Pexamples

View on Github

8 - Gpm:

Barebones dependency manager for Go.

Go Package Manager (or gpm, for short) is a tool that helps achieve reproducible builds for Go applications by specifying the revision of each external Go package that the application depends on.

Being simple and unobtrusive are among the most important design choices for gpm: go get already provides a way to fetch dependencies and relies on version control systems like Git to do it; gpm adds the additional step of setting each dependency repo to the desired revision. Neither Go nor your application even knows about any of this happening; it just works.

To achieve this, gpm uses a manifest file which by convention is called Godeps (although you can name it whatever you want). Running gpm fetches all dependencies and ensures each is set to the specified version, down to the revision level.

Basic usage

For a given project, running gpm in the directory containing the Godeps file is enough to make sure dependencies in the file are fetched and set to the correct revision.

However, if you share your GOPATH with other projects, running gpm each time can get old. My solution is to isolate dependencies by manipulating the GOPATH; see the workspaces section for details.

You can see gpm in action under this workflow in the following gif:

sample gpm usage

Installation options

In OSX with Homebrew

$ brew install gpm

In Arch Linux - AUR

$ yaourt -S go-gpm


$ packer -S go-gpm

Caveat: you'll use go-gpm instead of just gpm on the command line, as there is already a general-purpose Linux package under that name.

View on Github

9 - Johnny-deps:

Minimal dependency version using Git.

Johnny Deps is a small tool from VividCortex that provides minimalistic dependency versioning for Go repositories using Git. Its primary purpose is to help create reproducible builds when many import paths in various repositories are required to build an application. It's based on a Perl script that provides subcommands for retrieving or building a project, updating a dependencies file (called Godeps), and listing first-level imports for a project.

Getting started

Install Johnny Deps by cloning the project's Github repository and running the provided scripts, like this:

git clone
cd johnny-deps
./configure --prefix=/your/path
make install

The --prefix option to configure is not mandatory; it defaults to /usr/local if not provided (but you'd have to install as root in that case). The binary will end up in the bin subdirectory under the prefix you choose; make sure that location is in your PATH.

Note that Perl is required, although your system probably provides it already. So are Go, Git and (if you're using makefiles) Make.


Johnny Deps is all about project dependencies. Each project should have a file called Godeps at its root, listing the full set of first-level dependencies; i.e., all repositories with Go packages imported directly by this project. The file may be omitted when empty and looks like this:

2fdf3f9fa715a998e834f09e07a8070d9046bcfd
1ffbbe58b5cf1bcfd7a80059dd339764cc1e3bff
f82b14f1073afd7cb41fc8eb52673d78f481922e

The first column identifies the dependency. The second is the commit identifier for the exact revision the current project depends upon. You can use any identifier Git would accept to checkout, including abbreviated commits, tags and branches. Note, however, that the use of branches is discouraged, because it leads to non-reproducible builds as the tip of the branch moves forward.
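To make the two-column layout concrete, here is a hypothetical Godeps file; the import paths and revisions below are made up for illustration:

```
github.com/example/log      v1.0.2
github.com/example/tango    2fdf3f9
github.com/example/builder  f82b14f1073afd7cb41fc8eb52673d78f481922e
```

Each line names a repository and pins it to a tag, an abbreviated commit, or a full commit hash.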

Introducing the tool

jd is Johnny Deps' main binary. It's a command-line tool to retrieve projects from Github, check dependencies, reposition local working copies according to each project's settings, and build and update projects. It accepts subcommands much like go or git do:

jd [global-options] [command] [options] [project]

Global options apply to all commands. Some allow you to change the external tools that are used (go, git and make) in case you don't have them in your path or want to use a different version. There's also a -v option to increase verbosity, which you can provide twice for extra effect. (Note that the tool runs silently by default, only displaying errors, if any.)

It's worth noting that all parameters are optional. If you don't specify a command, it defaults to build (see "Building" below). If you don't specify a project, jd will try to infer it from your current working path and your GOPATH setting. If you're in a subdirectory of any of the GOPATH components, and you're also in a Git working tree, jd will happily fill in the project for you.

When in doubt, check jd help.

View on Github

10 - Modgv:

Converts 'go mod graph' output into Graphviz's DOT language.


  • takes no options or arguments
  • reads the output generated by “go mod graph” on stdin
  • generates DOT output and writes it to stdout


go mod graph | modgv | dot -Tpng -o graph.png

For each module:

  • the node representing the greatest version (i.e., the version chosen by Go's MVS algorithm) is colored green
  • other nodes, which aren't in the final build list, are colored grey


go get

Here is how to install GraphViz for your OS.

Sample output (PNG)

go mod graph | modgv | dot -Tpng -o graph.png

Sample output (PDF with clickable links to module docs)

go mod graph | modgv | dot -Tps2 -o
ps2pdf graph.pdf

View on Github

Thank you for following this article.

Related videos:

Learning Golang: Dependencies, Modules and How to manage Packages

#go #golang #dependency #management 

10 Popular Go Libraries for Package & Dependency Management

10 Best Golang Libraries for Working with Dependency injection

In today's post we will learn about 10 Best Golang Libraries for Working with Dependency injection.

What is Dependency injection?

Dependency injection basically means providing the objects that an object needs (its dependencies) instead of having it construct them itself.
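The idea can be shown without any framework. In this sketch (all names are illustrative), the Service receives its Store through a constructor instead of creating one itself, so a test or another program can swap in a different implementation:

```go
package main

import "fmt"

// Store is the dependency; the service only knows this interface.
type Store interface {
	Get(key string) string
}

// memStore is one concrete implementation.
type memStore struct{ data map[string]string }

func (m memStore) Get(key string) string { return m.data[key] }

// Service receives its Store from the outside (constructor injection)
// instead of constructing one internally.
type Service struct{ store Store }

func NewService(s Store) *Service { return &Service{store: s} }

func (s *Service) Greet(user string) string {
	return "hello, " + s.store.Get(user)
}

func main() {
	// The caller wires the dependency in.
	svc := NewService(memStore{data: map[string]string{"u1": "Alice"}})
	fmt.Println(svc.Greet("u1")) // hello, Alice
}
```

A DI container automates exactly this wiring step when the object graph gets large.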

Table of contents:

  • Alice - Additive dependency injection container for Golang.
  • Di - A dependency injection container for go programming language.
  • Dig - A reflection based dependency injection toolkit for Go.
  • Dingo - A dependency injection toolkit for Go, based on Guice.
  • Do - A dependency injection framework based on Generics.
  • Fx - A dependency injection based application framework for Go (built on top of dig).
  • Gocontainer - Simple Dependency Injection Container.
  • Goioc/di - Spring-inspired Dependency Injection Container.
  • GoLobby/Container - GoLobby Container is a lightweight yet powerful IoC dependency injection container for the Go programming language.
  • Google/wire - Automated Initialization in Go.

1 - Alice: Additive dependency injection container for Golang.

Alice is an additive dependency injection container for Golang.


Design philosophy behind Alice:

  • The application components should not be aware of the existence of a DI container.
  • Use static Go files to define the object graph.
  • Developer has the freedom to choose the way to initialize objects.


$ go get


Alice is inspired by the design of Spring JavaConfig.

It usually takes 3 steps to use Alice.

Define modules

The instances to be managed by the container are defined in modules. There could be multiple modules organized by the functionality of the instances. Modules are usually placed in a separate package.

A typical module looks like this:

type ExampleModule struct {
    alice.BaseModule
    Foo Foo `alice:""`
    Bar Bar `alice:"Bar"`
    Baz Baz
}

func (m *ExampleModule) InstanceX() X {
    return X{m.Foo}
}

func (m *ExampleModule) InstanceY() Y {
    return Y{m.Baz}
}

A module struct must embed the alice.BaseModule struct. It allows 3 types of fields:

  • Field tagged by alice:"". It will be associated with the same or assignable type of instance defined in other modules.
  • Field tagged by alice:"Bar". It will be associated with the instance named Bar defined in other modules.
  • Field without alice tag. It will not be associated with any instance defined in other modules. It is expected to be provided when initializing the module. It is not managed by the container and could not be retrieved.

It is also common that no field is defined in a module struct.

Any public method of the module struct defines one instance to be initialized and maintained by the container. It is required to use a pointer receiver. The method name is used as the instance name, and the return type as the instance type. Inside the method, any field of the module struct may be used to create new instances.

View on Github

2 - Di: A dependency injection container for go programming language.

Dependency injection for Go programming language.

Dependency injection is one form of the broader technique of inversion of control. It is used to increase modularity of the program and make it extensible.

This library helps you to organize responsibilities in your codebase and make it easy to combine low-level implementation into high-level behavior without boilerplate.


go get

What it looks like

package main

import (
	"log"
)

func main() {
	// create container
	c, err := di.New(
		di.Provide(NewContext),  // provide application context
		di.Provide(NewServer),   // provide http server
		di.Provide(NewServeMux), // provide http serve mux
		// controllers as []Controller group
		di.Provide(NewOrderController, di.As(new(Controller))),
		di.Provide(NewUserController, di.As(new(Controller))),
	)
	// handle container errors
	if err != nil {
		log.Fatal(err)
	}
	// invoke function
	if err := c.Invoke(StartServer); err != nil {
		log.Fatal(err)
	}
}

Full code available here.


If you have any questions, feel free to create an issue.

View on Github

3 - Dig: A reflection based dependency injection toolkit for Go.

Good for:

  • Powering an application framework, e.g. Fx.
  • Resolving the object graph during process startup.

Bad for:

  • Using in place of an application framework, e.g. Fx.
  • Resolving dependencies after the process has already started.
  • Exposing to user-land code as a Service Locator.


We recommend consuming SemVer major version 1 using your dependency manager of choice.

$ glide get '^1'
$ dep ensure -add ""
$ go get ''


This library is v1 and follows SemVer strictly.

No breaking changes will be made to exported APIs before v2.0.0.

View on Github

4 - Dingo: A dependency injection toolkit for Go, based on Guice.

Dingo works very similarly to Guice.

Basically one binds implementations/factories to interfaces, which are then resolved by Dingo.

Given that Dingo's idea is based on Guice we use similar examples in this documentation:

The following example shows a BillingService with two injected dependencies. Please note that Go's nature does not allow constructors, and does not allow decorations/annotations besides struct tags; thus we only use struct tags (and later arguments for providers).

Also, since Go does not have a way to reference types (like Java's Something.class), we use either pointers or nil cast to a pointer of the interface we want to specify: (*Something)(nil). Dingo then knows how to dereference it properly and derive the correct type Something. This is not necessary for structs, where we can just use the zero value via Something{}.

See the example folder for a complete example.

package example

type BillingService struct {
	processor      CreditCardProcessor
	transactionLog TransactionLog
}

func (billingservice *BillingService) Inject(processor CreditCardProcessor, transactionLog TransactionLog) {
	billingservice.processor = processor
	billingservice.transactionLog = transactionLog
}

func (billingservice *BillingService) ChargeOrder(order PizzaOrder, creditCard CreditCard) Receipt {
	// ...
}

We want the BillingService to get certain dependencies, and configure this in a BillingModule which implements dingo.Module:

package example

type BillingModule struct{}

func (module *BillingModule) Configure(injector *dingo.Injector) {
	// This tells Dingo that whenever it sees a dependency on a TransactionLog,
	// it should satisfy the dependency using a DatabaseTransactionLog.
	injector.Bind(new(TransactionLog)).To(DatabaseTransactionLog{})

	// Similarly, this binding tells Dingo that when CreditCardProcessor is used in
	// a dependency, that should be satisfied with a PaypalCreditCardProcessor.
	injector.Bind(new(CreditCardProcessor)).To(PaypalCreditCardProcessor{})
}

Requesting injection

Every instance that is created through the container can use injection.

Dingo supports two ways of requesting dependencies that should be injected:

  • usage of struct tags to allow structs to request injection into fields. This should be used for public fields.
  • implement a public Inject(...) method to request injections of private fields. Dingo calls this method automatically and passes the requested injections.

For every requested injection (unless an exception applies) Dingo does the following:

  • Is there a binding? If so: delegate to the binding
    • Is the binding in a certain scope (Singleton)? If so, delegate to scope (might result in a new loop)
    • Binding is bound to an instance: inject instance
    • Binding is bound to a provider: call provider
    • Binding is bound to a type: request injection of this type (might return in a new loop to resolve the binding)
  • No binding? Try to create (only possible for concrete types, not interfaces or functions)

View on Github

5 - Do: A dependency injection framework based on Generics.

A dependency injection toolkit based on Go 1.18+ Generics.

This library implements the Dependency Injection design pattern. It may replace the fantastic uber/dig package in simple Go projects. samber/do uses Go 1.18+ generics instead of reflection and is therefore typesafe.
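The type-safety point can be illustrated with a hand-rolled sketch; this is not samber/do's API, just the idea: because the invoke helper is generic, the compiler checks the requested type at the call site instead of relying on reflection.

```go
package main

import "fmt"

// A toy registry keyed by name; real containers key by type.
var services = map[string]any{}

// Provide stores any value under a name.
func Provide[T any](name string, value T) { services[name] = value }

// Invoke's type parameter makes the lookup typesafe: the caller
// gets a T back, not an untyped any that must be asserted.
func Invoke[T any](name string) (T, bool) {
	v, ok := services[name].(T)
	return v, ok
}

type Engine struct{ Power int }

func main() {
	Provide("engine", &Engine{Power: 90})

	// The compiler knows eng is *Engine; no reflection, no cast.
	eng, ok := Invoke[*Engine]("engine")
	fmt.Println(ok, eng.Power) // true 90
}
```

Requesting the wrong type simply fails the assertion instead of panicking deep inside a reflective container.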

Why this name?

I love short names for such utility libraries. This name is the sum of DI and Go, and no Go package currently uses this name.

💡 Features

  • Service registration
  • Service invocation
  • Service health check
  • Service shutdown
  • Service lifecycle hooks
  • Named or anonymous services
  • Eagerly or lazily loaded services
  • Dependency graph resolution
  • Default injector
  • Injector cloning
  • Service override

🚀 Services are loaded in invocation order.

🕵️ Service health can be checked individually or globally. Services implementing do.Healthcheckable interface will be called via do.HealthCheck[type]() or injector.HealthCheck().

🛑 Services can be shut down properly, in reverse of their initialization order. Services implementing the do.Shutdownable interface will be called via do.Shutdown[type]() or injector.Shutdown().

🚀 Install

go get

This library is v1 and follows SemVer strictly.

No breaking changes will be made to exported APIs before v2.0.0.

💡 Quick start

Import do, then instantiate your services:

func main() {
    injector := do.New()

    // provides CarService
    do.Provide(injector, NewCarService)

    // provides EngineService
    do.Provide(injector, NewEngineService)

    car := do.MustInvoke[*CarService](injector)
    car.Start()
    // prints "car starting"

    do.HealthCheck[EngineService](injector)
    // returns "engine broken"

    // injector.ShutdownOnSIGTERM()    // will block until receiving sigterm signal
    injector.Shutdown()
    // prints "car stopped"
}

type EngineService interface{}

func NewEngineService(i *do.Injector) (EngineService, error) {
    return &engineServiceImplem{}, nil
}

type engineServiceImplem struct{}

// [Optional] Implements do.Healthcheckable.
func (c *engineServiceImplem) HealthCheck() error {
	return fmt.Errorf("engine broken")
}

func NewCarService(i *do.Injector) (*CarService, error) {
    engine := do.MustInvoke[EngineService](i)
    car := CarService{Engine: engine}
    return &car, nil
}

type CarService struct {
	Engine EngineService
}

func (c *CarService) Start() {
	println("car starting")
}

// [Optional] Implements do.Shutdownable.
func (c *CarService) Shutdown() error {
	println("car stopped")
	return nil
}

View on Github

6 - Fx: A dependency injection based application framework for Go (built on top of dig).

An application framework for Go that:

  • Makes dependency injection easy.
  • Eliminates the need for global state and func init().


We recommend locking to SemVer range ^1 using go mod:

go get


This library is v1 and follows SemVer strictly.

No breaking changes will be made to exported APIs before v2.0.0.

This project follows the Go Release Policy. Each major version of Go is supported until there are two newer major releases.

View on Github

7 - Gocontainer: Simple Dependency Injection Container.

gocontainer - Dependency Injection Container


The first file, main.go, simply gets the repository from the container and prints it. We use the MustInvoke method to keep type safety:

package main

func main() {
    gocontainer.MustInvoke("repository.mysql", func(r Repository) {
        fmt.Println(r)
    })
}

Our database implementation uses an init() function to register the db service:

package database

import (


func NewDatabase() *sql.DB {
    db, _ := sql.Open("mysql", "dsn")

    return db
}

func init() {
    gocontainer.Register("db", NewDatabase())
}

Our repository accesses the earlier registered db service and, following the same pattern, uses an init() function to register the repository service within the container:

package repository

import (
    _ ""
)

type Repository interface{}

func NewRepository(db *sql.DB) Repository {
    return &mysqlRepository{db}
}

type mysqlRepository struct {
    db *sql.DB
}

func init() {
    db := gocontainer.MustGet("db")

    gocontainer.Register("repository.mysql", NewRepository(db.(*sql.DB)))
}

You can disable global container instance by setting gocontainer.GlobalContainer to nil. This package allows you to create many containers.

package main

func main() {
    // disable global container instance
    gocontainer.GlobalContainer = nil

    mycontainer := gocontainer.New()
    mycontainer.Register("test", 1)
}

Please check GoDoc for more methods and examples.

View on Github

8 - Goioc/di: Spring-inspired Dependency Injection Container.

Why DI in Go? Why IoC at all?

I've been using Dependency Injection in Java for nearly 10 years via Spring Framework. I'm not saying that one can't live without it, but it's proven to be very useful for large enterprise-level applications. You may argue that Go follows a completely different ideology, values different principles and paradigms than Java, and DI is not needed in this better world. And I can even partly agree with that. And yet I decided to create this light-weight Spring-like library for Go. You are free to not use it, after all 🙂

Is it the only DI library for Go?

No, of course not. There's a bunch of libraries around which serve a similar purpose (I even took inspiration from some of them). The problem is that I was missing something in all of these libraries... Therefore I decided to create Yet Another IoC Container that would rule them all. You are more than welcome to use any other library, for example this nice project. And still, I'd recommend stopping by here 😉

So, how does it work?

It's better to show than to describe. Take a look at this toy-example (error-handling is omitted to minimize code snippets):


package services

import (

type WeatherService struct {
}

func (ws *WeatherService) Weather(city string) (*string, error) {
	response, err := http.Get("" + city)
	if err != nil {
		return nil, err
	}
	all, err := ioutil.ReadAll(response.Body)
	if err != nil {
		return nil, err
	}
	weather := string(all)
	return &weather, nil
}


package controllers

import (

type WeatherController struct {
	// note that injection works even with unexported fields
	weatherService *services.WeatherService `di.inject:"weatherService"`
}

func (wc *WeatherController) Weather(w http.ResponseWriter, r *http.Request) {
	weather, _ := wc.weatherService.Weather(r.URL.Query().Get("city"))
	_, _ = w.Write([]byte(*weather))
}

View on Github

9 - GoLobby/Container: GoLobby Container is a lightweight yet powerful IoC dependency injection container for the Go programming language.

GoLobby Container is a lightweight yet powerful IoC (dependency injection) container for Go projects. It's built to be neat, easy to use, and performance-minded.


  • Singleton and Transient bindings
  • Named dependencies (bindings)
  • Resolve by functions, variables, and structs
  • Must helpers that convert errors to panics
  • Optional lazy loading of bindings
  • Global instance for small applications


Required Go Versions

It requires Go v1.11 or newer versions.


To install this package, run the following command in your project directory.

go get


GoLobby Container is used to bind abstractions to their implementations. Binding is the process of introducing appropriate concretes (implementations) of abstractions to an IoC container. In this process, you also determine the resolving type, singleton or transient. In singleton bindings, the container provides an instance once and returns it for all the requests. In transient bindings, the container always returns a brand-new instance for each request. After the binding process, you can ask the IoC container to make the appropriate implementation of the abstraction that your code needs. Then your code will depend on abstractions, not implementations!
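The difference between the two binding types can be sketched with a toy container; this is illustrative only, not GoLobby's real API:

```go
package main

import "fmt"

// resolver builds a concrete value for an abstraction.
type resolver func() any

type container struct {
	bindings map[string]resolver
}

func newContainer() *container { return &container{bindings: map[string]resolver{}} }

// Transient: the resolver runs on every request.
func (c *container) Transient(name string, r resolver) { c.bindings[name] = r }

// Singleton: the first instance is cached and reused for all requests.
func (c *container) Singleton(name string, r resolver) {
	var cached any
	c.bindings[name] = func() any {
		if cached == nil {
			cached = r()
		}
		return cached
	}
}

func (c *container) Resolve(name string) any { return c.bindings[name]() }

type config struct{ id int }

func main() {
	n := 0
	build := func() any { n++; return &config{id: n} }

	c := newContainer()
	c.Singleton("cfg.single", build)
	c.Transient("cfg.fresh", build)

	// Singleton: same instance for every request.
	fmt.Println(c.Resolve("cfg.single") == c.Resolve("cfg.single")) // true

	// Transient: a brand-new instance each time.
	fmt.Println(c.Resolve("cfg.fresh") == c.Resolve("cfg.fresh")) // false
}
```

GoLobby's real container adds the type-based lookup on top; the Quick Start below shows its actual API.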

Quick Start

The following example demonstrates a simple binding and resolving.

// Bind Config interface to JsonConfig struct
err := container.Singleton(func() Config {
    return &JsonConfig{...}
})

var c Config
err = container.Resolve(&c)
// `c` will be the instance of JsonConfig

Typed Binding


The following snippet expresses singleton binding.

err := container.Singleton(func() Abstraction {
  return Implementation
})

// If you might return an error...

err = container.Singleton(func() (Abstraction, error) {
  return Implementation, nil
})

It takes a resolver (function) whose return type is the abstraction and the function body returns the concrete (implementation).

View on Github

10 - Google/wire: Automated Initialization in Go.

Wire is a code generation tool that automates connecting components using dependency injection. Dependencies between components are represented in Wire as function parameters, encouraging explicit initialization instead of global variables. Because Wire operates without runtime state or reflection, code written to be used with Wire is useful even for hand-written initialization.
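Wire's model can be sketched in plain Go; the names here are illustrative, loosely modeled on Wire's tutorial, and InitializeEvent is hand-written to show the kind of function Wire would generate from a provider set:

```go
package main

import "fmt"

type Message string

type Greeter struct{ msg Message }

type Event struct{ g Greeter }

// Providers: each component's dependencies appear as function parameters.
func NewMessage() Message          { return Message("hi there!") }
func NewGreeter(m Message) Greeter { return Greeter{msg: m} }
func NewEvent(g Greeter) Event     { return Event{g: g} }

func (e Event) Start() string { return string(e.g.msg) }

// InitializeEvent plays the role of Wire's generated injector:
// a plain function that calls each provider in dependency order.
// Because there is no runtime container, this code works even
// if you never run the wire tool at all.
func InitializeEvent() Event {
	return NewEvent(NewGreeter(NewMessage()))
}

func main() {
	fmt.Println(InitializeEvent().Start()) // hi there!
}
```

Wire's contribution is writing and maintaining the InitializeEvent-style chain for you as the provider set grows.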


Install Wire by running:

go install

and ensuring that $GOPATH/bin is added to your $PATH.


Project status

As of version v0.3.0, Wire is beta and is considered feature complete. It works well for the tasks it was designed to perform, and we prefer to keep it as simple as possible.

We'll not be accepting new features at this time, but will gladly accept bug reports and fixes.

View on Github

Thank you for following this article.

Related videos:

Dependency Injection Best Practices with the Go Context package

#go #golang #dependency #injection 

10 Best Golang Libraries for Working with Dependency injection

A Powerful Dependency injection Micro Container for JavaScript Apps


A powerful dependency injection micro container


BottleJS is a tiny, powerful dependency injection container. It features lazy loading, middleware hooks, decorators and a clean api inspired by the AngularJS Module API and the simple PHP library Pimple. You'll like BottleJS if you enjoy:

  • building a stack from components rather than a kitchen-sink framework.
  • uncoupled objects and dependency injection.
  • an API that makes sense.
  • lazily loaded objects.
  • trying cool stuff :smile:

Browser Support

BottleJS supports IE9+ and other ECMAScript 5 compliant browsers.


BottleJS can be used in a browser or in a nodejs app. It can be installed via bower or npm:

$ bower install bottlejs
$ npm install bottlejs

BottleJS is also available on cdnjs:

<script src=""></script>

Simple Example

The simplest recipe to get started with is Bottle#service. Say you have a constructor for a service object:

var Beer = function() { /* A beer service, :yum: */ };

You can register the constructor with Bottle#service:

var bottle = new Bottle();
bottle.service('Beer', Beer);

Later, when you need the constructed service, you just access the Beer property like this:

bottle.container.Beer;
A lot happened behind the scenes:

  1. Bottle created a provider containing a factory function when you registered the Beer service.
  2. When the bottle.container.Beer property was accessed, Bottle looked up the provider and executed the factory to build and return the Beer service.
  3. The provider and factory were deleted, and the bottle.container.Beer property was set to be the Beer service instance. Accessing bottle.container.Beer in the future becomes a simple property lookup.

Injecting Dependencies

The above example is simple. But, what if the Beer service had dependencies? For example:

var Barley = function() {};
var Hops = function() {};
var Water = function() {};
var Beer = function(barley, hops, water) { /* A beer service, :yum: */ };

You can register services with Bottle#service and include dependencies like this:

var bottle = new Bottle();
bottle.service('Barley', Barley);
bottle.service('Hops', Hops);
bottle.service('Water', Water);
bottle.service('Beer', Beer, 'Barley', 'Hops', 'Water');

Now, when you access bottle.container.Beer, Bottle will lazily load all of the dependencies and inject them into your Beer service before returning it.

Service Factory

If you need more complex logic when generating a service, you can register a factory instead. A factory function receives the container as an argument, and should return your constructed service:

var bottle = new Bottle();
bottle.service('Barley', Barley);
bottle.service('Hops', Hops);
bottle.service('Water', Water);
bottle.factory('Beer', function(container) {
    var barley = container.Barley;
    var hops = container.Hops;
    var water = container.Water;

    return new Beer(barley, hops, water);
});

Service Provider

This is the meat of the Bottle library. The above methods Bottle#service and Bottle#factory are just shorthand for the provider function. You usually can get by with the simple functions above, but if you really need more granular control of your services in different environments, register them as a provider. To use it, pass a constructor for the provider that exposes a $get function. The $get function is used as a factory to build your service.

var bottle = new Bottle();
bottle.service('Barley', Barley);
bottle.service('Hops', Hops);
bottle.service('Water', Water);
bottle.provider('Beer', function() {
    // This environment may not support water.
    // We should polyfill it.
    if (waterNotSupported) {
        // ...polyfill water here...
    }

    // this is the service factory.
    this.$get = function(container) {
        var barley = container.Barley;
        var hops = container.Hops;
        var water = container.Water;

        return new Beer(barley, hops, water);
    };
});

Decorators

Bottle supports injecting decorators into the provider pipeline with the Bottle#decorator method. Bottle decorators are just simple functions that intercept a service in the provider phase after it has been created, but before it is accessed for the first time. The function should return the service, or another object to be used as the service instead.

var bottle = new Bottle();
bottle.service('Beer', Beer);
bottle.service('Wine', Wine);
bottle.decorator(function(service) {
    // this decorator will be run for both Beer and Wine services.
    return service;
});

bottle.decorator('Wine', function(wine) {
    // this decorator will only affect the Wine service.
    return wine;
});


Middleware

Bottle middleware are similar to decorators, but they are executed every time a service is accessed from the container. They are passed the service instance and a next function:

var bottle = new Bottle();
bottle.service('Beer', Beer);
bottle.middleware(function(service, next) {
    // this middleware will be executed for all services
    console.log('A service was accessed!');
    next();
});

bottle.middleware('Beer', function(beer, next) {
    // this middleware will only affect the Beer service.
    console.log('Beer?  Nice.  Tip your bartender...');
    next();
});

Middleware can pass an error object to the next function, and bottle will throw the error:

var bottle = new Bottle();
bottle.service('Beer', Beer);
bottle.middleware('Beer', function(beer, next) {
    if (beer.hasGoneBad()) {
        return next(new Error('The Beer has gone bad!'));
    }
    next();
});

bottle.container.Beer;
// results in Uncaught Error: The Beer has gone bad!(…)
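The next-function contract above can be sketched with a minimal chain runner. runMiddleware is an assumed name for illustration, not bottlejs internals:

```javascript
// Run a stack of middleware in order, throwing anything passed to next().
function runMiddleware(stack, service) {
  var i = 0;
  function next(err) {
    if (err) throw err; // bottle throws whatever is passed to next
    var fn = stack[i++];
    if (fn) fn(service, next);
  }
  next();
  return service;
}

var log = [];
var beer = { hasGoneBad: function () { return false; } };

runMiddleware([
  function (b, next) { log.push('A service was accessed!'); next(); },
  function (b, next) { b.hasGoneBad() ? next(new Error('The Beer has gone bad!')) : next(); }
], beer);

console.log(log); // [ 'A service was accessed!' ]
```

Because each middleware decides whether to call next(), a middleware can short-circuit the chain by throwing via next(err) instead.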

Nested Bottles

Bottle will generate nested containers if dot notation is used in the service name. An isolated sub container will be created for you based on the name given:

var bottle = new Bottle();
var IPA = function() {};
bottle.service('Beer.IPA', IPA);
bottle.container.Beer; // this is a new Bottle.container object
bottle.container.Beer.IPA; // the service
bottle.factory('Beer.DoubleIPA', function (container) {
    var IPA = container.IPA; // note the container in here is the nearest parent.
});

Nested Containers Are Isolated

Nested containers are designed to provide isolation between different packages. This means that you cannot access a nested container from a different parent when you are writing a factory.

var bottle = new Bottle();
var IPA = function() {};
var Wort = function() {};
bottle.service('Ingredients.Wort', Wort);
bottle.factory('Beer.IPA', function(container) {
    // container is `Beer`, not the root, so:
    container.Wort; // undefined
    container.Ingredients.Wort; // undefined
});





Used to get an instance of bottle. If a name is passed, bottle will return the same instance each time. Calling the Bottle constructor as a function will call and return Bottle.pop, so Bottle.pop('Soda') === Bottle('Soda').

name (String): The name of the bottle. If passed, bottle will store the instance internally and return the same instance if Bottle.pop is subsequently called with the same name.
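The named-instance contract can be sketched with a simple registry; the Bottle stand-in below is hypothetical, to illustrate the behaviour rather than reproduce the real constructor:

```javascript
// Named pops are stored and reused; unnamed pops are fresh instances.
function Bottle() {}
Bottle.instances = {};
Bottle.pop = function (name) {
  if (name === undefined) return new Bottle();
  if (!Bottle.instances[name]) Bottle.instances[name] = new Bottle();
  return Bottle.instances[name];
};

console.log(Bottle.pop('Soda') === Bottle.pop('Soda')); // true  - same named instance
console.log(Bottle.pop() === Bottle.pop());             // false - fresh each time
```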



Removes the named instance from bottle's internal store, if it exists. The immediately subsequent call to Bottle.pop(name) will return a new instance. If no name is given, all named instances will be cleared.

In general, this function should only be called in situations where you intend to reset the bottle instance with new providers, decorators, etc. such as test setup.

name (String): The name of the bottle. If passed, bottle will remove the internal instance, if such a bottle was created using Bottle.pop. If not passed, all named internal instances will be cleared.





Used to list the names of all registered constants, values, and services on the container. You must pass a container to the global static version, Bottle.list(bottle.container). The instance and container versions return the services that are registered within.

Returns an array of strings.

container (Object): A bottle.container. Only required when using the global, static Bottle.list method. The prototype version uses that instance's container, and the container version uses itself.



A global configuration object.

strict (Boolean, default: false): Enables strict mode. Currently only verifies that automatically injected dependencies are not undefined.




A collection of decorators registered by the bottle instance. See decorator(name, func) below.



A collection of middleware registered by the bottle instance. See middleware(name, func) below.



A collection of nested bottles registered by the parent bottle instance when dot notation is used to define a service. See the "Nested Bottles" section in the documentation above.



A collection of registered provider names. Bottle uses this internally to determine whether a provider has already instantiated its instance. See provider(name, Provider) below.



An array of deferred functions registered for this bottle instance. See defer(func) below.

constant(name, value)

Used to add a read only value to the container.

name (String): The name of the constant. Must be unique to each Bottle instance.
value (Mixed): A value that will be defined as enumerable, but not writable.

decorator(name, func)

container.$decorator(name, func)

Used to register a decorator function that the provider will use to modify your services at creation time. bottle.container.$decorator is an alias of bottle.decorator; this allows you to only add a decorator to a nested bottle.

name (String, optional): The name of the service this decorator will affect. Will run for all services if not passed.
func (Function): A function that will accept the service as the first parameter. Should return the service, or a new object to be used as the service.



Register a function to be executed when Bottle#resolve is called.

func (Function): A function to be called later. Will be passed a value given to Bottle#resolve.



Immediately instantiates an array of services and returns their instances in the order given.

services (Array): Array of services that should be instantiated.

factory(name, Factory)

Used to register a service factory

name (String): The name of the service. Must be unique to each Bottle instance.
Factory (Function): A function that should return the service object. Will only be called once; the service will be a singleton. Gets passed an instance of the container to allow dependency injection when creating the service.

instanceFactory(name, Factory)

Used to register a service instance factory that will return an instance when called.

name (String): The name of the service. Must be unique to each Bottle instance.
Factory (Function): A function that should return a fully configured service object. This factory function will be executed when a new instance is created. Gets passed an instance of the container.
var bottle = new Bottle();
var Hefeweizen = function(container) { return { abv: Math.random() * (6 - 4) + 4 }};
bottle.instanceFactory('Beer.Hefeweizen', Hefeweizen);

var hefeFactory = bottle.container.Beer.Hefeweizen; // This is an instance factory with a single `instance` method

var beer1 = hefeFactory.instance(); // Calls factory function to create a new instance
var beer2 = hefeFactory.instance(); // Calls factory function to create a second new instance

beer1 !== beer2 // true

This pattern is especially useful for request based context objects that store state or things like database connections. See the documentation for Google Guice's InjectingProviders for more examples.

middleware(name, func)

Used to register a middleware function. This function will be executed every time the service is accessed.

name (String, optional): The name of the service for which this middleware will be called. Will run for all services if not passed.
func (Function): A function that will accept the service as the first parameter, and a next function as the second parameter. Should execute next() to allow other middleware in the stack to execute. Bottle will throw anything passed to the next function, i.e. next(new Error('error msg')).

provider(name, Provider)

Used to register a service provider

name (String): The name of the service. Must be unique to each Bottle instance.
Provider (Function): A constructor function that will be instantiated as a singleton. Should expose a function called $get that will be used as a factory to instantiate the service.



Used to reset providers for the next reference to re-instantiate the provider. If the names param is passed, only the named providers will be reset.

names (Array): An array of strings containing the names of the providers to be reset.




Used to register a service, factory, provider, or value based on properties of the Obj. bottle.container.$register is an alias of bottle.register; this allows factories and providers to register multiple services on the container without needing access to the bottle instance itself.

If Bottle.config.strict is set to true, this method will throw an error if an injected dependency is undefined.


Obj (Object or Function): An object or constructor with one of several properties:

  • Obj.$name (required) — the name used to register the object
  • Obj.$type (optional) — the method used to register the object. Defaults to 'service', in which case the Obj will be treated as a constructor. Valid types are: 'service', 'factory', 'provider', 'value'
  • Obj.$inject (optional) — if Obj.$type is 'service', this property can be a string name or an array of names of dependencies to inject into the constructor.
    E.g. Obj.$inject = ['dep1', 'dep2'];
  • Obj.$value (optional) — normally Obj is registered on the container. However, if this property is included, its value will be registered on the container instead of the object itself. Useful for registering objects on the bottle container without modifying those objects with bottle-specific keys.
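A sketch of the $name/$type dispatch described above; the tiny bottle object here is a hypothetical stand-in with just enough surface to show the idea, not the real library:

```javascript
// Dispatch on Obj.$type (defaulting to 'service'), honouring Obj.$value.
var bottle = {
  registered: {},
  service: function (name, Ctor) { this.registered[name] = { type: 'service', value: Ctor }; },
  factory: function (name, fn)   { this.registered[name] = { type: 'factory', value: fn }; },
  value:   function (name, val)  { this.registered[name] = { type: 'value',   value: val }; }
};

function register(b, Obj) {
  var target = Obj.$value !== undefined ? Obj.$value : Obj;
  var type = Obj.$type || 'service';
  b[type](Obj.$name, target);
}

function Keg() {}
Keg.$name = 'Keg';
register(bottle, Keg); // no $type: treated as a 'service' constructor

register(bottle, { $name: 'abv', $type: 'value', $value: 5.5 });

console.log(bottle.registered.Keg.type);  // service
console.log(bottle.registered.abv.value); // 5.5
```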



Execute any deferred functions registered by Bottle#defer.

data (Mixed): Value to be passed to each deferred function as the first parameter.

service(name, Constructor [, dependency [, ...]])

Used to register a service constructor. If Bottle.config.strict is set to true, this method will throw an error if an injected dependency is undefined.

name (String): The name of the service. Must be unique to each Bottle instance.
Constructor (Function): A constructor function that will be instantiated as a singleton.
dependency (String, optional): An optional name for a dependency to be passed to the constructor. A dependency will be passed to the constructor for each name passed to Bottle#service in the order they are listed.

serviceFactory(name, factoryService [, dependency [, ...]])

Used to register a service factory function. Works exactly like factory except the factory arguments will be injected instead of receiving the container. This is useful when implementing the Module Pattern or adding dependencies to your Higher Order Functions.

function packageKeg(Barrel, Beer, Love) {
    Barrel.add(Beer, Love);
    return {
        tap : function() {
            return Barrel.dispense();
        }
    };
}

bottle.serviceFactory('Keg', packageKeg, 'Barrel', 'Beer', 'Love');

If Bottle.config.strict is set to true, this method will throw an error if an injected dependency is undefined.

name (String): The name of the service. Must be unique to each Bottle instance.
serviceFactory (Function): A function that will be invoked to create the service object/value.
dependency (String, optional): An optional name for a dependency to be passed to the service function. A dependency will be passed to the service function for each name passed to Bottle#serviceFactory in the order they are listed.

value(name, val)

Used to add an arbitrary value to the container.

name (String): The name of the value. Must be unique to each Bottle instance.
val (Mixed): A value that will be defined as enumerable, but not writable.


A TypeScript declaration file is bundled with this package. To get TypeScript to resolve it automatically, you need to set moduleResolution to node in your tsconfig.json.

Download Details:

Author: Young-steveo
Source Code: 
License: MIT license

#javascript #middleware #dependency 

A Powerful Dependency injection Micro Container for JavaScript Apps

BinaryProvider.jl: A Reliable Binary Provider for Julia


Basic concepts

Packages are installed to a Prefix: a folder that acts similarly to the /usr/local directory on Unix-like systems, containing a bin folder for binaries, a lib folder for libraries, etc. Prefix objects can have tarballs install()'ed within them, uninstall()'ed from them, and so on.

BinaryProvider has the concept of a Product, the result of a package installation. LibraryProduct and ExecutableProduct are two example Product object types that can be used to keep track of the binary objects installed by an install() invocation. Products can check to see if they are already satisfied (e.g. whether a file exists, or is executable, or is dlopen()'able), allowing for very quick and easy build.jl construction.

BinaryProvider also contains a platform abstraction layer for common operations like downloading and unpacking tarballs. The primary method you should be using to interact with these operations is through the install() method, however if you need more control, there are more fundamental methods such as download_verify(), or unpack(), or even the wittingly-named download_verify_unpack().

The method documentation within the BinaryProvider module should be considered the primary source of documentation for this package. Usage examples are provided in the form of the LibFoo.jl mock package within this repository, as well as in other packages that use this package for binary installation.


To download and install a package into a Prefix, the basic syntax is:

prefix = Prefix("./deps")
install(url, tarball_hash; prefix=prefix)

It is recommended to inspect examples for a fuller treatment of installation, the LibFoo.jl package within this repository contains a deps/build.jl file that may be instructive.

To actually generate the tarballs that are installed by this package, check out the BinaryBuilder.jl package.


This package contains a run(::Cmd) wrapper class named OutputCollector that captures the output of shell commands and, in particular, captures the stdout and stderr streams separately, colorizing, buffering and timestamping appropriately to provide seamless printing of shell output in a consistent and intuitive way. Critically, it also allows the captured streams to be saved to log files, a very useful feature for BinaryBuilder.jl, which makes extensive use of this class; all commands run by BinaryProvider.jl also use this same mechanism to provide coloring of stderr.

When providing ExecutableProducts to a client package, BinaryProvider will automatically append Julia's private library directory to LD_LIBRARY_PATH on Linux, and DYLD_LIBRARY_PATH on macOS. This is due to the fact that the compiled binaries may be dependent on libraries such as libgfortran, which ship with Julia and must be found by the system linker or else the binaries will not function. If you wish to use the binaries outside of Julia, you may need to override those environment variables in a similar fashion; see the generated deps.jl file for the check_deps() function where the precise overriding values can be found.

Download Details:

Author: JuliaPackaging
Source Code: 
License: View license

#julia #dependency 

BinaryProvider.jl: A Reliable Binary Provider for Julia
Nat Grady


Packrat: A Dependency Management System for R


Packrat has been soft-deprecated and is now superseded by renv.

While we will continue maintaining Packrat, all new development will focus on renv. If you're interested in switching to renv, you can use renv::migrate() to migrate a project from Packrat to renv.


Packrat is a dependency management system for R.

Use packrat to make your R projects more:

  • Isolated: Installing a new or updated package for one project won't break your other projects, and vice versa. That's because packrat gives each project its own private package library.
  • Portable: Easily transport your projects from one computer to another, even across different platforms. Packrat makes it easy to install the packages your project depends on.
  • Reproducible: Packrat records the exact package versions you depend on, and ensures those exact versions are the ones that get installed wherever you go.

See the project page for more information, or join the discussion on the RStudio Community forums.

Read the release notes to learn what's new in Packrat.

Quick-start Guide

Start by installing Packrat:

install.packages("packrat")
Then, start a new R session at the base directory of your project and type:

packrat::init()
This will install Packrat, set up a private library to be used for this project, and then place you in packrat mode. While in packrat mode, calls to functions like install.packages and remove.packages will modify the private project library, rather than the user library.

When you want to manage the state of your private library, you can use the Packrat functions:

  • packrat::snapshot(): Save the current state of your library.
  • packrat::restore(): Restore the library state saved in the most recent snapshot.
  • packrat::clean(): Remove unused packages from your library.

Share a Packrat project with bundle and unbundle:

  • packrat::bundle(): Bundle a packrat project, for easy sharing.
  • packrat::unbundle(): Unbundle a packrat project, generating a project directory with libraries restored from the most recent snapshot.

Navigate projects and set/get options with:

  • packrat::on(), packrat::off(): Toggle packrat mode on and off, for navigating between projects within a single R session.
  • packrat::get_opts, packrat::set_opts: Get/set project-specific settings.

Manage ad-hoc local repositories (note that these are a separate entity from CRAN-like repositories):

  • packrat::set_opts(local.repos = ...) can be used to specify local repositories; that is, directories containing (unzipped) package sources.
  • packrat::install_local() installs packages available in a local repository.

For example, suppose you have the (unzipped) package sources for digest located within the folder ~/git/R/digest/. To install this package, you can use:

packrat::set_opts(local.repos = "~/git/R")
packrat::install_local("digest")

There are also utility functions for using and managing packages in the external / user library, which can be useful for leveraging packages in the user library that you might not want as project-specific dependencies, e.g. devtools, knitr, roxygen2:

  • packrat::extlib(): Load an external package.
  • packrat::with_extlib(): With an external package, evaluate an expression. The external package is loaded only for the duration of the evaluated expression, but note that there may be other side effects associated with the package's .onLoad, .onAttach and .onUnload calls that we may not be able to fully control.


Packrat supports a set of common analytic workflows:

As-you-go: use packrat::init() to initialize packrat with your project, and use it to manage your project library while you develop your analysis. As you install and remove packages, you can use packrat::snapshot() and packrat::restore() to maintain the R packages in your project. For collaboration, you can either use your favourite version control system, or use packrat::bundle() to generate a bundled version of your project that collaborators can use with packrat::unbundle().

When-you're-done: take an existing or completed analysis (preferably collected within one directory), and call packrat::init() to immediately obtain R package sources for all packages used in your project, and snapshot that state so it can be preserved across time.

Setting up your own custom, CRAN-like repositories

Please view the set-up guide here for a simple walkthrough in how you might set up your own, local, custom CRAN repository.

Download Details:

Author: rstudio
Source Code: 

#r #dependency #management #system 

Packrat: A Dependency Management System for R

Colston.js: Fast, Lightweight and Zero Dependency Framework for Bunjs

🍥 Colston.js

Fast, lightweight and zero dependency framework for bunjs 🚀   


Bun is the latest and arguably the fastest runtime environment for JavaScript, similar to Node and Deno. Bun uses the JSC (JavaScriptCore) engine, unlike Node and Deno, which is part of the reason why it's faster than both.

Bun is written in Zig, a low-level programming language with manual memory management.

Bun supports ~90% of the native Node.js APIs, including fs, path, etc., and also distributes its packages using npm, hence both yarn and npm are supported in bun.

Colstonjs is a fast, minimal and highly configurable TypeScript-based API framework, inspired by Express.js and Fastify, for building high-performance APIs; colstonjs is completely built on Bun.


🐎 Bun - Bun needs to be installed locally on your development machine.


💻 To install bun head over to the official website and follow the installation instructions.

🧑‍💻 To install colstonjs run

$ bun add colstonjs


Although colstonjs is distributed under npm, colstonjs is only available for bun, node and deno are not currently supported.


Importing the colstonjs into the application

import Colston from "colstonjs";

// initializing Colston 
const serverOptions = {
  port: 8000,
  env: "development"
};

// initialize app with server options
const app: Colston = new Colston(serverOptions);

A simple get request

// server.ts
app.get("/", function(ctx) {
  return ctx.status(200).text("OK"); // OK

To allow the application to accept requests, we have to call the start() method with an optional port and/or callback function.

This will start an HTTP server listening on all interfaces ( on the specified port.

// server.ts
server.start(port?, cb?);


The port number can be passed into the app through the server options or as the first argument of the start() method. If the port number is passed both as part of the server options and in the start() method, then the port number passed to start() takes priority. If neither is provided, the app will default to port 3000.

The callback method is immediately invoked once the connection is successfully established and the application is ready to accept requests.


Hello Bun

// server.ts
import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

app.set("port", 8000);

app.get("/", (ctx: Context) => {
  return ctx.status(200).json({ message: "Hello World!" });
});

// start the server 
app.start(app.get('port'), () => console.log(`server listening on port ${app.get("port")}`));

Read request body as json or text

// server.ts
import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

app.get("/", async (ctx: Context) => {
  const body = await ctx.request.json();
  const body2 = await ctx.request.text();

  return ctx.status(200).json({ body, body2 });
});


Using named parameters

// server.ts
import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

app.get("/user/:id/name/:name", async (ctx: Context) => {
  const user = ctx.request.params;

  // make an api call to a backend datastore to retrieve user details
  const userDetails = await getUserDetails(; // e.g: { id: 12345, name: "jane"}

  return ctx.status(200).json({ user: userDetails });
});


Using query parameters

// server.ts
import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

app.get('/?name&age', async (ctx: Context) => {
  const query = ctx.request.query;

  return ctx.status(200).json(query); // { name: "jane", age: 50 }
});


Method chaining

Colstonjs also provides the flexibility of method chaining: create one app instance and chain all methods on that single instance.

// server.ts
import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

  .get("/one", (ctx: Context) => {
      return ctx.status(200).text("One");
  })
  .post("/two", (ctx: Context) => {
      return ctx.status(200).text("Two");
  })
  .patch("/three", (ctx: Context) => {
      return ctx.status(200).text("Three");
  });


Running the demo note-app

Follow the steps below to run the demo note-taking api application in the examples directory.

  • Clone this repository
  • Change directory into the note-app folder by running cd examples/note-app
  • Start the http server to listen on port 8000 by running bun app.js
  • Use your favourite http client (e.g Postman) to make requests to the listening http server.



Colstonjs supports both route-level and application-level middleware.

Application-level middleware

This is a middleware which will be called on each request made to the server, one use case can be for logging.

// logger.ts
export function logger(ctx) {
  const { pathname } = new URL(ctx.request.url);
  console.info([new Date()], " - - " + ctx.request.method + " " + pathname + " HTTP 1.1" + " - ");
}

// server.ts
import Colston, { type Context } from "colstonjs";
import { logger } from "./logger";

const app: Colston = new Colston({ env: "development" });

// middleware
app.use(logger); // [2022-07-16T01:01:00.327Z] - - GET / HTTP 1.1 - 

app.get("/", (ctx: Context) => {
  return ctx.status(200).text("Hello logs...");


The .use() method accepts k middleware functions.

app.use(fn-1, fn-2, fn-3, ..., fn-k)
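One way such a variadic .use() can be implemented is by collecting all arguments into the app's middleware stack. A sketch with assumed names (App, middleware), not colstonjs source:

```javascript
// Collect any number of middleware functions per .use() call.
function App() { this.middleware = []; }
App.prototype.use = function () {
  for (var i = 0; i < arguments.length; i++) {
    this.middleware.push(arguments[i]);
  }
  return this; // enables chaining
};

var app = new App();
var fn1 = function () {}, fn2 = function () {}, fn3 = function () {};

app.use(fn1, fn2).use(fn3);
console.log(app.middleware.length); // 3
```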

Route-level middleware

Colston, on the other hand, allows you to add a middleware function in-between the route path and the handler function.

// request-id.ts
export function requestID(ctx) { = crypto.randomBytes(18).toString('hex');
}

// server.ts
import crypto from "crypto";
import Colston, { type Context } from "colstonjs";
import { requestID } from "./request-id";

const app: Colston = new Colston({ env: "development" });

app.get("/", requestID, (ctx: Context) => {
  return ctx.status(200).text(`id: ${}`); // id: 410796b6d64e3dcc1802f290dc2f32155c5b
});


It is also worth noting that we can have k route-level middleware functions:

// server.ts
app.get("/", middleware1, middleware2, middleware3, ..., middlewareK, (ctx: Context) => {
  return ctx.status(200).text(`id: ${}`);
});

Context locals

ctx.locals is a plain javascript object that is specifically added to allow sharing of data amongst the chain of middlewares and/or handler functions.

// server.ts
let requestCount = 0;

app.get("/request-count", (ctx, next) => {
  /**
   * ctx.locals can be used to pass
   * data from one middleware to another
   */
  ctx.locals.requestCount = ++requestCount;
  next();
}, (ctx, next) => {
  // ctx.locals persists across the middleware chain
  next();
}, (ctx) => {
  let count = ctx.locals.requestCount;
  return ctx.status(200).text(count); // 1
});


Instantiating Router class

The Router class provides a way to separate router-specific declarations/blocks from the app logic, by providing that extra abstraction layer for your project.

// router.ts
import { Router } from "colstonjs";

// instantiate the router class
const router1 = new Router();
const router2 = new Router();

// define user routes - can be in a separate file or module.'/user', (ctx) => { return ctx.status(200).json({ user }) });
router1.get('/users', (ctx) => { return ctx.json({ users }) });
router1.delete('/user?id', (ctx) => { return ctx.status(204).head() });

// define the notes route - can also be in separate module.
router2.get('/note/:id', (ctx) => { return ctx.json({ note }) });
router2.get('/notes', (ctx) => { return ctx.json({ notes }) });'/note', (ctx) => { return ctx.status(201).json({ note }) });

export { router1, router2 };

Injecting Router instance into the app

// server.ts
import Colston from "colstonjs";
import { router1, router2 } from "./router";

const app: Colston = new Colston();

app.all(router1, router2);

// other routes can still be defined here
app.get("/", (ctx) => {
  return ctx.status(200).text("Welcome to colstonjs framework for bun");


The app.all() method takes k router instance objects, e.g. app.all(router1, router2, ..., routerK);. The examples folder contains a full note-taking backend app that utilizes this pattern.

Application instance cache

We can cache simple data which will live throughout the application instance lifecycle.

import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

// set properties to cache
app.set("age", 50);
app.set("name", "jane doe");

// check if a key exists in the cache
app.has("age"); // true
app.has("name"); // true

// retrieve the value stored in a given key
app.get("age"); // 50
app.get("name"); // jane doe


Error handler

Errors are handled internally by colstonjs; however, this error handler method can also be customised.

// index.ts
import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

// a broken route
app.get("/error", (ctx) => {
  throw new Error("This is a broken route");
});

// Custom error handler
app.error = async function (error) {
  console.error("This is an error...");
  // return a custom error response here
  const err = JSON.stringify(error);
  return Response.json({
    message: error.message || "An error occurred",
    error: err
  }, { status: 500 });
};



Click to expand

Benchmarking was performed using k6 load testing library.


Colstonjs on the bunjs runtime environment

import Colston from "colstonjs";

const app = new Colston({ env: "development" });

app.get("/", (ctx) => {
  return ctx.text("OK");
});

$ ./k6 run index.js

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: index.js
     output: -

  scenarios: (100.00%) 1 scenario, 100 max VUs, 40s max duration (incl. graceful stop):
           * default: 100 looping VUs for 10s (gracefulStop: 30s)

running (10.0s), 000/100 VUs, 240267 complete and 0 interrupted iterations
default ✓ [======================================] 100 VUs  10s

     ✓ success

     checks.........................: 100.00% ✓ 240267       ✗ 0     
     data_received..................: 16 MB   1.6 MB/s
     data_sent......................: 19 MB   1.9 MB/s
     http_req_blocked...............: avg=1.42µs  min=0s       med=1µs    max=9.24ms  p(90)=1µs    p(95)=2µs   
     http_req_connecting............: avg=192ns   min=0s       med=0s     max=2.18ms  p(90)=0s     p(95)=0s    
     http_req_duration..............: avg=4.1ms   min=89µs     med=3.71ms max=41.18ms p(90)=5.3ms  p(95)=6.53ms
       { expected_response:true }...: avg=4.1ms   min=89µs     med=3.71ms max=41.18ms p(90)=5.3ms  p(95)=6.53ms
     http_req_failed................: 0.00%   ✓ 0            ✗ 240267
     http_req_receiving.............: avg=24.17µs min=7µs      med=12µs   max=15.01ms p(90)=18µs   p(95)=21µs  
     http_req_sending...............: avg=6.33µs  min=3µs      med=4µs    max=14.78ms p(90)=7µs    p(95)=8µs   
     http_req_tls_handshaking.......: avg=0s      min=0s       med=0s     max=0s      p(90)=0s     p(95)=0s    
     http_req_waiting...............: avg=4.07ms  min=75µs     med=3.69ms max=41.16ms p(90)=5.27ms p(95)=6.48ms
     http_reqs......................: 240267  24011.563111/s
     iteration_duration.............: avg=4.15ms  min=117.88µs med=3.74ms max=41.25ms p(90)=5.37ms p(95)=6.62ms
     iterations.....................: 240267  24011.563111/s
     vus............................: 100     min=100        max=100 
     vus_max........................: 100     min=100        max=100 


Expressjs on nodejs runtime environment

const express = require("express");
const app = express();

app.get("/", (req, res) => {
  return res.status(200).send("OK");
});

$ ~/k6 run index.js

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: index.js
     output: -

  scenarios: (100.00%) 1 scenario, 100 max VUs, 40s max duration (incl. graceful stop):
           * default: 100 looping VUs for 10s (gracefulStop: 30s)

running (10.0s), 000/100 VUs, 88314 complete and 0 interrupted iterations
default ✓ [======================================] 100 VUs  10s

     ✓ success

     checks.........................: 100.00% ✓ 88314       ✗ 0    
     data_received..................: 20 MB   2.0 MB/s
     data_sent......................: 7.1 MB  705 kB/s
     http_req_blocked...............: avg=1.54µs  min=0s     med=1µs     max=2.04ms  p(90)=1µs     p(95)=2µs    
     http_req_connecting............: avg=451ns   min=0s     med=0s      max=1.99ms  p(90)=0s      p(95)=0s     
     http_req_duration..............: avg=11.28ms min=1.22ms med=10.04ms max=90.96ms p(90)=15.04ms p(95)=18.71ms
       { expected_response:true }...: avg=11.28ms min=1.22ms med=10.04ms max=90.96ms p(90)=15.04ms p(95)=18.71ms
     http_req_failed................: 0.00%   ✓ 0           ✗ 88314
     http_req_receiving.............: avg=18.18µs min=10µs   med=15µs    max=10.16ms p(90)=22µs    p(95)=25µs   
     http_req_sending...............: avg=6.53µs  min=3µs    med=5µs     max=12.61ms p(90)=8µs     p(95)=9µs    
     http_req_tls_handshaking.......: avg=0s      min=0s     med=0s      max=0s      p(90)=0s      p(95)=0s     
     http_req_waiting...............: avg=11.25ms min=1.2ms  med=10.01ms max=90.93ms p(90)=15ms    p(95)=18.68ms
     http_reqs......................: 88314   8818.015135/s
     iteration_duration.............: avg=11.32ms min=1.25ms med=10.08ms max=91.01ms p(90)=15.08ms p(95)=18.76ms
     iterations.....................: 88314   8818.015135/s
     vus............................: 100     min=100       max=100
     vus_max........................: 100     min=100       max=100

From the above results we can see that Colstonjs on Bun handles roughly 2.72x the number of requests per second of Expressjs on Node (~24,012 req/s vs ~8,818 req/s). The benchmarking files can be found in this repository.
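The 2.72x figure follows directly from the two `http_reqs` rates reported above:

```javascript
// requests/second reported by k6 for each run
const bunRps = 24011.563111; // Colstonjs on Bun
const nodeRps = 8818.015135; // Expressjs on Node

console.log((bunRps / nodeRps).toFixed(2)); // → "2.72"
```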


PRs for features, enhancements and bug fixes are welcome. ✨ You can also look at the TODO file for feature contributions. 🙏🏽


See the TODO doc here; feel free to add to the list by editing the TODO file.


Although this version is fairly stable, it is still under active development (as is Bun itself) and may contain some bugs, so it is not yet ideal for a production app.

Author: Ajimae
Source Code: 
License: MIT license

#javascript #typescript #framework #dependency 

Colston.js: Fast, Lightweight and Zero Dependency Framework for Bunjs
Antwan Larson


How To Add A Gradle Dependency (Step by Step)

To use code from another library, you need to add a dependency to your Gradle project. Discover how to do that properly, including configuring where Gradle pulls dependencies from and controlling how dependencies end up on the Java classpaths.

▶️Why we need dependencies 0:00
▶️Co-ordinates for locating dependencies 0:34
▶️Configuring Gradle repositories 1:20
▶️Defining your dependency 1:53
▶️The secret to inspecting the Java classpaths 3:22
▶️The 2 dependency notations 3:55
▶️Other popular dependency configurations 4:19
▶️IDE shortcut to easily add dependencies 4:49
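As a companion to the chapters above, here is a minimal build.gradle sketch (Groovy DSL) showing a repository declaration and the two dependency notations; the library coordinates are illustrative, not from the video:

```groovy
// build.gradle — tell Gradle where to resolve dependencies from
repositories {
    mavenCentral()
}

dependencies {
    // string notation: 'group:name:version'
    implementation 'com.google.guava:guava:31.1-jre'

    // map notation: the same coordinates, spelled out
    testImplementation group: 'org.junit.jupiter', name: 'junit-jupiter', version: '5.8.2'
}
```

`implementation` puts the library on your module's compile and runtime classpaths, while `testImplementation` limits it to the test classpaths. To inspect what actually ends up on each classpath, `./gradlew dependencies` prints the resolved dependency tree per configuration.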

#gradle #java #dependency 
