Knip finds unused files, dependencies and exports in your JavaScript and TypeScript projects. Less code and fewer dependencies lead to better performance, less maintenance and easier refactoring. Consider this export:

export const myVar = true;

ESLint handles files in isolation, so it does not know whether myVar is actually used somewhere else. Knip lints the project as a whole and finds unused exports, files and dependencies.
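To make this concrete, here's a minimal sketch (the file names are hypothetical) of the cross-file situation Knip catches and a single-file linter cannot:

// helpers.ts
export const myVar = true; // exported, but never imported anywhere in the project
export const usedVar = 1;

// index.ts
import { usedVar } from './helpers'; // only usedVar is referenced
console.log(usedVar);

Knip would report myVar as an unused export, since no other file in the project imports it.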
It's only human to forget removing things that you no longer use. But how do you find out? Where to even start finding things that can be removed?
The dots don't connect themselves. This is where Knip comes in:
- Finds unused files, dependencies and exports
- Finds dependencies not listed in package.json
- Supports JavaScript (without tsconfig.json, or TypeScript allowJs: true)

Knip shines in both small and large projects. It's a fresh take on keeping your projects clean & tidy!
“An orange cow with scissors, Van Gogh style” - generated with OpenAI
When coming from version v0.13.3 or before, please see migration to v1.
The next major release is upcoming. Please see https://github.com/webpro/knip/issues/73 for the full story. Use npm install knip@next to try it out if you're curious! No changes in configuration necessary. Find the updated documentation at https://github.com/webpro/knip/blob/v2/README.md.
Are you seeing false positives? Please report them by opening an issue in this repo. Bonus points for linking to a public repository using Knip, or even opening a pull request with a directory and example files in test/fixtures. Correctness and bug fixes have priority over performance and new features.
Also see the FAQ.
npm install -D knip
Knip supports LTS versions of Node.js, and currently requires at least Node.js v16.17 or v18.6. Knip is cutting edge!
Knip has good defaults and you can run it without any configuration, but especially larger projects get more out of Knip with a configuration file (or a knip property in package.json). Let's name this file knip.json with these contents (you might want to adjust it right away for your project):
{
"$schema": "https://unpkg.com/knip@1/schema.json",
"entry": ["src/index.ts"],
"project": ["src/**/*.ts"]
}
The entry files target the starting point(s) to resolve the rest of the imported code. The project files should contain all files to match against the files resolved from the entry files, including potentially unused files.

Use knip.ts with TypeScript if you prefer:
import type { KnipConfig } from 'knip';
const config: KnipConfig = {
entry: ['src/index.ts'],
project: ['src/**/*.ts'],
};
export default config;
If you have workspaces, please see workspaces & monorepos.

Then run the checks with npx knip. Or first add this script to package.json:
{
"scripts": {
"knip": "knip"
}
}
Use npm run knip to analyze the project and output unused files, dependencies and exports. Knip works just fine with yarn or pnpm as well.
$ npx knip --help
✂️ Find unused files, dependencies and exports in your JavaScript and TypeScript projects
Usage: knip [options]
Options:
-c, --config [file] Configuration file path (default: [.]knip.json[c], knip.js, knip.ts or package.json#knip)
-t, --tsConfig [file] TypeScript configuration path (default: tsconfig.json)
--production Analyze only production source files (e.g. no tests, devDependencies, exported types)
--strict Consider only direct dependencies of workspace (not devDependencies, not other workspaces)
--workspace Analyze a single workspace (default: analyze all configured workspaces)
--include-entry-exports Include unused exports in entry files (without `@public`)
--ignore Ignore files matching this glob pattern, can be repeated
--no-gitignore Don't use .gitignore
--include Report only provided issue type(s), can be comma-separated or repeated (1)
--exclude Exclude provided issue type(s) from report, can be comma-separated or repeated (1)
--dependencies Shortcut for --include dependencies,unlisted
--exports Shortcut for --include exports,nsExports,classMembers,types,nsTypes,enumMembers,duplicates
--no-progress Don't show dynamic progress updates
--reporter Select reporter: symbols, compact, codeowners, json (default: symbols)
--reporter-options Pass extra options to the reporter (as JSON string, see example)
--no-exit-code Always exit with code zero (0)
--max-issues Maximum number of issues before non-zero exit code (default: 0)
--debug Show debug output
--debug-file-filter Filter for files in debug output (regex as string)
--performance Measure running time of expensive functions and display stats table
-h, --help Print this help text
-V, --version Print version
(1) Issue types: files, dependencies, unlisted, exports, nsExports, classMembers, types, nsTypes, enumMembers, duplicates
Examples:
$ knip
$ knip --production
$ knip --workspace packages/client --include files,dependencies
$ knip -c ./config/knip.json --reporter compact
$ knip --reporter codeowners --reporter-options '{"path":".github/CODEOWNERS"}'
$ knip --debug --debug-file-filter '(specific|particular)-module'
More documentation and bug reports: https://github.com/webpro/knip
Here's an example run using the default reporter:
This example shows more output related to unused and unlisted dependencies:
The report contains the following types of issues:

files - Unused files
dependencies - Unused dependencies
unlisted - Unlisted dependencies (1)
exports - Unused exports
nsExports - Unused exports in namespaces (2)
classMembers - Unused class members
types - Unused types
nsTypes - Unused types in namespaces (2)
enumMembers - Unused enum members
duplicates - Duplicate exports
When an issue type has zero issues, it is not shown.
(1) This includes imports that could not be resolved.
(2) The variable or type is not referenced directly, and has become a member of a namespace. Knip can't find a reference to it, so you can probably remove it.
You can --include or --exclude any of the types to slice & dice the report to your needs. Alternatively, they can be added to the configuration (e.g. "exclude": ["dependencies"]).
Knip finds issues of type files, dependencies, unlisted and duplicates very fast. Finding unused exports requires deeper analysis (exports, nsExports, classMembers, types, nsTypes, enumMembers).
Use --include to report only specific issue types (the following example commands do the same):

knip --include files --include dependencies
knip --include files,dependencies

Use --exclude to ignore reports you're not interested in:

knip --include files --exclude classMembers,enumMembers

Use --dependencies or --exports as shortcuts to combine groups of related types.
Still not happy with the results? Getting too much output/false positives? The FAQ may be useful. Feel free to open an issue and I'm happy to look into it. Also see the next section on how to ignore certain false positives:
There are a few ways to tell Knip to ignore certain packages, binaries, dependencies and workspaces. Some examples:
{
"ignore": ["**/*.d.ts", "**/fixtures"],
"ignoreBinaries": ["zip", "docker-compose"],
"ignoreDependencies": ["hidden-package"],
"ignoreWorkspaces": ["packages/deno-lib"]
}
This is the fun part! Knip, knip, knip ✂️
As always, make sure to backup files or use Git before deleting files or making changes. Run tests to verify results.
- Unused files can be removed.
- Unused dependencies can be removed from package.json.
- Unlisted dependencies should be added to package.json.
- Unused exports and types: remove the export keyword in front of unused exports. Then you can see whether the variable or type is used within the same file. If this is not the case, it can be removed.
- 🔁 Repeat the process to reveal new unused files and exports. Sometimes it's so liberating to remove things!
Workspaces and monorepos are handled out-of-the-box by Knip. Every workspace that is part of the Knip configuration will be part of the analysis. Here's an example:
{
"ignoreWorkspaces": ["packages/ignore-me"],
"workspaces": {
".": {
"entry": "src/index.ts",
"project": "src/**/*.ts"
},
"packages/*": {
"entry": "{index,cli}.ts",
"project": "**/*.ts"
},
"packages/my-lib": {
"entry": "main.js"
}
}
}
Note that if you have a root workspace, it must be under workspaces and have the "." key like in the example.
Knip supports workspaces as defined in three possible locations:

- workspaces array in package.json
- workspaces.packages array in package.json
- packages array in pnpm-workspace.yaml

Every directory with a match in workspaces of knip.json is part of the analysis.
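For instance, a root package.json declaring workspaces like this (a minimal sketch) would be picked up:

{
  "workspaces": ["packages/*"]
}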
Extra "workspaces" not configured as a workspace in the root package.json
can be configured as well, Knip is happy to analyze unused dependencies and exports from any directory with a package.json
.
Here's some example output when running Knip in a workspace:
Knip contains a growing list of plugins. Plugins are automatically activated: each plugin is enabled based on simple heuristics. Most of them check whether one (or one of a few) specific (dev) dependencies is listed in package.json. Once enabled, they add a set of configuration and/or entry files for Knip to analyze. These defaults can be overridden.
Most plugins use one or both of the following file types:

- config - custom dependency resolvers are applied to the config files
- entry - files to include with the analysis of the rest of the source code

See each plugin's documentation for its default values.
config

Plugins may include config files. They are parsed by the plugin's custom dependency resolver. Here are some examples to get an idea of how they work and why they are needed:

- The eslint plugin tells Knip that the "prettier" entry in the array of plugins means that the eslint-plugin-prettier dependency should be installed. Or that the "airbnb" entry in extends requires the eslint-config-airbnb dependency.
- The storybook plugin understands that core.builder: 'webpack5' in main.js means that the @storybook/builder-webpack5 and @storybook/manager-webpack5 dependencies are required.

Custom dependency resolvers return all referenced dependencies for the configuration files they are given. Knip handles the rest to find which of those dependencies are unused or missing.
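As an illustration only (this is not Knip's actual internal API), a custom dependency resolver for an ESLint config could conceptually look like this:

// Sketch: map ESLint config entries to the package names they imply.
type EslintConfig = { plugins?: string[]; extends?: string[] };

const resolveEslintDependencies = (config: EslintConfig): string[] => {
  const fromPlugins = (config.plugins ?? []).map(name => `eslint-plugin-${name}`);
  const fromExtends = (config.extends ?? [])
    .filter(name => !name.startsWith('plugin:') && !name.startsWith('eslint:'))
    .map(name => `eslint-config-${name}`);
  return [...fromPlugins, ...fromExtends];
};

Real shareable-config resolution has more cases (scoped packages, file paths), but this is the gist of what a plugin's resolver contributes.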
entry

Other configuration files use require or import statements to use dependencies, so they can be analyzed like the rest of the source files. These configuration files are also considered entry files.
For plugins related to test files, it's good to know that the following glob patterns are always included by default (see TEST_FILE_PATTERNS in constants.ts):
**/*.{test,spec}.{js,jsx,ts,tsx,mjs,cjs}
**/__tests__/**/*.{js,jsx,ts,tsx,mjs,cjs}
test/**/*.{js,jsx,ts,tsx,mjs,cjs}
In case a plugin causes issues, it can be disabled by using false as its value (e.g. "webpack": false).
Getting false positives because a plugin is missing? Want to help out? Feel free to add your own plugin! Here's how to get started:
npm run create-plugin -- --name [myplugin]
The default mode for Knip is holistic and targets all project code, including configuration files and tests. Test files usually import production files. This prevents the production files or their exports from being reported as unused, while sometimes both of them can be removed. This is why Knip has a "production mode".

To tell Knip what is production code, add an exclamation mark (!) behind each pattern that is meant for production, and use the --production flag. Here's an example:
{
"entry": ["src/index.ts!", "build/script.js"],
"project": ["src/**/*.ts!", "build/*.js"]
}
Here's what's included in production mode analysis:

- entry and project patterns suffixed with !.
- entry patterns from plugins exported as PRODUCTION_ENTRY_FILE_PATTERNS (such as Next.js and Gatsby).
- The postinstall and start scripts (e.g. not the test or other npm scripts in package.json).
- Only exports, nsExports and classMembers are included in the report (types, nsTypes, enumMembers are ignored).

Additionally, the --strict flag can be used to:

- Consider only dependencies (not devDependencies) when finding unused or unlisted dependencies.
- Ignore type-only imports (import type {}).
- Verify that each workspace has its own dependencies (and does not rely on packages of ancestor workspaces).

Plugins also have this distinction. For instance, Next.js entry files for pages (pages/**/*.tsx) and Remix routes (app/routes/**/*.tsx) are production code, while Jest and Playwright entry files (e.g. *.spec.ts) are not. All of this is handled automatically by Knip and its plugins. You only need to point Knip to additional files or custom file locations. The more plugins Knip has, the more projects can be analyzed out of the box!
Tools like TypeScript, Webpack and Babel support import aliases in various ways. Knip automatically includes compilerOptions.paths from the TypeScript configuration, but does not (yet) automatically find other types of import aliases. They can be configured manually:
{
"$schema": "https://unpkg.com/knip@1/schema.json",
"paths": {
"@lib": ["./lib/index.ts"],
"@lib/*": ["./lib/*"]
}
}
Each workspace can also have its own paths configured. Note that Knip paths follow the TypeScript semantics: paths without an * are exact matches.

Knip provides the following built-in reporters:

- codeowners
- compact
- json
- symbol (default)

The compact reporter shows the sorted files first, and then a list of symbols:
When the provided built-in reporters are not sufficient, a custom reporter can be implemented.
Pass --reporter ./my-reporter, with the default export of that module having this interface:
type Reporter = (options: ReporterOptions) => void;
type ReporterOptions = {
report: Report;
issues: Issues;
cwd: string;
workingDir: string;
isProduction: boolean;
options: string;
};
The data can then be used to write issues to stdout, a JSON or CSV file, or sent to a service.
Find more details and ideas in custom reporters.
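For example, here's a minimal sketch of such a reporter that counts issues per type (the precise shapes of report and issues are assumptions here; see the docs for the real types):

// my-reporter.ts
type MyReporterOptions = {
  report: Record<string, boolean>;
  issues: Record<string, Record<string, unknown>>;
};

// Print a one-line count per enabled issue type.
const reporter = ({ report, issues }: MyReporterOptions) => {
  for (const [type, enabled] of Object.entries(report)) {
    if (!enabled) continue;
    console.log(`${type}: ${Object.keys(issues[type] ?? {}).length}`);
  }
};

export default reporter;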
Libraries and applications are identical when it comes to files and dependencies: whatever is unused should be removed. Yet libraries usually have exports meant to be used by other libraries or applications. Such public variables and types in libraries can be marked with the JSDoc @public tag:
/**
* Merge two objects.
*
* @public
*/
export const merge = function () {};
Knip does not report public exports and types as unused.
There are already some great packages available if you want to find unused dependencies OR unused exports.
I love the Unix philosophy ("do one thing well"). But in this case I believe it's efficient to handle multiple concerns in a single tool. When building a dependency graph of the project, an abstract syntax tree for each file, and traversing all of this, why not collect the various issues in one go?
The structure and configuration of projects and their dependencies vary wildly, and no matter how well-balanced, defaults only get you so far. Some implementations and some tools out there have smart or unconventional ways to import code, making things more complicated. That's why Knip tends to require more configuration in larger projects, based on how many dependencies are used and how much the configuration in the project diverges from the defaults.
One important goal of Knip is to minimize the amount of configuration necessary. When false positives are reported and you think there are feasible ways to infer things automatically, reducing the amount of configuration, please open an issue.
When the list of unused files is too long, this means the gap between the set of entry files and the set of project files needs tweaking. The gap can be narrowed down by increasing the entry files or reducing the project files, for instance by ignoring specific folders that are not related to the source code imported by the entry files.
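For instance, a configuration like this (the paths are illustrative) widens the entry set and ignores a folder that should not count as project code:

{
  "entry": ["src/index.ts", "scripts/*.ts"],
  "project": ["src/**/*.ts"],
  "ignore": ["**/generated/**"]
}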
Dependencies that are only imported in unused files are also marked as unused. So a long list of unused files would be good to remedy first.
When unused dependencies are related to dependencies having a Knip plugin, maybe the config and/or entry files for that dependency are at custom locations. The default values are listed in each plugin's documentation, and can be overridden to match the custom location(s).
When the dependencies don't have a Knip plugin yet, please file an issue or create a new plugin.
When the project is a library and the exports are meant to be used by consumers of the library, there are two options:

- Exports of entry files are not reported. You could re-export from an existing entry file, or add the containing file to the entry array in the configuration.
- Mark the export with the JSDoc @public tag.

Eventually this type of QA only really works when it's tied to an automated workflow. But with too many issues to resolve this might not be feasible right away, especially in existing larger codebases. Here are a few options that may help:
- Use --no-exit-code for exit code 0 in CI.
- Use --include (or --exclude) to report only the issue types that have little or no errors.
- Use a separate --dependencies and/or --exports Knip command.
- Use ignore (for files and directories) and ignoreDependencies to filter out some problematic areas.
- Limit the number of workspaces configured to analyze in knip.json.

All of this is hiding problems, so please make sure to plan for fixing them and/or open issues here for false positives.
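As a sketch, a CI script combining some of the flags above could look like this while you work down the backlog:

{
  "scripts": {
    "knip:ci": "knip --no-exit-code --include files,dependencies"
  }
}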
This table is an ongoing comparison. Based on their docs (please report any mistakes):
Feature | knip | depcheck | unimported | ts-unused-exports | ts-prune
---|---|---|---|---|---
Unused files | ✅ | - | ✅ | - | -
Unused dependencies | ✅ | ✅ | ✅ | - | -
Unlisted dependencies | ✅ | ✅ | ✅ | - | -
Plugins | ✅ | ✅ | ❌ | - | -
Unused exports | ✅ | - | - | ✅ | ✅
Unused class members | ✅ | - | - | - | -
Unused enum members | ✅ | - | - | - | -
Duplicate exports | ✅ | - | - | ❌ | ❌
Search namespaces | ✅ | - | - | ✅ | ❌
Custom reporters | ✅ | - | - | - | -
JavaScript support | ✅ | ✅ | ✅ | - | -
Configure entry files | ✅ | ❌ | ✅ | ❌ | ❌
Support workspaces/monorepos | ✅ | ❌ | ❌ | - | -
ESLint plugin available | - | - | - | ✅ | -

✅ = Supported, ❌ = Not supported, - = Out of scope
The following commands are similar:
depcheck
knip --dependencies
The following commands are similar:
unimported
knip --production --dependencies --include files
Also see production mode.
The following commands are similar:
ts-unused-exports
knip --include exports,types,nsExports,nsTypes
knip --exports # Adds unused enum and class members
The following commands are similar:
ts-prune
knip --include exports,types
knip --exports # Adds unused exports/types in namespaces and unused enum/class members
TypeScript language services could play a major role in most of the "unused" areas, as they have an overview of the project as a whole. This powers things in VS Code like "Find references" or the "Module "./some" declares 'Thing' locally, but it is not exported" message. I think features like "duplicate exports" or "custom dependency resolvers" are userland territory, much like code linters.
Knip is Dutch for a "cut". A Dutch expression is "to be geknipt for something", which means to be perfectly suited for the job. I'm motivated to make knip perfectly suited for the job of cutting projects to perfection! ✂️
Author: Webpro
Source Code: https://github.com/webpro/knip
License: ISC license
Poetry helps you declare, manage and install dependencies of Python projects, ensuring you have the right stack everywhere.
Poetry replaces setup.py, requirements.txt, setup.cfg, MANIFEST.in and Pipfile with a simple pyproject.toml based project format.
[tool.poetry]
name = "my-package"
version = "0.1.0"
description = "The description of the package"
license = "MIT"
authors = [
"Sébastien Eustace <sebastien@eustace.io>"
]
repository = "https://github.com/python-poetry/poetry"
homepage = "https://python-poetry.org"
# README file(s) are used as the package description
readme = ["README.md", "LICENSE"]
# Keywords (translated to tags on the package index)
keywords = ["packaging", "poetry"]
[tool.poetry.dependencies]
# Compatible Python versions
python = ">=3.8"
# Standard dependency with semver constraints
aiohttp = "^3.8.1"
# Dependency with extras
requests = { version = "^2.28", extras = ["security"] }
# Version-specific dependencies with prereleases allowed
tomli = { version = "^2.0.1", python = "<3.11", allow-prereleases = true }
# Git dependencies
cleo = { git = "https://github.com/python-poetry/cleo.git", branch = "master" }
# Optional dependencies (installed by extras)
pendulum = { version = "^2.1.2", optional = true }
# Dependency groups are supported for organizing your dependencies
[tool.poetry.group.dev.dependencies]
pytest = "^7.1.2"
pytest-cov = "^3.0"
# ...and can be installed only when explicitly requested
[tool.poetry.group.docs]
optional = true
[tool.poetry.group.docs.dependencies]
Sphinx = "^5.1.1"
# Python-style entrypoints and scripts are easily expressed
[tool.poetry.scripts]
my-script = "my_package:main"
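With that file in place, day-to-day usage revolves around a few commands; a quick sketch (see the documentation for the full CLI):

poetry install                 # resolve and install all dependencies (writes poetry.lock)
poetry add aiohttp             # add a dependency to pyproject.toml and install it
poetry add --group dev pytest  # add a dependency to the dev group
poetry run my-script           # run a script/entrypoint inside the project's environment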
Poetry supports multiple installation methods, including a simple script found at install.python-poetry.org. For full installation instructions, including advanced usage of the script, alternate install methods, and CI best practices, see the full installation documentation.
Documentation for the current version of Poetry (as well as the development branch and recently out of support versions) is available from the official website.
Poetry is a large, complex project always in need of contributors. For those new to the project, a list of suggested issues to work on in Poetry and poetry-core is available. The full contributing documentation also provides helpful guidance.
Author: Python-poetry
Source Code: https://github.com/python-poetry/poetry
License: MIT license
This is a runtime library for TypeScript that contains all of the TypeScript helper functions.
This library is primarily used by the --importHelpers flag in TypeScript. When using --importHelpers, a module that uses helper functions like __extends and __assign in the following emitted file:
var __assign = (this && this.__assign) || Object.assign || function(t) {
for (var s, i = 1, n = arguments.length; i < n; i++) {
s = arguments[i];
for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p))
t[p] = s[p];
}
return t;
};
exports.x = {};
exports.y = __assign({}, exports.x);
will instead be emitted as something like the following:
var tslib_1 = require("tslib");
exports.x = {};
exports.y = tslib_1.__assign({}, exports.x);
Because this can avoid duplicate declarations of things like __extends, __assign, etc., this means delivering users smaller files on average, as well as less runtime overhead. For optimized bundles with TypeScript, you should absolutely consider using tslib and --importHelpers.
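As a sketch of when helpers come into play (assuming an ES5 target, so both constructs below are downleveled):

// example.ts
class Animal {
  constructor(public name: string) {}
}

// Downlevel class inheritance is emitted via the __extends helper.
class Dog extends Animal {}

// Downlevel object spread is emitted via the __assign helper.
const dog = { ...{ legs: 4 }, name: 'Rex' };

Compiled with --importHelpers, both helpers are imported from tslib instead of being inlined into the output.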
Installing
For the latest stable version, run:
# TypeScript 3.9.2 or later
npm install tslib
# TypeScript 3.8.4 or earlier
npm install tslib@^1
# TypeScript 2.3.2 or earlier
npm install tslib@1.6.1
# TypeScript 3.9.2 or later
yarn add tslib
# TypeScript 3.8.4 or earlier
yarn add tslib@^1
# TypeScript 2.3.2 or earlier
yarn add tslib@1.6.1
# TypeScript 3.9.2 or later
bower install tslib
# TypeScript 3.8.4 or earlier
bower install tslib@^1
# TypeScript 2.3.2 or earlier
bower install tslib@1.6.1
# TypeScript 3.9.2 or later
jspm install tslib
# TypeScript 3.8.4 or earlier
jspm install tslib@^1
# TypeScript 2.3.2 or earlier
jspm install tslib@1.6.1
Usage
Set the importHelpers
compiler option on the command line:
tsc --importHelpers file.ts
or in your tsconfig.json:
{
"compilerOptions": {
"importHelpers": true
}
}
You will need to add a paths mapping for tslib. For example, for Bower users:
{
"compilerOptions": {
"module": "amd",
"importHelpers": true,
"baseUrl": "./",
"paths": {
"tslib" : ["bower_components/tslib/tslib.d.ts"]
}
}
}
For JSPM users:
{
"compilerOptions": {
"module": "system",
"importHelpers": true,
"baseUrl": "./",
"paths": {
"tslib" : ["jspm_packages/npm/tslib@2.x.y/tslib.d.ts"]
}
}
}
To publish a new version:

- Update the version in package.json and bower.json
- git tag [version]
- git push --tags

Done.
Contribute
There are many ways to contribute to TypeScript.
Author: Microsoft
Source Code: https://github.com/microsoft/tslib
License: 0BSD license
Rome makes it easy to build a list of frameworks for consumption outside of Xcode, e.g. for a Swift script.
$ gem install cocoapods-rome
In the examples below, the target 'caesar' could either be an existing target of a project managed by CocoaPods for which you'd like to run a Swift script, or it could be fictitious, for example if you wish to run this on a standalone Podfile and get the frameworks you need for adding to your Xcode project manually.
Write a simple Podfile, like this:
platform :osx, '10.10'
plugin 'cocoapods-rome'
target 'caesar' do
pod 'Alamofire'
end
Or, for iOS, with a pre_compile hook and a release configuration:
platform :ios, '8.0'
plugin 'cocoapods-rome', { :pre_compile => Proc.new { |installer|
installer.pods_project.targets.each do |target|
target.build_configurations.each do |config|
config.build_settings['SWIFT_VERSION'] = '4.0'
end
end
installer.pods_project.save
},
dsym: false,
configuration: 'Release'
}
target 'caesar' do
pod 'Alamofire'
end
then run this:
pod install
and you will end up with dynamic frameworks:
$ tree Rome/
Rome/
└── Alamofire.framework
For your production builds, when you want dSYMs created and stored:
platform :osx, '10.10'
plugin 'cocoapods-rome', {
dsym: true,
configuration: 'Release'
}
target 'caesar' do
pod 'Alamofire'
end
Resulting in:
$ tree dSYM/
dSYM/
├── iphoneos
│ └── Alamofire.framework.dSYM
│ └── Contents
│ ├── Info.plist
│ └── Resources
│ └── DWARF
│ └── Alamofire
└── iphonesimulator
└── Alamofire.framework.dSYM
└── Contents
├── Info.plist
└── Resources
└── DWARF
└── Alamofire
The plugin allows you to provide hooks that will be called during the installation process.
pre_compile
This hook allows you to make any last changes to the generated Xcode project before the compilation of frameworks begins.
It receives the Pod::Installer as its only argument.
post_compile
This hook allows you to run code after the compilation of the frameworks finished and they have been moved to the Rome
folder.
It receives the Pod::Installer as its only argument.
Customising the Swift version of all pods
platform :osx, '10.10'
plugin 'cocoapods-rome',
:pre_compile => Proc.new { |installer|
installer.pods_project.targets.each do |target|
target.build_configurations.each do |config|
config.build_settings['SWIFT_VERSION'] = '4.0'
end
end
installer.pods_project.save
},
:post_compile => Proc.new { |installer|
puts "Rome finished building all the frameworks"
}
target 'caesar' do
pod 'Alamofire'
end
Author: CocoaPods
Source Code: https://github.com/CocoaPods/Rome
License: MIT license
This is a tool that can use .o (object) files to generate a dependency graph.
All visualisations are done with the d3.js library, which is just awesome!
This tool was made just for fun, but the images can show how big your project is, how many classes it has, and how they are linked to each other.
This will clone the project and run it on the most recently modified project:
git clone https://github.com/PaulTaykalo/objc-dependency-visualizer.git ;
cd objc-dependency-visualizer ;
./generate-objc-dependencies-to-json.rb -d -s "" > origin.js ;
open index.html
git clone https://github.com/PaulTaykalo/objc-dependency-visualizer.git ;
cd objc-dependency-visualizer ;
./generate-objc-dependencies-to-json.rb -w -s "" > origin.js ;
open index.html
Examples are here
Share image to the Twitter with #objcdependencyvisualizer hashtag
Here's a detailed description of what's going on under the hood
Author: PaulTaykalo
Source Code: https://github.com/PaulTaykalo/objc-dependency-visualizer
License: MIT license
Composer helps you declare, manage, and install dependencies of PHP projects.
See https://getcomposer.org/ for more information and documentation.
Download and install Composer by following the official instructions.
For usage, see the documentation.
Find public packages on Packagist.org.
For private package hosting take a look at Private Packagist.
Follow @packagist or @seldaek on Twitter for announcements, or check the #composerphp hashtag.
For support, Stack Overflow offers a good collection of Composer related questions, or you can use the GitHub discussions.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project and its community you agree to abide by those terms.
PHP 7.2.5 or above is required for the latest version.
PHP versions 5.3.2 - 8.1 are still supported via the LTS releases of Composer (2.2.x). If you run the installer or the self-update command, the appropriate Composer version for your PHP should be automatically selected.
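For example (the explicit channel flag is available in recent Composer 2 releases):

composer self-update        # picks the newest release compatible with your PHP
composer self-update --2.2  # explicitly stay on the 2.2 LTS channel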
See also the list of contributors who participated in this project.
Please send any sensitive issue to security@packagist.org. Thanks!
Author: Composer
Source Code: https://github.com/composer/composer
License: MIT license
Wire is a code generation tool that automates connecting components using dependency injection. Dependencies between components are represented in Wire as function parameters, encouraging explicit initialization instead of global variables. Because Wire operates without runtime state or reflection, code written to be used with Wire is useful even for hand-written initialization.
For an overview, see the introductory blog post.
Install Wire by running:
go install github.com/google/wire/cmd/wire@latest
and ensuring that $GOPATH/bin is added to your $PATH.
As of version v0.3.0, Wire is beta and is considered feature complete. It works well for the tasks it was designed to perform, and we prefer to keep it as simple as possible.
We'll not be accepting new features at this time, but will gladly accept bug reports and fixes.
For questions, please use GitHub Discussions.
This project is covered by the Go Code of Conduct.
Author: Google
Source Code: https://github.com/google/wire
License: Apache-2.0 license
In today's post we will learn about 10 Best PHP Libraries for Extras Related to Dependency Management.
What is Dependency Management?
Projects in all professional services industries often require a cooperation of individuals or services with specialist skills in order to complete the work and deliver it to a client. Dependency management is a technique, a set of actions to perform in order to properly plan, manage and conduct a project between different shared services, or specialists whose availability needs to be ensured for successful completion of the work.
Table of contents:
- Composed: A library to parse your project's Composer environment at runtime.
- Composer Merge Plugin: A composer plugin to merge several composer.json files.
- Composer Normalize: A plugin for normalising composer.json files.
- Composer Patches: A plugin for Composer to apply patches.
- ComposerRequireChecker: A CLI tool to analyze composer dependencies and verify that no unknown symbols are used.
- Composer Unused: A CLI tool to scan for unused composer packages.
- Prestissimo: A composer plugin which enables a parallel install process.
- Satis: A static Composer repository generator.
- Tooly: A library to manage PHAR files in a project using Composer.
files.A library to parse your project's Composer environment at runtime.
This library provides a set of utility functions designed to help you parse your project's Composer configuration, and those of its dependencies, at runtime.
The API combines functional and object-oriented approaches. You can get your project's vendor and base directory paths (chicken and egg...):
$absoluteVendorPath = Composed\VENDOR_DIR;
$absoluteProjectPath = Composed\BASE_DIR;
You can fetch data from the composer.json file of a specific package.
$authors = Composed\package_config('phpunit/phpunit', 'authors');
assert($authors === [
[
'name' => "Sebastian Bergmann",
'email' => "sebastian@phpunit.de",
'role' => "lead",
],
]);
You can fetch data from all composer.json files in your project in one go.
$licenses = Composed\package_configs('license');
assert($licenses === [
'joshdifabio/composed' => "MIT",
'doctrine/instantiator' => "MIT",
'phpunit/php-code-coverage' => "BSD-3-Clause",
]);
$path = Composed\package('phpunit/phpunit')->getPath('composer.json');
foreach (Composed\packages() as $packageName => $package) {
$pathToPackageConfig = $package->getPath('composer.json');
// ...
}
You can also fetch data from the composer.json file located in your project root.
$projectAuthors = Composed\project_config('authors');
assert($projectAuthors === [
[
'name' => 'Josh Di Fabio',
'email' => 'joshdifabio@somewhere.com',
],
]);
Install Composed using composer.
composer require joshdifabio/composed
A composer plugin to merge several composer.json files.
Merge multiple composer.json files at Composer runtime.
Composer Merge Plugin is intended to allow easier dependency management for applications which ship a composer.json file and expect some deployments to install additional Composer managed libraries. It does this by allowing the application's top level composer.json file to provide a list of optional additional configuration files. When Composer is run it will parse these files and merge their configuration settings into the base configuration. This combined configuration will then be used when downloading additional libraries and generating the autoloader.
Composer Merge Plugin was created to help with installation of MediaWiki which has core library requirements as well as optional libraries and extensions which may be managed via Composer.
Composer Merge Plugin 1.4.x (and older) requires Composer 1.x.
Composer Merge Plugin 2.0.x (and newer) is compatible with both Composer 2.x and 1.x.
$ composer require wikimedia/composer-merge-plugin
{
"require": {
"wikimedia/composer-merge-plugin": "dev-master"
},
"extra": {
"merge-plugin": {
"include": [
"composer.local.json",
"extensions/*/composer.json"
],
"require": [
"submodule/composer.json"
],
"recurse": true,
"replace": false,
"ignore-duplicates": false,
"merge-dev": true,
"merge-extra": false,
"merge-extra-deep": false,
"merge-replace": true,
"merge-scripts": false
}
}
}
Updating sub-level composer.json files

In order for Composer Merge Plugin to install dependencies from updated or newly created sub-level composer.json files in your project you need to run the command:
$ composer update
This will instruct Composer to recalculate the file hash for the top-level composer.json, thus triggering Composer Merge Plugin to look for the sub-level configuration files and update your dependencies.
A plugin for normalising composer.json files.

When it comes to formatting composer.json, you have a few options, the simplest of which is ergebnis/composer-normalize: it normalizes composer.json, so you don't have to.
💡 If you want to find out more, take a look at the examples and read this blog post.
Run

composer require --dev ergebnis/composer-normalize

to install ergebnis/composer-normalize as a composer plugin.

Run

composer config allow-plugins.ergebnis/composer-normalize true

to allow ergebnis/composer-normalize to run as a composer plugin.

💡 The allow-plugins configuration has been added to composer/composer to add an extra layer of security.
Head over to http://github.com/ergebnis/composer-normalize/releases/latest and download the latest composer-normalize.phar.

Run

chmod +x composer-normalize.phar

to make the downloaded composer-normalize.phar executable.
Run

phive install ergebnis/composer-normalize

to install ergebnis/composer-normalize with PHIVE.

Run

composer normalize

to normalize composer.json in the working directory.

Run

./composer-normalize.phar

to normalize composer.json in the working directory.

Run

./tools/composer-normalize

to normalize composer.json in the working directory.
A plugin for Composer to apply patches.
Simple patches plugin for Composer. Applies a patch from a local or remote file to any package required with composer.
Before you begin, make sure that patch is installed on your system.
Example composer.json:
{
"require": {
"cweagans/composer-patches": "~1.0",
"drupal/core-recommended": "^8.8",
},
"config": {
"preferred-install": "source"
},
"extra": {
"patches": {
"drupal/core": {
"Add startup configuration for PHP server": "https://www.drupal.org/files/issues/add_a_startup-1543858-30.patch"
}
}
}
}
Instead of a patches key in your root composer.json, use a patches-file key.
{
"require": {
"cweagans/composer-patches": "~1.0",
"drupal/core-recommended": "^8.8",
},
"config": {
"preferred-install": "source"
},
"extra": {
"patches-file": "local/path/to/your/composer.patches.json"
}
}
Then your composer.patches.json should look like this:
{
"patches": {
"vendor/project": {
"Patch title": "http://example.com/url/to/patch.patch"
}
}
}
If you want your project to accept patches from dependencies, you must have the following in your composer file:
{
"require": {
"cweagans/composer-patches": "^1.5.0"
},
"extra": {
"enable-patching": true
}
}
CLI tool to analyze composer dependencies and verify that no unknown symbols are used in the sources of a package.
A CLI tool to analyze composer dependencies and verify that no unknown symbols are used in the sources of a package. This will prevent you from using "soft" dependencies that are not defined within your composer.json require section.
"Soft" (or transitive) dependencies are code that you did not explicitly define to be there but use it nonetheless. The opposite is a "hard" (or direct) dependency.
Your code most certainly uses external dependencies. Imagine that you found a library to access a remote API. You require thatvendor/api-lib for your software and use it in your code. This library is a hard dependency.
Then you see that another remote API is available, but no library exists. The use case is simple, so you look around and find that guzzlehttp/guzzle (or any other HTTP client library) is already installed, and you use it right away to fetch some info. Guzzle just became a soft dependency.

Then someday, when you update your dependencies, your access to the second API breaks. Why? Turns out that the reason guzzlehttp/guzzle was installed is that it is a dependency of thatvendor/api-lib you included, and their developers decided to update from an earlier major version to the latest and greatest, simply stating in their changelog: "Version 3.1.0 uses the latest major version of Guzzle - no breaking changes expected."
And you think: What about my broken code?
ComposerRequireChecker parses your code and your composer.json-file to see whether your code uses symbols that are not declared as a required library, i.e. that are soft dependencies. If you rely on components that are already installed but didn't explicitly request them, this tool will complain about them and you should require them explicitly, making them hard dependencies. This will prevent unexpected updates.
In the situation above you wouldn't get the latest update of thatvendor/api-lib, but your code would continue to work if you had also required guzzlehttp/guzzle before the update.
The tool will also check for usage of PHP functions that are only available if an extension is installed, and will complain if that extension isn't explicitly required.
ComposerRequireChecker is not supposed to be installed as part of your project dependencies.
Please check the releases for available PHAR files. Download the latest release and run it like this:
php composer-require-checker.phar check /path/to/your/project/composer.json
If you already use PHIVE to install and manage your project’s tooling, then you should be able to simply install ComposerRequireChecker like this:
phive install composer-require-checker
This package can be easily globally installed by using Composer:
composer global require maglnet/composer-require-checker
If you haven't already set up your composer installation to support global requirements, please refer to the Composer CLI documentation on global. If this is already done, run it like this:

composer-require-checker check /path/to/your/project/composer.json
If your PHP is including Xdebug when running ComposerRequireChecker, you may experience additional issues like exceeding the Xdebug-related max-nesting-level - and on top, Xdebug slows PHP down.
It is recommended to run ComposerRequireChecker without Xdebug.
If you cannot provide a PHP instance without Xdebug yourself, try setting an environment variable like this for just the command: XDEBUG_MODE=off php composer-require-checker.
A CLI Tool to scan for unused composer packages.
When working in a big repository, you sometimes lose track of your required Composer packages. There may be so many packages you can't be sure if they are actually used or not.
Unfortunately, the composer why command only gives you the information about why a package is installed in dependency to another package.
How do we check whether the provided symbols of a package are used in our code?
composer unused to the rescue!

⚠️ This tool heavily depends on certain versions of its dependencies. A local installation of this tool is not recommended as it might not work as intended or can't be installed correctly. We do recommend you download the .phar archive or use PHIVE to install it locally.
Install via phive or grab the latest composer-unused.phar from the latest release:
phive install composer-unused
curl -OL https://github.com/composer-unused/composer-unused/releases/latest/download/composer-unused.phar
You can also install composer-unused as a local development dependency:
composer require --dev icanhazstring/composer-unused
Depending on the kind of your installation the command might differ.
Note: Packages must be installed via composer install or composer update prior to running composer-unused.

The phar archive can be run directly in your project:

php composer-unused.phar

Having composer-unused as a local dependency you can run it using the shipped binary:

vendor/bin/composer-unused
Sometimes you don't want to scan a certain directory or want to ignore a Composer package while scanning. In these cases, you can provide the --excludeDir or the --excludePackage option. These options accept multiple values as shown next:
php composer-unused.phar --excludeDir=config --excludePackage=symfony/console
php composer-unused.phar \
--excludeDir=bin \
--excludeDir=config \
--excludePackage=symfony/assets \
--excludePackage=symfony/console
Make sure the package is named exactly as in your composer.json.
A composer plugin which enables parallel install process.
>=1.0.0 <2.0
>=5.3
, (suggest >=5.5
, because curl_share_init
)$ composer global require hirak/prestissimo
$ composer global remove hirak/prestissimo
288s -> 26s
$ composer create-project laravel/laravel laravel1 --no-progress --profile --prefer-dist
prestissimo ^0.3.x
Recognize composer's options. You don't need to set any special configuration.
To avoid Composer asking for authentication it is recommended to follow the procedure on composer's authentication.
For github.com you could also use an auth.json file with an oauth access token placed on the same level as your composer.json file:
{
"github-oauth": {
"github.com": "YOUR_GITHUB_ACCESS_TOKEN"
}
}
A static Composer repository generator.
Satis requires a recent PHP version; it does not run with unsupported PHP versions. Check the composer.json file for details.
composer create-project composer/satis:dev-main
php bin/satis build <configuration-file> <output-directory>
Read the more detailed instructions in the documentation.
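As a minimal sketch (the name and URLs are placeholders), a configuration file lists the repositories to scan and which packages to include:

{
  "name": "my/repository",
  "homepage": "https://packages.example.org",
  "repositories": [
    { "type": "vcs", "url": "https://github.com/acme/widget" }
  ],
  "require-all": true
}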
Pull the image:
docker pull composer/satis
Run the image (with Composer cache from host):
docker run --rm --init -it \
--user $(id -u):$(id -g) \
--volume $(pwd):/build \
--volume "${COMPOSER_HOME:-$HOME/.composer}:/composer" \
composer/satis build <configuration-file> <output-directory>
If you want to run the image without implicitly running Satis, you have to override the entrypoint specified in the Dockerfile:

--entrypoint /bin/sh
If you choose to archive packages as part of your build, over time you can be left with useless files. With the purge command, you can delete these files.
php bin/satis purge <configuration-file> <output-dir>
Note: don't do this unless you are certain your projects no longer reference any of these archives in their composer.lock files.
A library to manage PHAR files in a project using Composer.

With the tooly composer-script you can version needed PHAR files in your project's composer.json without adding them directly to a VCS. Every PHAR file will be saved in the composer binary directory.

A real example can be found here.
To use the script execute the following command:
composer require --dev tm/tooly-composer-script
Then add the script in the composer.json under "scripts" with the event names you want to trigger. For example:
...
"scripts": {
"post-install-cmd": "Tooly\\ScriptHandler::installPharTools",
"post-update-cmd": "Tooly\\ScriptHandler::installPharTools"
},
...
Look here for more information about composer events.
The composer.json schema has a part "extra" which is used for the script. It's described here.
In this part you can add your needed phar tools under the key "tools".
...
"extra": {
...
"tools": {
"phpunit": {
"url": "https://phar.phpunit.de/phpunit-5.5.0.phar",
"sign-url": "https://phar.phpunit.de/phpunit-5.5.0.phar.asc"
},
"phpcpd": {
"url": "https://phar.phpunit.de/phpcpd-2.0.4.phar",
"only-dev": true,
"rename": true
},
"security-checker": {
"url": "http://get.sensiolabs.org/security-checker.phar",
"force-replace": true
}
}
...
}
...
After you add the name of the tool as a key, you need only one further parameter: the "url". The url can be a link to a specific version, such as x.y.z, or a link to the latest version of this phar.
rename: Rename the downloaded tool to the name that is used as key.

sign-url: If this parameter is set, tooly checks whether the PHAR file at url has a valid signature by comparing it with the signature at sign-url. This option is useful if you want to be sure that the tool is from the expected author.
Note: For the check you need a further requirement and a GPG binary in your $PATH variable. You can add the requirement with this command: composer require tm/gpg-verifier. This check often fails if you don't have the public key of the tool author in your GPG keychain.
Thank you for following this article.
In today's post we will learn about 10 Popular Go Libraries for Package & Dependency Management.
What is dependency management?
Dependency management is the process of managing all of these interrelated tasks and resources to ensure that your overall project completes successfully, on time, and on budget. When there are dependencies that need to be managed between projects, it's referred to as project interdependency management.
Table of contents:

- Glide
- Godep
- Gom
- Goop
- GOP
- Gopm
- mvn-golang (Maven plugin)
- gpm
- Johnny Deps
Manage your golang vendor and vendored packages with ease. Inspired by tools like Maven, Bundler, and Pip.
Are you used to tools such as Cargo, npm, Composer, Nuget, Pip, Maven, Bundler, or other modern package managers? If so, Glide is the comparable Go tool.
Manage your vendor and vendored packages with ease. Glide is a tool for managing the vendor directory within a Go package. This feature, first introduced in Go 1.5, allows each package to have a vendor directory containing dependent packages for the project. These vendor packages can be installed by a tool (e.g. glide), similar to go get, or they can be vendored and distributed with the package.
The Go community is now using Go Modules to handle dependencies. Please consider using that instead of Glide. Glide is now mostly unmaintained.
Features include:

- Semantic Versioning support: any constraint the github.com/Masterminds/semver package can parse can be used.
- Works with all of the go tools.

Glide scans the source code of your application or library to determine the needed dependencies. To determine the versions and locations (such as aliases for forks) Glide reads a glide.yaml file with the rules. With this information Glide retrieves needed dependencies.

When a dependent package is encountered its imports are scanned to determine dependencies of dependencies (transitive dependencies). If the dependent project contains a glide.yaml file, that information is used to help determine the dependency rules when fetching from a location or version to use. Configuration from Godep, GB, GOM, and GPM is also imported.

The dependencies are exported to the vendor/ directory where the go tools can find and use them. A glide.lock file is generated containing all the dependencies, including transitive ones.

The glide init command can be used to set up a new project, glide update regenerates the dependency versions using scanning and rules, and glide install will install the versions listed in the glide.lock file, skipping scanning, unless the glide.lock file is not found, in which case it will perform an update.
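As a small sketch (the package names are illustrative), a glide.yaml declares the project and its version constraints:

package: github.com/example/myProject
import:
- package: github.com/Masterminds/semver
  version: ^1.4.0
- package: github.com/gorilla/websocket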
A project is structured like this:
- $GOPATH/src/myProject (Your project)
|
|-- glide.yaml
|
|-- glide.lock
|
|-- main.go (Your main go code can live here)
|
|-- mySubpackage (You can create your own subpackages, too)
| |
| |-- foo.go
|
|-- vendor
|-- github.com
|
|-- Masterminds
|
|-- ... etc.
Take a look at the Glide source code to see this philosophy in action.
The easiest way to install the latest release on Mac or Linux is with the following script:
curl https://glide.sh/get | sh
On Mac OS X you can also install the latest release via Homebrew:
$ brew install glide
On Ubuntu Precise (12.04), Trusty (14.04), Wily (15.10) or Xenial (16.04) you can install from our PPA:
sudo add-apt-repository ppa:masterminds/glide && sudo apt-get update
sudo apt-get install glide
On Ubuntu Zesty (17.04) the package is called golang-glide.
Binary packages are available for Mac, Linux and Windows.
For a development version it is also possible to go get github.com/Masterminds/glide.
To build from source you can:

1. Clone this repository into $GOPATH/src/github.com/Masterminds/glide and change directory into it.
2. If you are using Go 1.5, ensure the vendor experiment is enabled, e.g. with export GO15VENDOREXPERIMENT=1. In Go 1.6 it is enabled by default and in Go 1.7 it is always enabled without the ability to turn it off.
3. Run make build.

This will leave you with ./glide, which you can put in your $PATH if you'd like. (You can also take a look at make install to install it for you.)
The Glide repo has now been configured to use glide to manage itself, too.
Dependency tool for go, godep helps build packages reproducibly by fixing their dependencies.
The Go community now has the dep project to manage dependencies. Please consider trying to migrate from Godep to dep. If there is an issue preventing you from migrating please file an issue with dep so the problem can be corrected. Godep will continue to be supported for some time but is considered to be in a state of support rather than active feature development.
go get github.com/tools/godep
Assuming you've got everything working already, so you can build your project with go install and test it with go test, it's one command to start using:
godep save
This will save a list of dependencies to the file Godeps/Godeps.json and copy their source code into vendor/ (or Godeps/_workspace/ when using older versions of Go). Godep does not copy:

- *_test.go files
- testdata directories

Godep does not process the imports of .go files with either the ignore or appengine build tags.
Test files and testdata directories can be saved by adding -t.
Read over the contents of vendor/ and make sure it looks reasonable. Then commit the Godeps/ and vendor/ directories to version control.
The -r flag

For older versions of Go, the -r flag tells save to automatically rewrite package import paths. This allows your code to refer directly to the copied dependencies in Godeps/_workspace. So, a package C that depends on package D will actually import C/Godeps/_workspace/src/D. This makes C's repo self-contained and causes go get to build C with the right version of all dependencies.

If you don't use -r, when using an older version of Go, then in order to use the fixed dependencies and get reproducible builds, you must make sure that every time you run a Go-related command, you wrap it in one of these two ways:

- When you run a command like go, run it as godep go ..., e.g. godep go install -v ./...
- Set $GOPATH using godep path as described below.

Note that -r isn't necessary with go1.6+ and isn't allowed.
Go Manager - bundle for go.
The go get command is useful. But we want to fix the problem where package versions differ from the latest update. Are you going to run go get -tags=1.1 ..., go get -tag=0.3 for each of them? We want to freeze package versions. Ruby's Bundler is awesome.
go get github.com/mattn/gom
gom 'github.com/mattn/go-runewidth', :tag => 'go1'
gom 'github.com/mattn/go-scan', :commit => 'ecb144fb1f2848a24ebfdadf8e64380406d87206'
gom 'github.com/daviddengcn/go-colortext'
gom 'github.com/mattn/go-ole', :goos => 'windows'
# Execute only in the "test" environment.
group :test do
gom 'github.com/mattn/go-sqlite3'
end
# Execute only for the "custom_group" group.
group :custom_group do
gom 'github.com/golang/lint/golint'
end
By default gom install installs all packages, except those in the listed groups. You can install packages from groups based on the environment using flags (development, test & production): gom -test install

Custom groups may be specified using the -groups flag: gom -test -groups=custom_group,special install
Create _vendor directory and bundle packages into it
gom install
Build on current directory with _vendor packages
gom build
Run tests on current directory with _vendor packages
gom test
Generate .travis.yml that uses gom test
gom gen travis-yml
You can always change the name relative to the current $GOPATH directory using an environment variable: GOM_VENDOR_NAME

$ # to use a regular $GOPATH/src folder you should specify GOM_VENDOR_NAME equal '.'
$ GOM_VENDOR_NAME=. gom <command>
Simple dependency manager for Go (golang), inspired by Bundler.
A dependency manager for Go (golang), inspired by Bundler. It is different from other dependency managers in that it does not force you to mess with your GOPATH.

Install Goop: go get github.com/nitrous-io/goop

Create Goopfile. Revision reference (e.g. Git SHA hash) is optional, but recommended. Prefix hashes with #. (This is to futureproof the file format.)
Example:
github.com/mattn/go-sqlite3
github.com/gorilla/context #14f550f51af52180c2eefed15e5fd18d63c0a64a
github.com/dotcloud/docker/pkg/proxy #v1.0.1 // comment
github.com/gorilla/mux !git@github.com:nitrous-io/mux.git // override repo url
Run goop install. This will install packages inside a subdirectory called .vendor and create Goopfile.lock, recording exact versions used for each package and its dependencies. Subsequent goop install runs will ignore Goopfile and install the versions specified in Goopfile.lock. You should check this file in to your source version control. It's a good idea to add .vendor to your version control system's ignore settings (e.g. .gitignore).

Run commands using goop exec (e.g. goop exec make). This will execute your command in an environment that has the correct GOPATH and PATH set.
Go commands can be run without the exec keyword (e.g. goop go test).

Run goop update to ignore an existing Goopfile.lock, and update to the latest versions of packages (as specified in Goopfile).

Running eval $(goop env) will modify GOPATH and PATH in the current shell session, allowing you to run commands without goop exec.
Goop currently only supports Git and Mercurial. This should be fine for 99% of the cases, but you are more than welcome to make a pull request that adds support for Subversion and Bazaar.
Build and manage your Go applications out of GOPATH.
GOP is a project management tool for building your golang applications out of the global GOPATH. In fact gop will keep both the global GOPATH and every project GOPATH. But that means your project will not be go-getable. Of course, GOP itself is go-getable. GOP copies all dependencies from the global GOPATH to your project's src/vendor directory, and all application sources are also in the src directory.
A normal process using gop is below:
git clone xxx@mydata.com:bac/aaa.git
cd aaa
gop ensure -g
gop build
gop test
Please ensure you have installed the go command; GOP will invoke it on building or testing.

go get github.com/lunny/gop

Every project should have a GOPATH directory structure and put a gop.yml in the root directory. This is an example project's directory tree.
int the root directory. This is an example project's directory tree.
<project root>
├── gop.yml
├── bin
├── doc
└── src
├── main
│ └── main.go
├── models
│ └── models.go
├── routes
│ └── routes.go
└── vendor
└── github.com
├── go-xorm
│ ├── builder
│ ├── core
│ └── xorm
└── lunny
├── log
└── tango
Go Package Manager.
Gopm (Go Package Manager) is a Go package manage and build tool for Go.
go get -u github.com/gpmgo/gopm
The executable will be produced under $GOPATH/bin in your file system; for global use, we recommend you add this path to your PATH environment variable.
- Requires git or hg in order to download packages.
- When you run gopm build or gopm install, everything just happens in its own GOPATH and does not bother anything you've done (unless you told it to).
- Project configuration file (.gopmfile).

NAME:
Gopm - Go Package Manager
USAGE:
Gopm [global options] command [command options] [arguments...]
COMMANDS:
list list all dependencies of current project
gen generate a gopmfile for current Go project
get fetch remote package(s) and dependencies
bin download and link dependencies and build binary
config configure gopm settings
run link dependencies and go run
test link dependencies and go test
build link dependencies and go build
install link dependencies and go install
clean clean all temporary files
update check and update gopm resources including itself
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--noterm, -n disable color output
--strict, -s strict mode
--debug, -d debug mode
--help, -h show help
--version, -v print the version
Plugin that provides way for auto-loading of Golang SDK, dependency management and start build environment in Maven project infrastructure.
GO start!
Taste Go in just two commands!
mvn archetype:generate -B -DarchetypeGroupId=com.igormaznitsa -DarchetypeArtifactId=mvn-golang-hello -DarchetypeVersion=2.3.10 -DgroupId=com.go.test -DartifactId=gohello -Dversion=1.0-SNAPSHOT
mvn -f ./gohello/pom.xml package
The first command in the snippet above generates a maven project with some test files, and the second command builds the project. Also you can take a look at the example Hello world project using the plugin.
If you want to generate a multi-module project, you can use the following snippet:
mvn archetype:generate -B -DarchetypeGroupId=com.igormaznitsa -DarchetypeArtifactId=mvn-golang-hello-multi -DarchetypeVersion=2.3.10 -DgroupId=com.go.test -DartifactId=gohello-multi -Dversion=1.0-SNAPSHOT
Introduction
The plug-in wraps the Golang tool-chain and allows the strong Maven-based infrastructure to be used to build Golang projects. It can also automatically download the needed Golang SDK from the main server and pin packages to the needed branch, tag or revision. Because such a Golang project is formed as a regular Maven project, it is possible to work with it in any Java IDE which supports Maven.
How it works
On start the plug-in performs several steps, ultimately executing bin/go with the defined command, with the source folder set as the current folder.
How to build
Because it is a maven plugin, to build the plugin just use
mvn clean install -Pplugin
To save time, examples are excluded from the main build process and activated through a special profile:
mvn clean install -Pexamples
Barebones dependency manager for Go.
Go Package Manager (or gpm, for short) is a tool that helps achieve reproducible builds for Go applications by specifying the revision of each external Go package that the application depends on.
Being simple and unobtrusive are some of the most important design choices for gpm: go get already provides a way to fetch dependencies, relying on version control systems like Git to do it; gpm adds the additional step of setting each dependency repo to the desired revision. Neither Go nor your application even knows about any of this happening; it just works.
To achieve this, gpm uses a manifest file which is assumed to be called Godeps (although you can name it however you want); running gpm fetches all dependencies and ensures each is set to the specified version, down to the revision level.
For a given project, running gpm
in the directory containing the Godeps
file is enough to make sure dependencies in the file are fetched and set to the correct revision.
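To make this concrete, a minimal Godeps manifest pairs each import path with the tag or revision to pin it to (the packages and versions below are illustrative):

github.com/nu7hatch/gotrail         v0.0.2
github.com/replicon/fast-archiver   v1.02
github.com/codegangsta/cli          bca61c7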
However, if you share your GOPATH with other projects, running gpm each time can get old; my solution for that is to isolate dependencies by manipulating the GOPATH. See the workspaces section for details.
You can see gpm in action under this workflow in the following gif:
$ brew install gpm
$ yaourt -S go-gpm
or
$ packer -S go-gpm
Caveat: you'll use go-gpm
instead of just gpm
in the command line, as there is a general purpose linux package under that name already.
Minimal dependency version using Git.
Johnny Deps is a small tool from VividCortex that provides minimalistic dependency versioning for Go repositories using Git. Its primary purpose is to help create reproducible builds when many import paths in various repositories are required to build an application. It's based on a Perl script that provides subcommands for retrieving or building a project, or updating a dependencies file (called Godeps), and listing first-level imports for a project.
Install Johnny Deps by cloning the project's Github repository and running the provided scripts, like this:
git clone https://github.com/VividCortex/johnny-deps.git
cd johnny-deps
./configure --prefix=/your/path
make install
The --prefix
option to configure
is not mandatory; it defaults to /usr/local
if not provided (but you'd have to install as root in that case). The binary will end up at the bin
subdirectory, under the prefix you choose; make sure you have that location specified in your PATH
.
Note that Perl is required, although that's probably provided by your system already. Also Go, Git and (if you're using makefiles) Make.
Johnny Deps is all about project dependencies. Each project should have a file called Godeps at its root, listing the full set of first-level dependencies; i.e., all repositories with Go packages imported directly by this project. The file may be omitted when empty and looks like this:
github.com/VividCortex/godaemon 2fdf3f9fa715a998e834f09e07a8070d9046bcfd
github.com/VividCortex/log 1ffbbe58b5cf1bcfd7a80059dd339764cc1e3bff
github.com/VividCortex/mysql f82b14f1073afd7cb41fc8eb52673d78f481922e
The first column identifies the dependency. The second is the commit identifier for the exact revision the current project depends upon. You can use any identifier Git would accept to checkout, including abbreviated commits, tags and branches. Note, however, that the use of branches is discouraged, because it leads to non-reproducible builds as the tip of the branch moves forward.
jd
is Johnny Deps' main binary. It's a command line tool to retrieve projects from Github, check dependencies, reposition local working copies according to each project's settings, building and updating. It accepts subcommands much like go
or git
do:
jd [global-options] [command] [options] [project]
Global options apply to all commands. Some allow you to change the external tools that are used (go, git and make) in case you don't have them in your path, or otherwise want to use a different version. There's also a -v
option to increase verbosity, that you can provide twice for extra effect. (Note that the tool runs silently by default, only displaying errors, if any.)
It's worth noting that all parameters are optional. If you don't specify a command, it will default to build
(see "Building" below). If you don't specify a project, jd
will try to infer the project based on your current working path, and your setting for GOPATH
. If you're in a subdirectory of any of the GOPATH
components, and you're also in a Git working tree, jd
would be happy to fill up the project for you.
When in doubt, check jd help
.
Converts 'go mod graph' output into GraphViz's DOT language.
go mod graph | modgv | dot -Tpng -o graph.png
For each module:
go get github.com/lucasepe/modgv/modgv
Here 👉 https://graphviz.gitlab.io/download/ how to install GraphViz for your OS.
go mod graph | modgv | dot -Tpng -o graph.png
go mod graph | modgv | dot -Tps2 -o graph.ps
ps2pdf graph.ps graph.pdf
Thank you for following this article.
Learning Golang: Dependencies, Modules and How to manage Packages
1663357620
In today's post we will learn about 10 Best Golang Libraries for Working with Dependency injection.
What is Dependency injection?
Dependency injection is basically providing the objects that an object needs (its dependencies) instead of having it construct them itself.
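To make that definition concrete, here is a minimal plain-Go sketch (the Mailer and Greeter types are made up for illustration and belong to no library below): the Greeter receives its Mailer from outside instead of constructing one itself.

package main

import "fmt"

// Mailer is the dependency abstraction.
type Mailer interface {
	Send(to, msg string) error
}

// SMTPMailer is one concrete implementation.
type SMTPMailer struct{}

func (SMTPMailer) Send(to, msg string) error {
	fmt.Printf("sending to %s: %s\n", to, msg)
	return nil
}

// Greeter declares its dependency instead of creating it.
type Greeter struct {
	mailer Mailer
}

// NewGreeter is where the dependency is injected.
func NewGreeter(m Mailer) *Greeter { return &Greeter{mailer: m} }

func (g *Greeter) Greet(to string) error {
	return g.mailer.Send(to, "hello!")
}

func main() {
	g := NewGreeter(SMTPMailer{}) // swap in any Mailer implementation here
	_ = g.Greet("gopher@example.com")
}

A DI container automates exactly this wiring once the object graph grows; the libraries below differ mainly in how that wiring is declared.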
Table of contents:
Alice is an additive dependency injection container for Golang.
Design philosophy behind Alice:
$ go get github.com/magic003/alice
Alice is inspired by the design of Spring JavaConfig.
It usually takes 3 steps to use Alice.
The instances to be managed by the container are defined in modules. There could be multiple modules organized by the functionality of the instances. Modules are usually placed in a separate package.
A typical module looks like this:
type ExampleModule struct {
alice.BaseModule
Foo Foo `alice:""`
Bar Bar `alice:"Bar"`
Baz Baz
}
func (m *ExampleModule) InstanceX() X {
return X{m.Foo}
}
func (m *ExampleModule) InstanceY() Y {
return Y{m.Baz}
}
A module struct must embed the alice.BaseModule struct. It allows 3 types of fields:
- Fields with an alice:"" tag: they will be associated with the same or an assignable type of instance defined in other modules.
- Fields with a named tag like alice:"Bar": they will be associated with the instance named Bar defined in other modules.
- Fields without an alice tag: they will not be associated with any instance defined in other modules. They are expected to be provided when initializing the module; they are not managed by the container and cannot be retrieved.

It is also common that no field is defined in a module struct.
Any public method of the module struct defines one instance to be initialized and maintained by the container. It is required to use a pointer receiver. The method name will be used as the instance name. The return type will be used as the instance type. Inside the method, it can use any field of the module struct to create new instances.
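The snippet above covers step 1 (defining modules); to round it off, a short sketch of creating the container and retrieving an instance by name, which I believe matches Alice's CreateContainer/InstanceByName API (the Baz zero value here is a placeholder):

container := alice.CreateContainer(&ExampleModule{Baz: Baz{}})
x := container.InstanceByName("InstanceX").(X)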
Dependency injection for Go programming language.
Dependency injection is one form of the broader technique of inversion of control. It is used to increase modularity of the program and make it extensible.
This library helps you to organize responsibilities in your codebase and make it easy to combine low-level implementation into high-level behavior without boilerplate.
go get github.com/goava/di
package main
import (
"context"
"fmt"
"log"
"net/http"
"os"
"os/signal"
"syscall"
"github.com/goava/di"
)
func main() {
di.SetTracer(&di.StdTracer{})
// create container
c, err := di.New(
di.Provide(NewContext), // provide application context
di.Provide(NewServer), // provide http server
di.Provide(NewServeMux), // provide http serve mux
// controllers as []Controller group
di.Provide(NewOrderController, di.As(new(Controller))),
di.Provide(NewUserController, di.As(new(Controller))),
)
// handle container errors
if err != nil {
log.Fatal(err)
}
// invoke function
if err := c.Invoke(StartServer); err != nil {
log.Fatal(err)
}
}
Full code available here.
If you have any questions, feel free to create an issue.
We recommend consuming SemVer major version 1
using your dependency manager of choice.
$ glide get 'go.uber.org/dig#^1'
$ dep ensure -add "go.uber.org/dig@v1"
$ go get 'go.uber.org/dig@v1'
This library is v1
and follows SemVer strictly.
No breaking changes will be made to exported APIs before v2.0.0
.
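The dig README stops at installation here; as a minimal sketch of its core Provide/Invoke flow (the Config type and constructor are made up):

package main

import (
	"fmt"

	"go.uber.org/dig"
)

// Config is a stand-in dependency.
type Config struct{ Addr string }

func NewConfig() *Config { return &Config{Addr: ":8080"} }

func main() {
	c := dig.New()
	// register the constructor; dig learns it can build *Config
	if err := c.Provide(NewConfig); err != nil {
		panic(err)
	}
	// dig resolves *Config and passes it to the function
	if err := c.Invoke(func(cfg *Config) {
		fmt.Println("listening on", cfg.Addr)
	}); err != nil {
		panic(err)
	}
}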
Dingo works very similarly to Guice.
Basically one binds implementations/factories to interfaces, which are then resolved by Dingo.
Given that Dingo's idea is based on Guice we use similar examples in this documentation:
The following example shows a BillingService with two injected dependencies. Please note that Go's nature does not allow constructors, nor decorations/annotations besides struct tags; thus, we only use struct tags (and, later, arguments for providers).
Also, since Go does not have a way to reference types (like Java's Something.class), we use either pointers or nil cast to a pointer to the interface we want to specify: (*Something)(nil). Dingo then knows how to dereference it properly and derive the correct type Something. This is not necessary for structs, where we can just use the zero value via Something{}.
See the example folder for a complete example.
package example
type BillingService struct {
processor CreditCardProcessor
transactionLog TransactionLog
}
func (billingservice *BillingService) Inject(processor CreditCardProcessor, transactionLog TransactionLog) {
billingservice.processor = processor
billingservice.transactionLog = transactionLog
}
func (billingservice *BillingService) ChargeOrder(order PizzaOrder, creditCard CreditCard) Receipt {
// ...
}
We want the BillingService to get certain dependencies, and configure this in a BillingModule
which implements dingo.Module
:
package example
type BillingModule struct {}
func (module *BillingModule) Configure(injector *dingo.Injector) {
// This tells Dingo that whenever it sees a dependency on a TransactionLog,
// it should satisfy the dependency using a DatabaseTransactionLog.
injector.Bind(new(TransactionLog)).To(DatabaseTransactionLog{})
// Similarly, this binding tells Dingo that when CreditCardProcessor is used in
// a dependency, that should be satisfied with a PaypalCreditCardProcessor.
injector.Bind(new(CreditCardProcessor)).To(PaypalCreditCardProcessor{})
}
Every instance that is created through the container can use injection.
Dingo supports two ways of requesting dependencies that should be injected:
For every requested injection (unless an exception applies) Dingo does the following:
A dependency injection toolkit based on Go 1.18+ Generics.
This library implements the Dependency Injection design pattern. It may replace the fantastic uber/dig package in simple Go projects. samber/do uses Go 1.18+ generics instead of reflection and is therefore typesafe.
Why this name?
I love short names for such utility libraries. This name is the sum of DI and Go, and no Go package currently uses this name.
🚀 Services are loaded in invocation order.
🕵️ Service health can be checked individually or globally. Services implementing do.Healthcheckable
interface will be called via do.HealthCheck[type]()
or injector.HealthCheck()
.
🛑 Services can be shut down properly, in reverse initialization order. Services implementing do.Shutdownable
interface will be called via do.Shutdown[type]()
or injector.Shutdown()
.
go get github.com/samber/do@v1
This library is v1 and follows SemVer strictly.
No breaking changes will be made to exported APIs before v2.0.0.
You can import do
using:
import (
"github.com/samber/do"
)
Then instantiate services:
func main() {
injector := do.New()
// provides CarService
do.Provide(injector, NewCarService)
// provides EngineService
do.Provide(injector, NewEngineService)
car := do.MustInvoke[*CarService](injector)
car.Start()
// prints "car starting"
do.HealthCheck[EngineService](injector)
// returns "engine broken"
// injector.ShutdownOnSIGTERM() // will block until receiving sigterm signal
injector.Shutdown()
// prints "car stopped"
}
Services:
type EngineService interface{}
func NewEngineService(i *do.Injector) (EngineService, error) {
return &engineServiceImplem{}, nil
}
type engineServiceImplem struct {}
// [Optional] Implements do.Healthcheckable.
func (c *engineServiceImplem) HealthCheck() error {
return fmt.Errorf("engine broken")
}
func NewCarService(i *do.Injector) (*CarService, error) {
engine := do.MustInvoke[EngineService](i)
car := CarService{Engine: engine}
return &car, nil
}
type CarService struct {
Engine EngineService
}
func (c *CarService) Start() {
println("car starting")
}
// [Optional] Implements do.Shutdownable.
func (c *CarService) Shutdown() error {
println("car stopped")
return nil
}
An application framework for Go that:
- makes dependency injection easy.
- eliminates the need for global state and func init().

We recommend locking to SemVer range ^1
using go mod:
go get go.uber.org/fx@v1
This library is v1
and follows SemVer strictly.
No breaking changes will be made to exported APIs before v2.0.0
.
This project follows the Go Release Policy. Each major version of Go is supported until there are two newer major releases.
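To give a feel for the API, a minimal sketch of fx's Provide/Invoke flow (the greeting constructor is made up; real applications provide servers, loggers and so on):

package main

import (
	"fmt"

	"go.uber.org/fx"
)

// NewGreeting is a stand-in constructor.
func NewGreeting() string { return "hello from fx" }

func main() {
	fx.New(
		fx.Provide(NewGreeting),                      // register the constructor
		fx.Invoke(func(g string) { fmt.Println(g) }), // runs on startup with the resolved value
	).Run() // Run blocks until the app receives a shutdown signal
}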
gocontainer - Dependency Injection Container
🚏 HOW TO USE
The first file, main.go, simply gets the repository from the container and prints it. We use the MustInvoke method to demonstrate how we keep type safety.
package main
import (
	"fmt"

	"github.com/vardius/gocontainer"
	"github.com/vardius/gocontainer/example/repository"
)

func main() {
	gocontainer.MustInvoke("repository.mysql", func(r repository.Repository) {
		fmt.Println(r)
	})
}
Our database implementation uses an init() function to register the db service
package database
import (
	// note: a MySQL driver (e.g. github.com/go-sql-driver/mysql) must also be
	// imported for sql.Open("mysql", ...) to work; omitted here as in the original
	"database/sql"

	"github.com/vardius/gocontainer"
)

func NewDatabase() *sql.DB {
	db, _ := sql.Open("mysql", "dsn")
	return db
}

func init() {
	gocontainer.Register("db", NewDatabase())
}
Our repository accesses the previously registered db service and, following the same pattern, uses an init() function to register the repository service within the container
package repository
import (
	"database/sql"

	"github.com/vardius/gocontainer"
	_ "github.com/vardius/gocontainer/example/database"
)
type Repository interface {}
func NewRepository(db *sql.DB) Repository {
return &mysqlRepository{db}
}
type mysqlRepository struct {
db *sql.DB
}
func init() {
db := gocontainer.MustGet("db")
gocontainer.Register("repository.mysql", NewRepository(db.(*sql.DB)))
}
You can disable global container instance by setting gocontainer.GlobalContainer
to nil
. This package allows you to create many containers.
package main
import (
"github.com/vardius/gocontainer/example/repository"
"github.com/vardius/gocontainer"
)
func main() {
// disable global container instance
gocontainer.GlobalContainer = nil
mycontainer := gocontainer.New()
mycontainer.Register("test", 1)
}
Please check GoDoc for more methods and examples.
I've been using Dependency Injection in Java for nearly 10 years via Spring Framework. I'm not saying that one can't live without it, but it's proven to be very useful for large enterprise-level applications. You may argue that Go follows a completely different ideology, values different principles and paradigms than Java, and DI is not needed in this better world. And I can even partly agree with that. And yet I decided to create this light-weight Spring-like library for Go. You are free to not use it, after all 🙂
No, of course not. There's a bunch of libraries around which serve a similar purpose (I even took inspiration from some of them). The problem is that I was missing something in all of these libraries... Therefore I decided to create Yet Another IoC Container that would rule them all. You are more than welcome to use any other library, for example this nice project. And still, I'd recommend stopping by here 😉
It's better to show than to describe. Take a look at this toy-example (error-handling is omitted to minimize code snippets):
services/weather_service.go
package services
import (
"io/ioutil"
"net/http"
)
type WeatherService struct {
}
func (ws *WeatherService) Weather(city string) (*string, error) {
response, err := http.Get("https://wttr.in/" + city)
if err != nil {
return nil, err
}
all, err := ioutil.ReadAll(response.Body)
if err != nil {
return nil, err
}
weather := string(all)
return &weather, nil
}
controllers/weather_controller.go
package controllers
import (
"di-demo/services"
"github.com/goioc/di"
"net/http"
)
type WeatherController struct {
// note that injection works even with unexported fields
weatherService *services.WeatherService `di.inject:"weatherService"`
}
func (wc *WeatherController) Weather(w http.ResponseWriter, r *http.Request) {
weather, _ := wc.weatherService.Weather(r.URL.Query().Get("city"))
_, _ = w.Write([]byte(*weather))
}
GoLobby Container is a lightweight yet powerful IoC (dependency injection) container for Go projects. It's neat, easy to use, and built with performance in mind.
Features:
It requires Go v1.11
or newer versions.
To install this package, run the following command in your project directory.
go get github.com/golobby/container/v3
GoLobby Container is used to bind abstractions to their implementations. Binding is the process of introducing appropriate concretes (implementations) of abstractions to an IoC container. In this process, you also determine the resolving type, singleton or transient. In singleton bindings, the container provides an instance once and returns it for all the requests. In transient bindings, the container always returns a brand-new instance for each request. After the binding process, you can ask the IoC container to make the appropriate implementation of the abstraction that your code needs. Then your code will depend on abstractions, not implementations!
The following example demonstrates a simple binding and resolving.
// Bind Config interface to JsonConfig struct
err := container.Singleton(func() Config {
return &JsonConfig{...}
})
var c Config
err = container.Resolve(&c)
// `c` will be the instance of JsonConfig
The following snippet expresses singleton binding.
err := container.Singleton(func() Abstraction {
return Implementation
})
// If you might return an error...
err := container.Singleton(func() (Abstraction, error) {
return Implementation, nil
})
It takes a resolver (function) whose return type is the abstraction and the function body returns the concrete (implementation).
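For contrast, a transient binding (using container.Transient, which I understand to be the v3 counterpart of Singleton) hands out a brand-new instance on every resolution:

// Transient binding: a fresh instance for each Resolve call
err := container.Transient(func() Abstraction {
	return Implementation
})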
Wire is a code generation tool that automates connecting components using dependency injection. Dependencies between components are represented in Wire as function parameters, encouraging explicit initialization instead of global variables. Because Wire operates without runtime state or reflection, code written to be used with Wire is useful even for hand-written initialization.
Install Wire by running:
go install github.com/google/wire/cmd/wire@latest
and ensuring that $GOPATH/bin
is added to your $PATH
.
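As a hedged sketch of how Wire is typically used (types made up, following the project's tutorial pattern): you write an injector function whose body is a single wire.Build call, and the wire tool generates the real implementation into wire_gen.go:

//go:build wireinject

package main

import "github.com/google/wire"

type Message string

type Greeter struct{ M Message }

func NewMessage() Message { return "hi there" }

func NewGreeter(m Message) Greeter { return Greeter{M: m} }

// InitGreeter declares the object graph; wire fills in the real body.
func InitGreeter() Greeter {
	wire.Build(NewMessage, NewGreeter)
	return Greeter{}
}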
As of version v0.3.0, Wire is beta and is considered feature complete. It works well for the tasks it was designed to perform, and we prefer to keep it as simple as possible.
We'll not be accepting new features at this time, but will gladly accept bug reports and fixes.
Thank you for following this article.
Dependency Injection Best Practices with the Go Context package
1661415720
A powerful dependency injection micro container
BottleJS is a tiny, powerful dependency injection container. It features lazy loading, middleware hooks, decorators and a clean api inspired by the AngularJS Module API and the simple PHP library Pimple. You'll like BottleJS if you enjoy:
BottleJS supports IE9+ and other ECMAScript 5 compliant browsers.
BottleJS can be used in a browser or in a nodejs app. It can be installed via bower or npm:
$ bower install bottlejs
$ npm install bottlejs
BottleJS is also available on cdnjs:
<script src="https://cdnjs.cloudflare.com/ajax/libs/bottlejs/2.0.1/bottle.min.js"></script>
The simplest recipe to get started with is Bottle#service
. Say you have a constructor for a service object:
var Beer = function() { /* A beer service, :yum: */ };
You can register the constructor with Bottle#service
:
var bottle = new Bottle();
bottle.service('Beer', Beer);
Later, when you need the constructed service, you just access the Beer
property like this:
bottle.container.Beer;
A lot happened behind the scenes:
- When the bottle.container.Beer property was accessed, Bottle looked up the provider and executed the factory to build and return the Beer service.
- The bottle.container.Beer property was set to be the Beer service instance, so accessing bottle.container.Beer in the future becomes a simple property lookup.

The above example is simple. But, what if the Beer service had dependencies? For example:
var Barley = function() {};
var Hops = function() {};
var Water = function() {};
var Beer = function(barley, hops, water) { /* A beer service, :yum: */ };
You can register services with Bottle#service
and include dependencies like this:
var bottle = new Bottle();
bottle.service('Barley', Barley);
bottle.service('Hops', Hops);
bottle.service('Water', Water);
bottle.service('Beer', Beer, 'Barley', 'Hops', 'Water');
Now, when you access bottle.container.Beer
, Bottle will lazily load all of the dependencies and inject them into your Beer service before returning it.
If you need more complex logic when generating a service, you can register a factory instead. A factory function receives the container as an argument, and should return your constructed service:
var bottle = new Bottle();
bottle.service('Barley', Barley);
bottle.service('Hops', Hops);
bottle.service('Water', Water);
bottle.factory('Beer', function(container) {
var barley = container.Barley;
var hops = container.Hops;
var water = container.Water;
barley.halved();
hops.doubled();
water.spring();
return new Beer(barley, hops, water);
});
This is the meat of the Bottle library. The above methods Bottle#service
and Bottle#factory
are just shorthand for the provider function. You usually can get by with the simple functions above, but if you really need more granular control of your services in different environments, register them as a provider. To use it, pass a constructor for the provider that exposes a $get
function. The $get
function is used as a factory to build your service.
var bottle = new Bottle();
bottle.service('Barley', Barley);
bottle.service('Hops', Hops);
bottle.service('Water', Water);
bottle.provider('Beer', function() {
// This environment may not support water.
// We should polyfill it.
if (waterNotSupported) {
Beer.polyfillWater();
}
// this is the service factory.
this.$get = function(container) {
var barley = container.Barley;
var hops = container.Hops;
var water = container.Water;
barley.halved();
hops.doubled();
water.spring();
return new Beer(barley, hops, water);
};
});
Bottle supports injecting decorators into the provider pipeline with the Bottle#decorator
method. Bottle decorators are just simple functions that intercept a service in the provider phase after it has been created, but before it is accessed for the first time. The function should return the service, or another object to be used as the service instead.
var bottle = new Bottle();
bottle.service('Beer', Beer);
bottle.service('Wine', Wine);
bottle.decorator(function(service) {
// this decorator will be run for both Beer and Wine services.
service.stayCold();
return service;
});
bottle.decorator('Wine', function(wine) {
// this decorator will only affect the Wine service.
wine.unCork();
return wine;
});
Bottle middleware are similar to decorators, but they are executed every time a service is accessed from the container. They are passed the service instance and a next
function:
var bottle = new Bottle();
bottle.service('Beer', Beer);
bottle.middleware(function(service, next) {
// this middleware will be executed for all services
console.log('A service was accessed!');
next();
});
bottle.middleware('Beer', function(beer, next) {
// this middleware will only affect the Beer service.
console.log('Beer? Nice. Tip your bartender...');
next();
});
Middleware can pass an error object to the next
function, and bottle will throw the error:
var bottle = new Bottle();
bottle.service('Beer', Beer);
bottle.middleware('Beer', function(beer, next) {
if (beer.hasGoneBad()) {
return next(new Error('The Beer has gone bad!'));
}
next();
});
// results in Uncaught Error: The Beer has gone bad!(…)
Bottle will generate nested containers if dot notation is used in the service name. An isolated sub container will be created for you based on the name given:
var bottle = new Bottle();
var IPA = function() {};
bottle.service('Beer.IPA', IPA);
bottle.container.Beer; // this is a new Bottle.container object
bottle.container.Beer.IPA; // the service
bottle.factory('Beer.DoubleIPA', function (container) {
var IPA = container.IPA; // note the container in here is the nearest parent.
})
Nested containers are designed to provide isolation between different packages. This means that you cannot access a nested container from a different parent when you are writing a factory.
var bottle = new Bottle();
var IPA = function() {};
var Wort = function() {};
bottle.service('Ingredients.Wort', Wort);
bottle.factory('Beer.IPA', function(container) {
// container is `Beer`, not the root, so:
container.Wort; // undefined
container.Ingredients.Wort; // undefined
});
Used to get an instance of bottle. If a name is passed, bottle will return the same instance. Calling the Bottle constructor as a function will call and return Bottle.pop
, so Bottle.pop('Soda') === Bottle('Soda')
Param | Type | Details |
---|---|---|
name (optional) | String | The name of the bottle. If passed, bottle will store the instance internally and return the same instance if Bottle.pop is subsequently called with the same name. |
Removes the named instance from bottle's internal store, if it exists. The immediately subsequent call to Bottle.pop(name)
will return a new instance. If no name is given, all named instances will be cleared.
In general, this function should only be called in situations where you intend to reset the bottle instance with new providers, decorators, etc. such as test setup.
Param | Type | Details |
---|---|---|
name (optional) | String | The name of the bottle. If passed, bottle will remove the internal instance, if such a bottle was created using Bottle.pop . If not passed, all named internal instances will be cleared. |
Used to list the names of all registered constants, values, and services on the container. Must pass a container to the global static version Bottle.list(bottle.container)
. The instance and container versions return the services that are registered within.
Returns an array of strings.
Param | Type | Details |
---|---|---|
container (optional) | Object | A bottle.container . Only required when using the global, static Bottle.list method. The prototype version uses that instance's container, and the container version uses itself. |
A global configuration object.
Property | Type | Default | Details |
---|---|---|---|
strict | Boolean | false | Enables strict mode. Currently only verifies that automatically injected dependencies are not undefined. |
A collection of decorators registered by the bottle instance. See decorator(name, func)
below
A collection of middleware registered by the bottle instance. See middleware(name, func)
below.
A collection of nested bottles registered by the parent bottle instance when dot notation is used to define a service. See "Nested Bottles" section in the documentation above.
A collection of registered provider names. Bottle uses this internally to determine whether a provider has already instantiated its instance. See provider(name, Provider) below.
An array of deferred functions registered for this bottle instance. See defer(func)
below.
Used to add a read only value to the container.
Param | Type | Details |
---|---|---|
name | String | The name of the constant. Must be unique to each Bottle instance. |
value | Mixed | A value that will be defined as enumerable, but not writable. |
Used to register a decorator function that the provider will use to modify your services at creation time. bottle.container.$decorator
is an alias of bottle.decorator
; this allows you to only add a decorator to a nested bottle.
Param | Type | Details |
---|---|---|
name (optional) | String | The name of the service this decorator will affect. Will run for all services if not passed. |
func | Function | A function that will accept the service as the first parameter. Should return the service, or a new object to be used as the service. |
Register a function to be executed when Bottle#resolve
is called.
Param | Type | Details |
---|---|---|
func | Function | A function to be called later. Will be passed a value given to Bottle#resolve . |
Immediately instantiates an array of services and returns their instances in the order in which they were listed.
Param | Type | Details |
---|---|---|
services | Array | Array of services that should be instantiated. |
Used to register a service factory
Param | Type | Details |
---|---|---|
name | String | The name of the service. Must be unique to each Bottle instance. |
Factory | Function | A function that should return the service object. Will only be called once; the Service will be a singleton. Gets passed an instance of the container to allow dependency injection when creating the service. |
Used to register a service instance factory that will return an instance when called.
Param | Type | Details |
---|---|---|
name | String | The name of the service. Must be unique to each Bottle instance. |
Factory | Function | A function that should return a fully configured service object. This factory function will be executed when a new instance is created. Gets passed an instance of the container. |
var bottle = new Bottle();
var Hefeweizen = function(container) { return { abv: Math.random() * (6 - 4) + 4 }};
bottle.instanceFactory('Beer.Hefeweizen', Hefeweizen);
var hefeFactory = bottle.container.Beer.Hefeweizen; // This is an instance factory with a single `instance` method
var beer1 = hefeFactory.instance(); // Calls factory function to create a new instance
var beer2 = hefeFactory.instance(); // Calls factory function to create a second new instance
beer1 !== beer2 // true
This pattern is especially useful for request based context objects that store state or things like database connections. See the documentation for Google Guice's InjectingProviders for more examples.
Used to register a middleware function. This function will be executed every time the service is accessed.
Param | Type | Details |
---|---|---|
name (optional) | String | The name of the service for which this middleware will be called. Will run for all services if not passed. |
func | Function | A function that will accept the service as the first parameter, and a next function as the second parameter. Should execute next() to allow other middleware in the stack to execute. Bottle will throw anything passed to the next function, i.e. next(new Error('error msg')) . |
Used to register a service provider
Param | Type | Details |
---|---|---|
name | String | The name of the service. Must be unique to each Bottle instance. |
Provider | Function | A constructor function that will be instantiated as a singleton. Should expose a function called $get that will be used as a factory to instantiate the service. |
Used to reset providers for the next reference to re-instantiate the provider. If the names param is passed, will reset only the named providers.
Param | Type | Details |
---|---|---|
names (optional) | Array | An array of strings which contains names of the providers to be reset. |
Used to register a service, factory, provider, or value based on properties of the Obj. bottle.container.$register
is an alias of bottle.register
; this allows factories and providers to register multiple services on the container without needing access to the bottle instance itself.
If Bottle.config.strict
is set to true
, this method will throw an error if an injected dependency is undefined
.
Param | Type | Details |
---|---|---|
Obj | Object|Function | An object or constructor with one of several properties:
|
Execute any deferred functions registered by Bottle#defer
.
Param | Type | Details |
---|---|---|
data (optional) | Mixed | Value to be passed to each deferred function as the first parameter. |
Used to register a service constructor. If Bottle.config.strict
is set to true
, this method will throw an error if an injected dependency is undefined
.
Param | Type | Details |
---|---|---|
name | String | The name of the service. Must be unique to each Bottle instance. |
Constructor | Function | A constructor function that will be instantiated as a singleton. |
dependency (optional) | String | An optional name for a dependency to be passed to the constructor. A dependency will be passed to the constructor for each name passed to Bottle#service in the order they are listed. |
Used to register a service factory function. Works exactly like factory
except the factory arguments will be injected instead of receiving the container
. This is useful when implementing the Module Pattern or adding dependencies to your Higher Order Functions.
function packageKeg(Barrel, Beer, Love) {
Barrel.add(Beer, Love);
return {
tap : function() {
return Barrel.dispense();
}
};
}
bottle.serviceFactory('Keg', packageKeg, 'Barrel', 'Beer', 'Love');
If Bottle.config.strict
is set to true
, this method will throw an error if an injected dependency is undefined
.
Param | Type | Details |
---|---|---|
name | String | The name of the service. Must be unique to each Bottle instance. |
serviceFactory | Function | A function that will be invoked to create the service object/value. |
dependency (optional) | String | An optional name for a dependency to be passed to the service function. A dependency will be passed to the service function for each name passed to Bottle#serviceFactory in the order they are listed. |
Used to add an arbitrary value to the container.
Param | Type | Details |
---|---|---|
name | String | The name of the value. Must be unique to each Bottle instance. |
val | Mixed | A value that will be defined as enumerable, but not writable. |
A TypeScript declaration file is bundled with this package. To get TypeScript to resolve it automatically, you need to set moduleResolution
to node
in your tsconfig.json
.
Author: Young-steveo
Source Code: https://github.com/young-steveo/bottlejs
License: MIT license
1660901700
Packages are installed to a Prefix
; a folder that acts similar to the /usr/local
directory on Unix-like systems, containing a bin
folder for binaries, a lib
folder for libraries, etc... Prefix
objects can have tarballs install()
'ed within them, uninstall()
'ed from them, etc...
BinaryProvider
has the concept of a Product
, the result of a package installation. LibraryProduct
and ExecutableProduct
are two example Product
object types that can be used to keep track of the binary objects installed by an install()
invocation. Products
can check to see if they are already satisfied (e.g. whether a file exists, or is executable, or is dlopen()
'able), allowing for very quick and easy build.jl
construction.
BinaryProvider
also contains a platform abstraction layer for common operations like downloading and unpacking tarballs. The primary method you should be using to interact with these operations is through the install()
method, however if you need more control, there are more fundamental methods such as download_verify()
, or unpack()
, or even the wittingly-named download_verify_unpack()
.
The method documentation within the BinaryProvider module should be considered the primary source of documentation for this package; usage examples are provided in the form of the LibFoo.jl mock package within this repository, as well as in other packages that use this package for binary installation.
To download and install a package into a Prefix
, the basic syntax is:
prefix = Prefix("./deps")
install(url, tarball_hash; prefix=prefix)
It is recommended to inspect examples for a fuller treatment of installation, the LibFoo.jl
package within this repository contains a deps/build.jl
file that may be instructive.
To actually generate the tarballs that are installed by this package, check out the BinaryBuilder.jl
package.
This package contains a run(::Cmd)
wrapper class named OutputCollector
that captures the output of shell commands, and in particular, captures the stdout
and stderr
streams separately, colorizing, buffering and timestamping appropriately to provide seamless printing of shell output in a consistent and intuitive way. Critically, it also allows for saving of the captured streams to log files, a very useful feature for BinaryBuilder.jl
, which makes extensive use of this class, however all commands run by BinaryProvider.jl
also use this same mechanism to provide coloring of stderr
.
When providing ExecutableProduct
s to a client package, BinaryProvider
will automatically append Julia's private library directory to LD_LIBRARY_PATH
on Linux, and DYLD_LIBRARY_PATH
on macOS. This is due to the fact that the compiled binaries may be dependent on libraries such as libgfortran
, which ship with Julia and must be found by the system linker or else the binaries will not function. If you wish to use the binaries outside of Julia, you may need to override those environment variables in a similar fashion; see the generated deps.jl
file for the check_deps()
function where the precise overriding values can be found.
Author: JuliaPackaging
Source Code: https://github.com/JuliaPackaging/BinaryProvider.jl
License: View license
1660744283
Packrat has been soft-deprecated and is now superseded by renv.
While we will continue maintaining Packrat, all new development will focus on renv
. If you're interested in switching to renv
, you can use renv::migrate()
to migrate a project from Packrat to renv
.
packrat
Packrat is a dependency management system for R.
Use packrat to make your R projects more:
See the project page for more information, or join the discussion on the RStudio Community forums.
Read the release notes to learn what's new in Packrat.
Quick-start Guide
Start by installing Packrat:
install.packages("packrat")
Then, start a new R session at the base directory of your project and type:
packrat::init()
This will install Packrat, set up a private library to be used for this project, and then place you in packrat mode
. While in packrat mode, calls to functions like install.packages
and remove.packages
will modify the private project library, rather than the user library.
When you want to manage the state of your private library, you can use the Packrat functions:
- packrat::snapshot(): Save the current state of your library.
- packrat::restore(): Restore the library state saved in the most recent snapshot.
- packrat::clean(): Remove unused packages from your library.

Share a Packrat project with bundle and unbundle:
- packrat::bundle(): Bundle a packrat project, for easy sharing.
- packrat::unbundle(): Unbundle a packrat project, generating a project directory with libraries restored from the most recent snapshot.

Navigate projects and set/get options with:
- packrat::on(), packrat::off(): Toggle packrat mode on and off, for navigating between projects within a single R session.
- packrat::get_opts, packrat::set_opts: Get/set project-specific settings.

Manage ad-hoc local repositories (note that these are a separate entity from CRAN-like repositories):
- packrat::set_opts(local.repos = ...) can be used to specify local repositories; that is, directories containing (unzipped) package sources.
- packrat::install_local() installs packages available in a local repository.

For example, suppose I have the (unzipped) package sources for digest located within the folder ~/git/R/digest/. To install this package, you can use:
packrat::set_opts(local.repos = "~/git/R")
packrat::install_local("digest")
There are also utility functions for using and managing packages in the external / user library, which can be useful for leveraging packages in the user library that you might not want as project-specific dependencies, e.g. devtools, knitr, roxygen2:
- packrat::extlib(): Load an external package.
- packrat::with_extlib(): With an external package, evaluate an expression. The external package is loaded only for the duration of the evaluated expression, but note that there may be other side effects associated with the package's .onLoad, .onAttach and .onUnload calls that we may not be able to fully control.

Workflows
Packrat supports a set of common analytic workflows:
- As-you-go: use packrat::init() to initialize packrat with your project, and use it to manage your project library while you develop your analysis. As you install and remove packages, you can use packrat::snapshot() and packrat::restore() to maintain the R packages in your project. For collaboration, you can either use your favourite version control system, or use packrat::bundle() to generate a bundled version of your project that collaborators can use with packrat::unbundle().
- When-you're-done: take an existing or complete analysis (preferably collected within one directory), and call packrat::init() to immediately obtain R package sources for all packages used in your project, and snapshot that state so it can be preserved across time.
Setting up your own custom, CRAN-like repositories
Please view the set-up guide here for a simple walkthrough in how you might set up your own, local, custom CRAN repository.
Author: rstudio
Source Code: https://github.com/rstudio/packrat
1659787396
Fast, lightweight and zero dependency framework for bunjs 🚀
Bun is the latest and arguably the fastest runtime environment for javascript, similar to node and deno. Bun uses the JSC (JavaScriptCore) engine, unlike node and deno (which use V8), which is part of the reason why it's faster than node and deno.
Bun is written in Zig, a low-level programming language with manual memory management.
Bun supports ~90% of the native nodejs APIs, including fs, path, etc., and also distributes its packages using npm; hence both yarn and npm are supported in bun.
Colstonjs is a fast, minimal and highly configurable typescript-based api framework, highly inspired by Expressjs and fastify, for building high performance APIs; colstonjs is completely built on bunjs.
🐎 Bun - Bun needs to be installed locally on your development machine.
💻 To install bun head over to the official website and follow the installation instructions.
🧑💻 To install colstonjs run
$ bun add colstonjs
Although colstonjs is distributed under npm, colstonjs is only available for bun; node and deno are not currently supported.
Importing the colstonjs into the application
import Colston from "colstonjs";
// initializing Colston
const serverOptions = {
port: 8000,
env: "development"
};
// initialize app with server options
const app: Colston = new Colston(serverOptions);
A simple get request
// server.ts
...
app.get("/", function(ctx) {
return ctx.status(200).text("OK"); // OK
});
...
To allow the application to accept requests, we have to call the start() method with an optional port and/or callback function.
This will start an http server listening on all interfaces (0.0.0.0) on the specified port.
// server.ts
...
server.start(port?, cb?);
The port number can be passed into the app through the server options or as the first argument of the start() method. If the port number is passed both as part of the server options and in the start() method, then the port number passed into start() takes priority. If neither is provided, then the app will default to port 3000.
The callback method is immediately invoked once the connection is successfully established and the application is ready to accept requests.
// server.ts
import Colston, { type Context } from "colstonjs";
const app: Colston = new Colston({ env: "development" });
app.set("port", 8000);
app.get("/", (ctx: Context) => {
return ctx.status(200).json({ message: "Hello World!" });
});
// start the server
app.start(app.get('port'), () => console.log(`server listening on port ${app.get("port")}`));
The request body can be parsed as json or text:
// server.ts
import Colston, { type Context } from "colstonjs";
const app: Colston = new Colston({ env: "development" });
app.get("/", async (ctx: Context) => {
const body = await ctx.request.json();
const body2 = await ctx.request.text();
return ctx.status(200).json({ body, body2 });
});
app.start(8000);
// server.ts
import Colston, { type Context } from "colstonjs";
const app: Colston = new Colston({ env: "development" });
app.get("/user/:id/name/:name", async (ctx: Context) => {
const user = ctx.request.params;
// make an api call to a backend datastore to retrieve user details
const userDetails = await getUserDetails(user.id); // e.g: { id: 12345, name: "jane"}
return ctx.status(200).json({ user: userDetails});
});
app.start(8000);
// server.ts
import Colston, { type Context } from "colstonjs";
const app: Colston = new Colston({ env: "development" });
app.get('/?name&age', async (ctx: Context) => {
const query = ctx.request.query;
return ctx.status(200).json(query); // { name: "jane", age: 50 }
});
app.start(8000);
Colstonjs also provide the flexibility of method chaining, create one app instance and chain all methods on that single instance.
// server.ts
import Colston, { type Context } from "colstonjs";
const app: Colston = new Colston({ env: "development" });
app
.get("/one", (ctx: Context) => {
return ctx.status(200).text("One");
})
.post("/two", (ctx: Context) => {
return ctx.status(200).text("Two");
})
.patch("/three", (ctx: Context) => {
return ctx.status(200).text("Three");
});
app.start(8000);
note-app
Follow the steps below to run the demo note-taking api application in the examples directory:
- cd examples/note-app
- start the server on port 8000 by running bun app.js
- use an http client (e.g. Postman) to make requests to the listening http server.
Colstonjs supports both route-level middleware as well as app-level middleware.
This is a middleware which will be called on each request made to the server; one use case is logging.
// logger.ts
export function logger(ctx) {
const { pathname } = new URL(ctx.request.url);
console.info([new Date()], " - - " + ctx.request.method + " " + pathname + " HTTP 1.1" + " - ");
}
// server.ts
import Colston, { type Context } from "colstonjs";
import { logger } from "./logger";
const app: Colston = new Colston({ env: "development" });
// middleware
app.use(logger); // [2022-07-16T01:01:00.327Z] - - GET / HTTP 1.1 -
app.get("/", (ctx: Context) => {
return ctx.status(200).text("Hello logs...");
});
app.start(8000);
The .use() method accepts k middleware functions.
...
app.use(fn-1, fn-2, fn-3, ..., fn-k)
...
Colston on the other hand allows you to add a middleware function in-between the route path and the handler function.
// request-id.ts
import crypto from "crypto";

export function requestID(ctx) {
	ctx.request.id = crypto.randomBytes(18).toString('hex');
}
// server.ts
import Colston, { type Context } from "colstonjs";
import { requestID } from "./request-id";
const app: Colston = new Colston({ env: "development" });
app.get("/", requestID, (ctx: Context) => {
return ctx.status(200).text(`id: ${ctx.request.id}`); // id: 410796b6d64e3dcc1802f290dc2f32155c5b
});
app.start(8000);
It is also worth noting that we can have k route-level middleware functions:
// server.ts
...
app.get("/", middleware-1, middleware-2, middleware-3, ..., middleware-k, (ctx: Context) => {
return ctx.status(200).text(`id: ${ctx.request.id}`);
});
...
ctx.locals
is a plain javascript object that is specifically added to allow sharing of data amongst the chain of middlewares and/or handler functions.
// server.ts
...
let requestCount = 0;
app.post("/request-count", (ctx, next) => {
/**
* ctx.locals can be used to pass
* data from one middleware to another
*/
ctx.locals.requestCount = requestCount;
next();
}, (ctx, next) => {
++ctx.locals.requestCount;
next();
}, (ctx) => {
let count = ctx.locals.requestCount;
return ctx.status(200).text(count); // 1
});
The Router class provides a way to separate route-specific declarations from the app logic, adding an extra abstraction layer to your project.
// router.ts
import Router from "Router";
// instantiate the router class
const router1 = new Router();
const router2 = new Router();
// define user routes - can be in a separate file or module.
router1.post('/user', (ctx) => { return ctx.status(200).json({ user }) });
router1.get('/users', (ctx) => { return ctx.json({ users }) });
router1.delete('/user?id', (ctx) => { return ctx.status(204).head() });
// define the notes route - can also be in separate module.
router2.get('/note/:id', (ctx) => { return ctx.json({ note }) });
router2.get('/notes', (ctx) => { return ctx.json({ notes }) });
router2.post('/note', (ctx) => { return ctx.status(201).json({ note }) });
export { router1, router2 };
// server.ts
import Colston from "colstonjs";
import { router1, router2 } from "./router";
const app: Colston = new Colston();
app.all(router1, router2);
// other routes can still be defined here
app.get("/", (ctx) => {
return ctx.status(200).text("Welcome to colstonjs framework for bun");
});
app.start(8000)
The app.all() method takes k router instance objects, e.g. app.all(router-1, router-2, ..., router-k);. The examples folder contains a full note-taking backend app that utilizes this pattern.
We can cache simple data which will live throughout the application instance's lifecycle.
import Colston, { type Context } from "colstonjs";
const app: Colston = new Colston({ env: "development" });
// set properties to cache
app.set("age", 50);
app.set("name", "jane doe");
// check if a key exists in the cache
app.has("age"); // true
app.has("name"); // true
// retrieve the value stored in a given key
app.get("age"); // 50
app.get("name"); // jane doe
app.start(8000);
Errors are handled internally by colstonjs; however, this error handler method can also be customized.
// index.ts
import Colston, { type Context } from "colstonjs";
const app: Colston = new Colston({ env: "development" });
// a broken route
app.get("/error", (ctx) => {
throw new Error("This is a broken route");
});
// Custom error handler
app.error = async function (error) {
	console.error("This is an error...");
	// return a custom error response here
	const err = JSON.stringify(error);
	return Response.json({
		message: error.message || "An error occurred",
		error: err
	}, { status: 500 });
}
app.start(8000);
Colstonjs
Colstonjs on the bunjs runtime environment
import Colston from "colstonjs";
const app = new Colston({ env: "development" });
app.get("/", (ctx) => {
return ctx.text("OK");
});
app.start(8000)
$ ./k6 run index.js
/\ |‾‾| /‾‾/ /‾‾/
/\ / \ | |/ / / /
/ \/ \ | ( / ‾‾\
/ \ | |\ \ | (‾) |
/ __________ \ |__| \__\ \_____/ .io
execution: local
script: index.js
output: -
scenarios: (100.00%) 1 scenario, 100 max VUs, 40s max duration (incl. graceful stop):
* default: 100 looping VUs for 10s (gracefulStop: 30s)
running (10.0s), 000/100 VUs, 240267 complete and 0 interrupted iterations
default ✓ [======================================] 100 VUs 10s
✓ success
checks.........................: 100.00% ✓ 240267 ✗ 0
data_received..................: 16 MB 1.6 MB/s
data_sent......................: 19 MB 1.9 MB/s
http_req_blocked...............: avg=1.42µs min=0s med=1µs max=9.24ms p(90)=1µs p(95)=2µs
http_req_connecting............: avg=192ns min=0s med=0s max=2.18ms p(90)=0s p(95)=0s
http_req_duration..............: avg=4.1ms min=89µs med=3.71ms max=41.18ms p(90)=5.3ms p(95)=6.53ms
{ expected_response:true }...: avg=4.1ms min=89µs med=3.71ms max=41.18ms p(90)=5.3ms p(95)=6.53ms
http_req_failed................: 0.00% ✓ 0 ✗ 240267
http_req_receiving.............: avg=24.17µs min=7µs med=12µs max=15.01ms p(90)=18µs p(95)=21µs
http_req_sending...............: avg=6.33µs min=3µs med=4µs max=14.78ms p(90)=7µs p(95)=8µs
http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_waiting...............: avg=4.07ms min=75µs med=3.69ms max=41.16ms p(90)=5.27ms p(95)=6.48ms
http_reqs......................: 240267 24011.563111/s
iteration_duration.............: avg=4.15ms min=117.88µs med=3.74ms max=41.25ms p(90)=5.37ms p(95)=6.62ms
iterations.....................: 240267 24011.563111/s
vus............................: 100 min=100 max=100
vus_max........................: 100 min=100 max=100
Express
Expressjs on the nodejs runtime environment
const express = require("express");
const app = express();
app.get("/", (req, res) => {
res.send("OK");
});
app.listen(8000);
$ ~/k6 run index.js
/\ |‾‾| /‾‾/ /‾‾/
/\ / \ | |/ / / /
/ \/ \ | ( / ‾‾\
/ \ | |\ \ | (‾) |
/ __________ \ |__| \__\ \_____/ .io
execution: local
script: index.js
output: -
scenarios: (100.00%) 1 scenario, 100 max VUs, 40s max duration (incl. graceful stop):
* default: 100 looping VUs for 10s (gracefulStop: 30s)
running (10.0s), 000/100 VUs, 88314 complete and 0 interrupted iterations
default ✓ [======================================] 100 VUs 10s
✓ success
checks.........................: 100.00% ✓ 88314 ✗ 0
data_received..................: 20 MB 2.0 MB/s
data_sent......................: 7.1 MB 705 kB/s
http_req_blocked...............: avg=1.54µs min=0s med=1µs max=2.04ms p(90)=1µs p(95)=2µs
http_req_connecting............: avg=451ns min=0s med=0s max=1.99ms p(90)=0s p(95)=0s
http_req_duration..............: avg=11.28ms min=1.22ms med=10.04ms max=90.96ms p(90)=15.04ms p(95)=18.71ms
{ expected_response:true }...: avg=11.28ms min=1.22ms med=10.04ms max=90.96ms p(90)=15.04ms p(95)=18.71ms
http_req_failed................: 0.00% ✓ 0 ✗ 88314
http_req_receiving.............: avg=18.18µs min=10µs med=15µs max=10.16ms p(90)=22µs p(95)=25µs
http_req_sending...............: avg=6.53µs min=3µs med=5µs max=12.61ms p(90)=8µs p(95)=9µs
http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_waiting...............: avg=11.25ms min=1.2ms med=10.01ms max=90.93ms p(90)=15ms p(95)=18.68ms
http_reqs......................: 88314 8818.015135/s
iteration_duration.............: avg=11.32ms min=1.25ms med=10.08ms max=91.01ms p(90)=15.08ms p(95)=18.76ms
iterations.....................: 88314 8818.015135/s
vus............................: 100 min=100 max=100
vus_max........................: 100 min=100 max=100
From the above results we can see that Colstonjs on bun handles ~2.72x the number of requests per second compared with Expressjs on node; the benchmarking files can be found in this repository.
PRs for features, enhancements and bug fixes are welcomed. ✨ You can also look at the todo file for feature contributions. 🙏🏽
See the TODO doc here, feel free to also add to the list by editing the TODO file.
Although this version is fairly stable, it is still under active development (as is bunjs) and might contain some bugs; hence, it is not ideal for a production app.
Author: Ajimae
Source Code: https://github.com/ajimae/colstonjs
License: MIT license
1651641960
To use code from another library you need to add a dependency to your Gradle project. Discover how to do that properly, including configuring where Gradle pulls dependencies from and controlling how dependencies end up on the Java classpaths.
▶️Why we need dependencies 0:00
▶️Co-ordinates for locating dependencies 0:34
▶️Configuring Gradle repositories 1:20
▶️Defining your dependency 1:53
▶️The secret to inspecting the Java classpaths 3:22
▶️The 2 dependency notations 3:55
▶️Other popular dependency configurations 4:19
▶️IDE shortcut to easily add dependencies 4:49