Rupert Beatty

Sitrep: A Source Code analyzer for Swift Projects


Sitrep is a source code analyzer for Swift projects, giving you a high-level overview of your code:

  • A count of your classes, structs, enums, protocols, and extensions.
  • Total lines of code, and also source lines of code (minus comments and whitespace).
  • Which file and type are the longest, plus their source lines of code.
  • Which imports you’re using and how often.
  • How many UIViews, UIViewControllers, and SwiftUI views are in your project.

Behind the scenes, Sitrep captures a lot more information that could be utilized – how many functions you have, how many comments (regular and documentation), how large your enums are, and more. These aren’t currently reported, but could be in a future release. It’s also written as both a library and an executable, so it can be integrated elsewhere as needed.

Sitrep is built using Apple’s SwiftSyntax, which means it parses Swift code accurately and efficiently.

Note: Please make sure that the SwiftSyntax version specified in Package.swift matches your current Swift tools version. For example, if you're using Swift tools 5.3 you need to change the spec from 0.50400.0 to 0.50300.0.
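As an illustrative sketch of that note, pinning SwiftSyntax for Swift tools 5.3 might look like this in Package.swift (the repository URL shown is Apple's public SwiftSyntax repository; the exact-version pin is an assumption based on the note above):

```swift
// Package.swift (excerpt) – assumes Swift tools 5.3
dependencies: [
    .package(name: "SwiftSyntax",
             url: "https://github.com/apple/swift-syntax.git",
             .exact("0.50300.0"))
]
```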


If you want to install the Sitrep command line tool, you have three options: Homebrew, Mint, or building it from the command line yourself.

Use this command for Homebrew:

brew install twostraws/brew/sitrep

Using Homebrew allows you to run sitrep directly from the command line.

For Mint, install and run Sitrep with these commands:

mint install twostraws/Sitrep@main
mint run sitrep@main

And finally, to build and install the command line tool yourself, clone the repository and run make install:

git clone
cd Sitrep
make install

As with the Homebrew option, building the command line tool yourself allows you to use the sitrep command directly from the command line.

Using Sitrep as a library

Sitrep is implemented as a library that does all the hard work of scanning and reporting, plus a small front end that handles reading and writing on the command line. As an alternative to using Sitrep from the command line, you can also use its library SitrepCore from inside your own Swift code.

First, add Sitrep as a dependency in your Package.swift file:

let package = Package(
    dependencies: [
        .package(url: "", .branch("master"))
    ]
)

Then import SitrepCore wherever you’d like to use it.

Command line flags

When run on the command line without any flags, Sitrep will automatically scan your current directory and print its findings as text. To control this behavior, Sitrep supports several command line flags:

  • -c lets you specify a path to your .sitrep.yml configuration file, if you have one.
  • -f sets the output format. For example, -f json enables JSON output. The default behavior is text output, which is equivalent to -f text.
  • -i will print debug information, showing the settings Sitrep would use if a real scan were requested, then exit.
  • -p sets the path Sitrep should scan. This defaults to your current working directory.
  • -h prints command line help.


You can customize the behavior of Sitrep by creating a .sitrep.yml file in the directory you wish to scan. This is a YAML file that allows you to provide permanent options for scanning this path, although right now this is limited to one thing: an array of directory names to exclude from the scan.

For example, if you wanted to exclude the .build directory and your tests, you might create a .sitrep.yml file such as this one:

  - .build
  - Tests

You can ask Sitrep to use a custom configuration file using the -c parameter, for example sitrep -c /path/to/.sitrep.yml -p /path/to/swift/project.

Alternatively, you can use the -i parameter to have Sitrep tell you the configuration options it would use in a real analysis run. This will print the configuration information then exit.

Try it yourself

Sitrep is written using Swift 5.3. You can either build and run the executable directly, or integrate the SitrepCore library into your own code.

To build Sitrep, clone this repository and open Terminal in the repository root directory. Then run:

swift build
swift run sitrep -p ~/path/to/your/project/root

If you would like to keep a copy of the sitrep executable around, find it in the .build/debug directory after running swift build.

To run Sitrep from the command line just provide it with the name of a project directory to parse – it will locate all Swift files recursively from there. Alternatively, just using sitrep by itself will scan the current directory.

Contribution guide

Any help you can offer with this project is most welcome, and trust me: there are opportunities big and small, so even someone with only a little Swift experience can help.

Some suggestions you might want to explore:

  • Converting more of the tracked data (number of functions, parameters to functions, length of functions, etc) into reported data.
  • Reading more data from the parsed files, and using it to calculate things such as cyclomatic complexity.
  • Reading non-Swift data, such as number of storyboard scenes, number of outlets, number of assets in asset catalogs, etc.

Please ensure you write tests to accompany any code you contribute, and that SwiftLint returns no errors or warnings.


Sitrep was designed and built by Paul Hudson, and is copyright © Paul Hudson 2021. Sitrep is licensed under the Apache License v2.0 with Runtime Library Exception; for the full license please see the LICENSE file.

Sitrep is built on top of Apple’s SwiftSyntax library for parsing code, which is also available under the Apache License v2.0 with Runtime Library Exception.

Swift, the Swift logo, and Xcode are trademarks of Apple Inc., registered in the U.S. and other countries.

If you find Sitrep useful, you might find my website full of Swift tutorials equally useful: Hacking with Swift.

Download Details:

Author: Twostraws
Source Code: 
License: Apache-2.0 license

#swift #xcode #analyzer 

Rupert Beatty


Tailor: Cross-platform Static analyzer and Linter for Swift


Tailor is a cross-platform static analysis and lint tool for source code written in Apple's Swift programming language. It analyzes your code to ensure consistent styling and help avoid bugs.

Tailor supports Swift 3.0.1 out of the box and helps enforce style guidelines outlined in The Swift Programming Language, GitHub, Ray Wenderlich, and Coursera style guides. It supports cross-platform usage and can be run on Mac OS X via your shell or integrated with Xcode, as well as on Linux and Windows.

Tailor parses Swift source code using the primary Java target of ANTLR:

ANTLR is a powerful parser generator [ . . . ] widely used in academia and industry to build all sorts of languages, tools, and frameworks.

About the ANTLR Parser Generator

Getting Started


Requires Java (JRE or JDK) Version 8 or above: Java SE Downloads

Homebrew, Linuxbrew

brew install tailor

Mac OS X (10.10+), Linux

curl -fsSL | sh

Windows (10+)

iex (new-object net.webclient).downloadstring('')


You may also download Tailor via GitHub Releases, extract the archive, and symlink the tailor/bin/tailor shell script to a location in your $PATH.

Continuous Integration

If your continuous integration server supports Homebrew installation, you may use the following snippet:

  - brew update
  - brew install tailor

In other cases, use this snippet:

Replace ${TAILOR_RELEASE_ARCHIVE} with the URL of the release you would like to install, e.g.

  - wget ${TAILOR_RELEASE_ARCHIVE} -O /tmp/tailor.tar
  - tar -xvf /tmp/tailor.tar
  - export PATH=$PATH:$PWD/tailor/bin/


Run Tailor with a list of files and directories to analyze, or via Xcode.

$ tailor [options] [--] [[file|directory] ...]

Help for Tailor is accessible via the [-h|--help] option.

$ tailor -h
Usage: tailor [options] [--] [[file|directory] ...]

Perform static analysis on Swift source files.

Invoking Tailor with at least one file or directory will analyze all Swift files at those paths. If
no paths are provided, Tailor will analyze all Swift files found in '$SRCROOT' (if defined), which
is set by Xcode when run in a Build Phase. Tailor may be set up as an Xcode Build Phase
automatically with the --xcode option.

 -c,--config=<path/to/.tailor.yml>             specify configuration file
    --debug                                    print ANTLR error messages when parsing error occurs
    --except=<rule1,rule2,...>                 run all rules except the specified ones
 -f,--format=<xcode|json|cc|html>              select an output format
 -h,--help                                     display help
    --invert-color                             invert colorized console output
 -l,--max-line-length=<0-999>                  maximum Line length (in characters)
    --list-files                               display Swift source files to be analyzed
    --max-class-length=<0-999>                 maximum Class length (in lines)
    --max-closure-length=<0-999>               maximum Closure length (in lines)
    --max-file-length=<0-999>                  maximum File length (in lines)
    --max-function-length=<0-999>              maximum Function length (in lines)
    --max-name-length=<0-999>                  maximum Identifier name length (in characters)
    --max-severity=<error|warning (default)>   maximum severity
    --max-struct-length=<0-999>                maximum Struct length (in lines)
    --min-name-length=<1-999>                  minimum Identifier name length (in characters)
    --no-color                                 disable colorized console output
    --only=<rule1,rule2,...>                   run only the specified rules
    --purge=<1-999>                            reduce memory usage by clearing DFA cache after
                                               specified number of files are parsed
    --show-rules                               show description for each rule
 -v,--version                                  display version
    --xcode=<path/to/project.xcodeproj>        add Tailor Build Phase Run Script to Xcode Project


Enabling and Disabling Rules

Rule identifiers and "preferred/not preferred" code samples may be found on the Rules page.

Rules may be individually disabled (blacklist) or enabled (whitelist) via the --except and --only command-line flags.


tailor --except=brace-style,trailing-whitespace main.swift


tailor --only=redundant-parentheses,terminating-semicolon main.swift


Tailor may be used on Mac OS X via your shell or integrated with Xcode, as well as on Linux and Windows.


Tailor on Ubuntu


Tailor on Windows

Automatic Xcode Integration

Tailor can be integrated with Xcode projects using the --xcode option.

tailor --xcode /path/to/demo.xcodeproj/

This adds the following Build Phase Run Script to your project's default target.

Tailor's output will be displayed inline within the Xcode Editor Area and as a list in the Log Navigator.

Configure Xcode to Analyze Code Natively (⇧⌘B)

Add a new configuration, say Analyze, to the project

Modify the active scheme's Analyze phase to use the new build configuration created above

Tweak the build phase run script to run Tailor only when analyzing the project (⇧⌘B)

if [ "${CONFIGURATION}" = "Analyze" ]; then
    if hash tailor 2>/dev/null; then
        tailor
    else
        echo "warning: Please install Tailor from"
    fi
fi

Colorized Output

Tailor uses the following color schemes to format CLI output:

Dark theme (enabled by default)

Light theme (enabled via --invert-color option)

No color theme (enabled via --no-color option)

Warnings, Errors, and Failing the Build

--max-severity can be used to control the maximum severity of violation messages. It can be set to error or warning (by default, it is set to warning). Setting it to error allows you to distinguish between lower and higher priority messages. It also fails the build in Xcode, if any errors are reported (similar to how a compiler error fails the build in Xcode). With max-severity set to warning, all violation messages are warnings and the Xcode build will never fail.

This setting also affects Tailor's exit code on the command line: a failing build will exit 1, whereas having warnings only will exit 0, allowing Tailor to be easily integrated into pre-commit hooks.

Disable Violations within Source Code

Violations on a specific line may be disabled with a trailing single-line comment.

import Foundation; // tailor:disable

Additionally, violations in a given block of code can be disabled by enclosing the block within tailor:off and tailor:on comments.

// tailor:off
import Foundation;
import UIKit;
import CoreData;
// tailor:on

class Demo {
  // Define public members here
}


  • // tailor:off and // tailor:on comments must be paired


The behavior of Tailor can be customized via the .tailor.yml configuration file. It enables you to

  • include/exclude certain files and directories from analysis
  • enable and disable specific analysis rules
  • specify output format
  • specify CLI output color scheme

You can tell Tailor which configuration file to use by specifying its file path via the --config CLI option. By default, Tailor will look for the configuration file in the directory you run Tailor from.

The file follows the YAML 1.1 format.
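Putting the options described in the following sections together, a complete .tailor.yml might look like this (an illustrative sketch; the paths and rule names are examples, not defaults):

```yaml
# .tailor.yml – illustrative example
include:
    - Source
exclude:
    - Source/Pods
except:
    - lower-camel-case
format: json
color: disable
```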

Including/Excluding files

Tailor checks all files found by a recursive search starting from the directories given as command line arguments, but it only analyzes files ending in .swift. If you would like Tailor to analyze specific files and directories, add entries for them under include. Files and directories can also be ignored through exclude.

Here is an example that might be used for an iOS project:

include:
    - Source            # Inspect all Swift files under "Source/"
exclude:
    - '**Tests.swift'   # Ignore Swift files that end in "Tests"
    - Source/Carthage   # Ignore Swift files under "Source/Carthage/"
    - Source/Pods       # Ignore Swift files under "Source/Pods/"


  • Files and directories are specified relative to where tailor is run from
  • Paths to directories or Swift files provided explicitly via CLI will cause the include/exclude rules specified in .tailor.yml to be ignored
  • Exclude is given higher precedence than Include
  • Tailor recognizes the Java Glob syntax

Enabling/Disabling rules

Tailor allows you to individually disable (blacklist) or enable (whitelist) rules via the except and only labels.

Here is an example showcasing how to enable certain rules:

# Tailor will solely check for violations to the following rules
only:
    - upper-camel-case
    - trailing-closure
    - forced-type-cast
    - redundant-parentheses

Here is an example showcasing how to disable certain rules:

# Tailor will check for violations to all rules except for the following ones
except:
    - parenthesis-whitespace
    - lower-camel-case


  • only is given precedence over except
  • Rules that are explicitly included/excluded via CLI will cause the only/except rules specified in .tailor.yml to be ignored

Specifying output format

Tailor allows you to specify the output format (xcode/json) via the format label.

Here is an example showcasing how to specify the output format:

# The output format will now be in JSON
format: json


  • The output format explicitly specified via CLI will cause the output format defined in .tailor.yml to be ignored

Specifying CLI output color scheme

Tailor allows you to specify the CLI output color schemes via the color label. To disable colored output, set color to disable. To invert the color scheme, set color to invert.

Here is an example showcasing how to specify the CLI output color scheme:

# The CLI output will not be colored
color: disable


  • The CLI output color scheme explicitly specified via CLI will cause the output color scheme defined in .tailor.yml to be ignored


Tailor's output format may be customized via the -f/--format option. The Xcode formatter is selected by default.

Xcode Formatter (default)

The default xcode formatter outputs violation messages according to the format expected by Xcode to be displayed inline within the Xcode Editor Area and as a list in the Log Navigator. This format is also as human-friendly as possible on the console.

$ tailor main.swift

********** /main.swift **********
/main.swift:1:    warning: [multiple-imports] Imports should be on separate lines
/main.swift:1:18: warning: [terminating-semicolon] Statements should not terminate with a semicolon
/main.swift:3:05: warning: [constant-naming] Global Constant should be either lowerCamelCase or UpperCamelCase
/main.swift:5:07: warning: [redundant-parentheses] Conditional clause should not be enclosed within parentheses
/main.swift:7:    warning: [terminating-newline] File should terminate with exactly one newline character ('\n')

Analyzed 1 file, skipped 0 files, and detected 5 violations (0 errors, 5 warnings).

JSON Formatter

The json formatter outputs an array of violation messages for each file, and a summary object indicating the parsing results and the violation counts.

$ tailor -f json main.swift
{
  "files": [
    {
      "path": "/main.swift",
      "violations": [
        {
          "severity": "warning",
          "rule": "constant-naming",
          "location": {
            "line": 1,
            "column": 5
          },
          "message": "Global Constant should be either lowerCamelCase or UpperCamelCase"
        }
      ],
      "parsed": true
    }
  ],
  "summary": {
    "violations": 1,
    "warnings": 1,
    "analyzed": 1,
    "errors": 0,
    "skipped": 0
  }
}

HTML Formatter

The html formatter outputs a complete HTML document that should be written to a file.

tailor -f html main.swift > tailor.html

HTML format


Please review the guidelines for contributing to this repository.

Development Environment

External Tools and Libraries

Development & Runtime

ANTLR 4.5 (The BSD License)
Apache Commons CLI (Apache License, Version 2.0)
Jansi (Apache License, Version 2.0)
SnakeYAML (Apache License, Version 2.0)
Gson (Apache License, Version 2.0)
Mustache.java (Apache License, Version 2.0)

Development Only

Gradle (Apache License, Version 2.0)
Travis CI (Free for Open Source Projects)
JUnit (Eclipse Public License 1.0)
Java Hamcrest (The BSD 3-Clause License)
FindBugs (GNU Lesser General Public License)
Checkstyle (GNU Lesser General Public License)
JaCoCo (Eclipse Public License v1.0)
Coveralls (Free for Open Source)
Codacy (Free for Open Source)
System Rules (Common Public License 1.0)

Download Details:

Author: Sleekbyte
Source Code: 
License: MIT license

#swift #apple #linter #static #analyzer 

Rupert Beatty


BuildTimeanalyzer-for-Xcode: Build Time Analyzer for Swift

Build Time Analyzer for Xcode


Build Time Analyzer is a macOS app that shows you a breakdown of Swift build times. See this post and this post on Medium for context.


Open up the app and follow the instructions.



Download the code and open it in Xcode, archive the project and export the build. Easy, right?


If you encounter any issues or have ideas for improvement, I am open to code contributions.

Download Details:

Author: RobertGummesson
Source Code: 
License: MIT license

#swift #time #xcode #analyzer 

10 Popular Golang Libraries for Morphological Analyzers

In today's post we will learn about 10 Popular Golang Libraries for Morphological Analyzers. 

What is a Morphological Analyzer?

A morphological analyzer is a program for analyzing the morphology of an input word: the analyzer reads the inflected surface form of each word in a text and provides its lexical form, while generation is the inverse process. Both analysis and generation make use of a lexicon.

Table of contents:

  • Porter: This is a fairly straightforward port of Martin Porter's C implementation of the Porter stemming algorithm.
  • Go2vec - Reader and utility functions for word2vec embeddings.
  • Golibstemmer - Go bindings for the snowball libstemmer library including porter 2.
  • Gosentiwordnet - Sentiment analyzer using sentiwordnet lexicon in Go.
  • Govader - Go implementation of VADER Sentiment Analysis.
  • Govader-backend - Microservice implementation of GoVader.
  • Kagome - JP morphological analyzer written in pure Go.
  • Libtextcat - Cgo binding for libtextcat C library. Guaranteed compatibility with version 2.2.
  • NLP - Extract values from strings and fill your structs with nlp.
  • NLP - Go Natural Language Processing library supporting LSA (Latent Semantic Analysis).

1 - Porter: 

This is a fairly straightforward port of Martin Porter's C implementation of the Porter stemming algorithm.

The original algorithm is described in the paper:

M.F. Porter, 1980, An algorithm for suffix stripping, Program, 14(3).

While the internal implementation and interface is nearly identical to the original implementation, the Go interface is much simplified. The stemmer can be called as follows:

import "porter"
stemmed := porter.Stem(word_to_stem)


go get

to use the stemmer when installed using goinstall, import:

import ""


While the implementation is fairly robust, this is a work in progress. In particular, a new interface will likely be provided to prevent excessive conversions between strings and []byte. Currently, on calling Stem the string argument is converted to a byte slice which the algorithm works on and is converted back into a string before returning.

Also, the implementation is not particularly robust at handling Unicode input; currently, only bytes with the high bit set are ignored. It's up to the caller to make sure the string contains only ASCII characters. Since the algorithm itself operates on English words only, this doesn't restrict the functionality, but it is a nuisance.
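Since only bytes with the high bit set are ignored, a caller may want to strip non-ASCII bytes before stemming. The helper below is a hypothetical sketch, not part of the porter package:

```go
package main

import "fmt"

// asciiOnly drops any byte with the high bit set, mirroring the
// stemmer's behavior of ignoring such bytes entirely.
func asciiOnly(s string) string {
	out := make([]byte, 0, len(s))
	for i := 0; i < len(s); i++ {
		if s[i] < 0x80 { // keep plain ASCII bytes only
			out = append(out, s[i])
		}
	}
	return string(out)
}

func main() {
	// "é" is two high-bit bytes in UTF-8, so both are dropped.
	fmt.Println(asciiOnly("café"))
}
```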


  • byte slice API to avoid round-tripping to string and back

View on Github

2- Go2vec:

Reader and utility functions for word2vec embeddings.

This is a package for reading word2vec vectors in Go and finding similar words and analogies.


This package can be installed with the go command:

go get

To install the command-line utilities, use:

go get

The package documentation is available at:

View on Github

3 - Golibstemmer: 

Go bindings for the snowball libstemmer library including porter 2.

This simple library provides Go (golang) bindings for the snowball libstemmer library including the popular porter and porter2 algorithms.


You'll need the development package of libstemmer, usually this is simply a matter of:

sudo apt-get install libstemmer-dev

... or you might need to install it from source.


First, ensure you have your GOPATH env variable set to the root of your Go project:

export GOPATH=`pwd`
export PATH=$PATH:$GOPATH/bin

Then this cute statement should do the trick:

go get


Basic usage:

package main

import ""
import "fmt"
import "os"

func main() {
    s, err := stemmer.NewStemmer("english")
    if err != nil {
        fmt.Println("Error creating stemmer: " + err.Error())
        os.Exit(1)
    }
    defer s.Close()

    word := s.StemWord("happy")
    fmt.Println(word)
}

To get a list of supported stemming algorithms:

list := stemmer.GetSupportedLanguages()

View on Github

4 - Gosentiwordnet: 

Sentiment analyzer using sentiwordnet lexicon in Go.

Sentiment analyzer using the sentiwordnet lexicon in Go. This library produces a sentiment score for each word, including positive, negative, and objective scores.

⚙ Installation

First of all, Go 1.14 or higher is required; download and install it if needed.

Install this library using the go get command:

$ go get

⚡ Quickstart

package main

import (
    "fmt"

    goswn ""
)

func main() {
    sa := goswn.New()

    scores, exist := sa.GetSentimentScore("love", "v", "2")
    if exist {
        fmt.Println("💬 Sentiment score:", scores) // => 💬 Sentiment score: {1 0 0}
    }
}

The GetSentimentScore method requires 3 parameters (word, POS tag, and word usage):

  1. Word: the word you want to process
  2. POS tag: part-of-speech tag of the word
  3. Word usage: 1 for the most common usage; a higher number indicates a less common usage

View on Github

5 - Govader: 

Go implementation of VADER Sentiment Analysis.

GoVader: Vader sentiment analysis in Go

This is a port of the original Python package to Go.

There are tests which check it gives the same answers as the original package.


package main

import (
	"fmt"

	govader ""
)

func main() {
	analyzer := govader.NewSentimentIntensityAnalyzer()
	sentiment := analyzer.PolarityScores("Usage is similar to all the other ports.")

	fmt.Println("Compound score:", sentiment.Compound)
	fmt.Println("Positive score:", sentiment.Positive)
	fmt.Println("Neutral score:", sentiment.Neutral)
	fmt.Println("Negative score:", sentiment.Negative)
}

A server wrapper is available in

View on Github

6 - Govader-backend:

Microservice implementation of GoVader.

Govader-Backend is a microservice that returns sentiment analysis of a given sentence.


go get

package main

import (
	vaderMicro ""
	echo ""
)

func main() {
	e := echo.New()
	err := vaderMicro.Serve(e, "8080")
	if err != nil {
		panic(err)
	}
}

Sample Get Request:

GET: http://localhost:8080?text=I%20am%20looking%20good

Sample Post Request:

POST: http://localhost:8080/

RequestBody: {"text": "I am looking good"}

Sample Response

{
  "Negative": 0,
  "Neutral": 0.5084745762711864,
  "Positive": 0.4915254237288135,
  "Compound": 0.44043357076016854
}

View on Github

7 - Kagome: 

JP morphological analyzer written in pure Go.

Kagome is an open source Japanese morphological analyzer written in pure Go. The dictionary/statistical models, such as MeCab-IPADIC, UniDic (unidic-mecab) and so on, can be embedded in binaries.

Improvements from v1.

  • Dictionaries are maintained in a separate repository, and only the dictionaries you need are embedded in the binary.
  • Brushed up and added several APIs.

Programming example

package main

import (
	"fmt"
	"strings"

	"github.com/ikawaha/kagome-dict/ipa"
	"github.com/ikawaha/kagome/v2/tokenizer"
)

func main() {
	t, err := tokenizer.New(ipa.Dict(), tokenizer.OmitBosEos())
	if err != nil {
		panic(err)
	}
	// wakati
	seg := t.Wakati("すもももももももものうち")
	fmt.Println(seg)

	// tokenize
	tokens := t.Tokenize("すもももももももものうち")
	for _, token := range tokens {
		features := strings.Join(token.Features(), ",")
		fmt.Printf("%s\t%v\n", token.Surface, features)
	}
}

[すもも も もも も もも の うち]
すもも	名詞,一般,*,*,*,*,すもも,スモモ,スモモ
も	助詞,係助詞,*,*,*,*,も,モ,モ
もも	名詞,一般,*,*,*,*,もも,モモ,モモ
も	助詞,係助詞,*,*,*,*,も,モ,モ
もも	名詞,一般,*,*,*,*,もも,モモ,モモ
の	助詞,連体化,*,*,*,*,の,ノ,ノ
うち	名詞,非自立,副詞可能,*,*,*,うち,ウチ,ウチ



Go 1.16 or later.

go install

Or use go get.

env GO111MODULE=on go get -u

Homebrew tap

brew install ikawaha/kagome/kagome

View on Github

8 - Libtextcat: 

Cgo binding for libtextcat C library. Guaranteed compatibility with version 2.2.


Installation consists of several simple steps. They may be a bit different on your target system (e.g. require more permissions) so adapt them to the parameters of your system.

Get libtextcat C library code

NOTE: If this link is not working or there are some problems with downloading, there is a stable version 2.2 snapshot saved in Downloads.

Build and install libtextcat C library

From the directory, where you unarchived libtextcat, run:

sudo make install
sudo ldconfig 

Install Go wrapper

go get
go test (must PASS)

Installation notes

Make sure that you have your local library paths set correctly and that installation was successful. Otherwise, go build or go test may fail.

libtextcat installs into your local library directory (e.g. /usr/local/lib). This path should be registered with your system (using ldconfig, exporting LD_LIBRARY_PATH, etc.) or the linker will fail.


cat, err := NewTextCat(ConfigPath) // See 'Usage notes' section
if err != nil {
    // ... Handle error ...
}
defer cat.Close()

matches, err := cat.Classify(text)
if err != nil {
    // ... Handle error ...
}

// Use matches.
// NOTE: matches[0] is the best match.

View on Github

9 - NLP: 

Extract values from strings and fill your structs with nlp.

nlp is a general-purpose, any-language Natural Language Processor that parses the data inside a text and returns a filled model.

Supported types

int  int8  int16  int32  int64
uint uint8 uint16 uint32 uint64
float32 float64


// go1.8+ is required
go get -u

Feel free to create PR's and open Issues :)

How it works

You will always begin by creating an NL type by calling nlp.New(); the NL type is a Natural Language Processor that owns 3 funcs: RegisterModel(), Learn() and P().

RegisterModel(i interface{}, samples []string, ops ...ModelOption) error

RegisterModel takes 3 parameters, an empty struct, a set of samples and some options for the model.

The empty struct lets nlp know all possible values inside the text, for example:

type Song struct {
	Name        string // fields must be exported
	Artist      string
	ReleasedAt  time.Time
}

err := nl.RegisterModel(Song{}, someSamples, nlp.WithTimeFormat("2006"))
if err != nil {
	// ...
}

This tells nlp that the text may contain a Song.Name, a Song.Artist and a Song.ReleasedAt.

The samples are the key part of nlp, not just because they set the limits between keywords but also because they will be used to choose which model to use to handle an expression.

Samples must have a special syntax to set those limits and keywords.

songSamples := []string{
	"play {Name} by {Artist}",
	"play {Name} from {Artist}",
	"play {Name}",
	"from {Artist} play {Name}",
	"play something from {ReleasedAt}",
}
In the example above, you can see we're referring to the Name and Artist fields of the Song type declared earlier; both {Name} and {Artist} are our keywords and yes! you guessed it! Everything between play and by will be treated as a {Name}, and everything after by will be treated as an {Artist}, meaning that play and by are our limits.

┌┴─┐        ┌┴┐
play {Name} by  {Artist}
     └─┬──┘     └───┬──┘

Any character can be a limit; a comma (,) for example can be used as a limit.

Keywords as well as limits are case-sensitive, so be sure to type them right.

Note that putting 2 keywords together will cause only one or neither of them to be detected.

limits are important - Me :3
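To make the limits idea concrete, here is a tiny self-contained sketch (not nlp's actual implementation) that extracts {Name} and {Artist} from text matching the sample "play {Name} by {Artist}", using "play " and " by " as the limits:

```go
package main

import (
	"fmt"
	"strings"
)

// extractSong splits on the literal limits "play " and " by ",
// mimicking how the sample "play {Name} by {Artist}" marks keywords.
func extractSong(text string) (name, artist string, ok bool) {
	if !strings.HasPrefix(text, "play ") {
		return "", "", false // text doesn't match this sample's shape
	}
	rest := strings.TrimPrefix(text, "play ")
	// Everything before " by " is the {Name}, everything after is the {Artist}.
	name, artist, ok = strings.Cut(rest, " by ")
	return name, artist, ok
}

func main() {
	name, artist, _ := extractSong("play Hey Jude by The Beatles")
	fmt.Println(name, "/", artist)
}
```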

View on Github

10 - NLP: 

Go Natural Language Processing library supporting LSA (Latent Semantic Analysis).

Implementations of selected machine learning algorithms for natural language processing in golang. The primary focus for the package is the statistical semantics of plain-text documents supporting semantic analysis and retrieval of semantically similar documents.

Built upon the Gonum package for linear algebra and scientific computing with some inspiration taken from Python's scikit-learn and Gensim.

Check out the companion blog post or the Go documentation page for full usage and examples.



  • Expanded persistence support
  • Stemming to treat words with common root as the same e.g. "go" and "going"
  • Clustering algorithms e.g. Hierarchical, K-means, etc.
  • Classification algorithms e.g. SVM, KNN, random forest, etc.

View on Github

Thank you for following this article.

Related videos:

Golang Web Frameworks You MUST Learn

#go #golang #analyzer 

Annie Emard


CheckMake: Experimental Linter/Analyzer for Makefiles



checkmake is an experimental tool for linting and checking Makefiles. It may not do what you want it to.


% checkmake Makefile

% checkmake --help

checkmake [--debug|--config=<configPath>] <makefile>
checkmake -h | --help
checkmake --version

-h --help               Show this screen.
--version               Show version.
--debug                 Enable debug mode
--config=<configPath>   Configuration file to read
--list-rules            List registered rules

% checkmake fixtures/missing_phony.make

      RULE                 DESCRIPTION             LINE NUMBER

  minphony        Missing required phony target    0
  minphony        Missing required phony target    0
  phonydeclared   Target "all" should be           18
                  declared PHONY.
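The two rules shown above are simple enough to sketch. The following is a hypothetical Python simplification, not checkmake's Go implementation: it flags required targets (checkmake's default set is all, clean, test) that exist in the Makefile but are not declared .PHONY.

```python
import re

REQUIRED_PHONY = {"all", "clean", "test"}  # checkmake's default minphony set

def lint_makefile(text: str):
    """Report required targets that are missing their .PHONY declaration."""
    declared_phony = set()
    targets = set()
    for line in text.splitlines():
        if line.startswith(".PHONY:"):
            declared_phony.update(line[len(".PHONY:"):].split())
        else:
            m = re.match(r"^([A-Za-z][\w.-]*)\s*:", line)
            if m:
                targets.add(m.group(1))
    return [f'minphony: target "{t}" is not declared .PHONY'
            for t in sorted((REQUIRED_PHONY & targets) - declared_phony)]

makefile = """all: build
clean:
\trm -rf build
.PHONY: clean
"""
print(lint_makefile(makefile))   # "all" exists but is not declared .PHONY
```

The real tool goes further (e.g. reporting required targets that are missing entirely, and configurable rule sets), but the shape of the check is the same.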

Docker usage

Build the image, or pull it:

docker build --build-arg BUILDER_NAME='Your Name' --build-arg BUILDER_EMAIL='you@example.com' . -t checker

Then run it with your Makefile attached; the example below assumes the Makefile is in your current working directory:

docker run -v "$PWD"/Makefile:/Makefile checker



The pandoc document converter utility is required to build checkmake. You can find out if you have it via which pandoc. Install pandoc if the command was not found.


There are packages available for Linux, or you can build it yourself with the steps below.


To build checkmake you will need to have golang installed. Once you have Go installed, you can simply clone the repo and build the binary and man page yourself with the following commands.

git clone
cd checkmake


This is totally inspired by an idea by Dan Buch.

Author: mrtazz
Source Code:
License: MIT License

#go #linter #analyzer 

Annie Emard


Manalyze: A Static Analyzer for PE Executables



My work on Manalyze started when my antivirus tried to quarantine my malware sample collection for the thirtieth time. It is also born from my increasing frustration with AV products which make decisions without ever explaining why they deem a file malicious. Obviously, most people are better off having an antivirus decide what's best for them. But it seemed to me that expert users (i.e. malware analysts) could use a tool which would analyze a PE executable, provide as much data as possible, and leave the final call to them.

If you want to see some sample reports generated by the tool, feel free to try out the web service I created for it:

Table of Contents

A static analyzer for PE files

Manalyze was written in C++ for Windows and Linux and is released under the terms of the GPLv3 license. It is a robust parser for PE files with a flexible plugin architecture which allows users to statically analyze files in-depth. Manalyze...

  • Identifies a PE's compiler
  • Detects packed executables
  • Applies ClamAV signatures
  • Searches for suspicious strings
  • Looks for malicious import combinations (i.e. WriteProcessMemory + CreateRemoteThread)
  • Detects cryptographic constants (just like IDA's findcrypt plugin)
  • Can submit hashes to VirusTotal
  • Verifies authenticode signatures (on Windows only)
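The "malicious import combinations" heuristic in the list above is worth a closer look. Here is a toy Python sketch of the idea, not Manalyze's C++ plugin, and the combination list is illustrative: APIs that are individually benign become suspicious when imported together.

```python
# Pairs of API imports that are individually benign but suspicious together,
# in the spirit of Manalyze's imports plugin (this list is illustrative only).
SUSPICIOUS_COMBOS = [
    ({"WriteProcessMemory", "CreateRemoteThread"},
     "classic code-injection primitive"),
    ({"VirtualAllocEx", "WriteProcessMemory"},
     "remote memory allocation + write"),
]

def check_imports(imported_functions):
    """Return the reasons for which this import set looks suspicious."""
    imports = set(imported_functions)
    return [reason for combo, reason in SUSPICIOUS_COMBOS
            if combo <= imports]   # every API of the combo is imported

sample = ["CreateFileW", "WriteProcessMemory", "CreateRemoteThread", "ExitProcess"]
print(check_imports(sample))   # ['classic code-injection primitive']
```

In a real analyzer the import list would come from parsing the PE's import directory; the point here is only the combination check itself.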

How to build

There are few things I hate more than checking out an open-source project and spending two hours trying to build it. This is why I did my best to make Manalyze as easy to build as possible. If these few lines don't work for you, then I have failed at my job and you should drop me a line so I can fix this.

On Linux and BSD (tested on Debian Buster and FreeBSD 10.2)

$> [sudo or as root] apt-get install libboost-regex-dev libboost-program-options-dev libboost-system-dev libboost-filesystem-dev libssl-dev build-essential cmake git
$> [alternatively, also sudo or as root] pkg install boost-libs-1.55.0_8 libressl cmake git
$> git clone && cd Manalyze
$> cmake .
$> make -j5
$> cd bin && ./manalyze --version

Finally, if you want to access Manalyze from every directory on your machine, install it using $> make install from the root folder of the project.

On Windows

  • Get the Boost libraries from and install CMake.
  • Build the boost libraries
    • cd boost_1_XX_0 && ./bootstrap.bat && ./b2.exe --build-type=complete --with-regex --with-program_options --with-system --with-filesystem
    • Add an environment variable BOOST_ROOT which contains the path to your boost_1_XX_0 folder.
  • Download and install Git
  • git clone && cd Manalyze && cmake .
  • A Visual Studio project manalyze.sln should have appeared in the Manalyze folder!

On OS X (tested on Mojave)

# Skip these two lines if you already have a sane build environment
user$ xcode-select --install
user$ sudo installer -pkg /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg -target /

user$ git clone && cd Manalyze
user$ brew install openssl boost
user$ cmake . -DOPENSSL_ROOT_DIR=/usr/local/opt/openssl/ && make -j5
user$ cd bin && ./manalyze --version

Offline builds

If you need to build Manalyze on a machine with no internet access, you have to manually check out the following projects:

Place the two folders in the external folder as external/yara and external/hash-library respectively. Then run cmake . -DGitHub=OFF and continue as you normally would.


Docker image

A Docker image for Manalyze is provided by the community. Run docker pull evanowe/manalyze and get additional information here.

Generating ClamAV rules

Since ClamAV signatures are voluminous and updated regularly, it didn't make a lot of sense to distribute them from GitHub or with the binary. When you try using the ClamAV plugin for the first time, you will likely encounter the following error message: [!] Error: Could not load yara_rules/clamav.yara. In order to generate them, simply run the Python script located in bin/yara_rules.

Run the script whenever you want to refresh the signatures.


$ ./manalyze.exe --help
  -h [ --help ]         Displays this message.
  -v [ --version ]      Prints the program's version.
  --pe arg              The PE to analyze. Also accepted as a positional
                        argument. Multiple files may be specified.
  -r [ --recursive ]    Scan all files in a directory (subdirectories will be
  -o [ --output ] arg   The output format. May be 'raw' (default) or 'json'.
  -d [ --dump ] arg     Dump PE information. Available choices are any
                        combination of: all, summary, dos (dos header), pe (pe
                        header), opt (pe optional header), sections, imports,
                        exports, resources, version, debug, tls, config, delay, rich
  --hashes              Calculate various hashes of the file (may slow down the
                        analysis!)
  -x [ --extract ] arg  Extract the PE resources to the target directory.
  -p [ --plugins ] arg  Analyze the binary with additional plugins. (may slow
                        down the analysis!)

Available plugins:
  - clamav: Scans the binary with ClamAV virus definitions.
  - compilers: Tries to determine which compiler generated the binary.
  - peid: Returns the PEiD signature of the binary.
  - strings: Looks for suspicious strings (anti-VM, process names...).
  - findcrypt: Detects embedded cryptographic constants.
  - packer: Tries to structurally detect packer presence.
  - imports: Looks for suspicious imports.
  - resources: Analyzes the program's resources.
  - mitigation: Displays the enabled exploit mitigation techniques (DEP, ASLR, etc.).
  - overlay: Analyzes data outside of the PE's boundaries.
  - authenticode: Checks if the digital signature of the PE is valid.
  - virustotal: Checks existing AV results on VirusTotal.
  - all: Run all the available plugins.

  manalyze.exe program.exe
  manalyze.exe -dresources -dexports -x out/ program.exe
  manalyze.exe --dump=imports,sections --hashes program.exe
  manalyze.exe -r malwares/ --plugins=peid,clamav --dump all

People using Manalyze

Contact me or open a pull request if you would like to be added to this list!

Author: JusticeRage
Source Code:
License: GPL-3.0 License


Annie Emard


BinSkim: A Binary Static Analysis Tool

BinSkim Binary Analyzer

This repository contains the source code for BinSkim, a lightweight Portable Executable (PE) scanner that validates compiler/linker settings and other security-relevant binary characteristics.

For Developers

  1. Fork the repository -- Need Help?
  2. Load and compile src\BinSkim.sln to develop changes for contribution.
  3. Execute BuildAndTest.cmd at the root of the enlistment to validate before submitting a PR.

Submit Pull Requests

  1. Run BuildAndTest.cmd at the root of the enlistment to ensure that all tests pass, release build succeeds, and NuGet packages are created
  2. Submit a Pull Request to the 'develop' branch -- Need Help?

For Users

  1. Download BinSkim from NuGet
  2. Read the User Guide
  3. Find out more about the Static Analysis Results Interchange Format (SARIF) used to output Binskim results

How to extract the exe file from the NuGet package

If you only want to run the BinSkim tool without installing anything, then you can

  1. Download BinSkim from NuGet
  2. Rename the file extension from .nupkg to .zip
  3. Unzip
  4. Executable files are now available in the folder tools\netcoreapp3.1

Command-Line Quick Guide

Argument (short form, long form)    Meaning

--sympath             Symbols path value (e.g. SRV or Cache d:\symbols;Srv http://symweb)
-o, --output          File path used to write and output analysis using SARIF
-r, --recurse         Recurse into subdirectories when evaluating file specifier arguments
-c, --config          (Default: 'default') Path to policy file to be used to configure analysis. Passing value of 'default' (or omitting the argument) invokes built-in settings
-q, --quiet           Do not log results to the console
-s, --statistics      Generate timing and other statistics for analysis session
-h, --hashes          Output hashes of analysis targets when emitting SARIF reports
-e, --environment     Log machine environment details of run to output file. WARNING: This option records potentially sensitive information (such as all environment variable values) to the log file.
-p, --plugin          Path to plugin that will be invoked against all targets in the analysis set.
--level               Filter output of scan results to one or more failure levels. Valid values: Error, Warning and Note.
--kind                Filter output to one or more result kinds. Valid values: Fail (for literal scan results), Pass, Review, Open, NotApplicable and Informational.
--trace               Execution traces, expressed as a semicolon-delimited list, that should be emitted to the console and log file (if appropriate). Valid values: PdbLoad.
--help                Table of argument information.
--version             BinSkim version details.
value pos. 0          One or more specifiers to a file, directory, or filter pattern that resolves to one or more binaries to analyze.

Example: binskim.exe analyze c:\bld\*.dll --recurse --output MyRun.sarif

Author: Microsoft
Source Code:
License: View license

#binary #analyzer 

Annie Emard


Roslyn Analyzers: The Compiler Platform for .NET

Roslyn Analyzers

What is Roslyn?

Roslyn is the compiler platform for .NET. It consists of the compiler itself and a powerful set of APIs to interact with the compiler. The Roslyn platform is hosted at

What are Roslyn Analyzers?

Roslyn analyzers analyze your code for style, quality and maintainability, design and other issues. The documentation for Roslyn Analyzers can be found at

Microsoft created a set of analyzers called Microsoft.CodeAnalysis.NetAnalyzers that contains the most important "FxCop" rules from static code analysis, converted to Roslyn analyzers, in addition to more analyzers. These analyzers check your code for security, performance, and design issues, among others. The documentation for .NET analyzers can be found here.

Main analyzers

Recently the set of analyzer packages produced by this repository have been consolidated. The following table summarizes this information:

NuGet Package Name                          Summary
Microsoft.CodeAnalysis.NetAnalyzers ✔️ Primary analyzer package for this repo. Included by default for .NET 5+. For earlier targets read more.
Microsoft.CodeAnalysis.BannedApiAnalyzers ✔️ Allows banning use of arbitrary code. Read more.
Microsoft.CodeAnalysis.PublicApiAnalyzers ✔️ Helps library authors monitor changes to their public APIs. Read more.
Microsoft.CodeAnalysis.Analyzers ⚠️ Intended for projects providing analyzers and code fixes. Read more.
Roslyn.Diagnostics.Analyzers ⚠️ Rules specific to the Roslyn project, not intended for general consumption. Read more.
Microsoft.CodeAnalysis.FxCopAnalyzers ⛔ Use Microsoft.CodeAnalysis.NetAnalyzers instead. Read more.
Microsoft.CodeQuality.Analyzers ⛔ Use Microsoft.CodeAnalysis.NetAnalyzers instead. Read more.
Microsoft.NetCore.Analyzers ⛔ Use Microsoft.CodeAnalysis.NetAnalyzers instead. Read more.
Microsoft.NetFramework.Analyzers ⛔ Use Microsoft.CodeAnalysis.NetAnalyzers instead. Read more.


Latest pre-release version (.NET6 analyzers): here

Latest pre-release version (.NET7 analyzers): here

This is the primary analyzer package for this repo that contains all the .NET code analysis rules (CAxxxx) that are built into the .NET SDK starting with the .NET 5 release. The documentation for CA rules can be found at

You do not need to manually install this NuGet package to your project if you are using the .NET 5 SDK or later. These analyzers are enabled by default for projects targeting .NET 5 or later. For projects targeting earlier .NET frameworks, you can enable them in your MSBuild project file by setting the EnableNETAnalyzers property:

<PropertyGroup>
  <EnableNETAnalyzers>true</EnableNETAnalyzers>
</PropertyGroup>

NOTE: Starting with version 3.3.2, Microsoft.CodeAnalysis.FxCopAnalyzers has been deprecated in favor of Microsoft.CodeAnalysis.NetAnalyzers. Documentation to migrate from FxCopAnalyzers to NetAnalyzers is available here.

This is a migration analyzer package for existing binary FxCop users. It contains all the ported FxCop code analysis rules (CAxxxx). It's recommended to use Microsoft.CodeAnalysis.NetAnalyzers instead. The documentation for that can be found at

The documentation for all the ported and unported FxCop rules can be found at

This analyzer package contains all the ported FxCop rules that are applicable for both .NetCore/.NetStandard and Desktop .NetFramework projects. You do not need to install any separate analyzer package from this repo to get target-framework specific FxCop rules.

The following are subpackages or NuGet dependencies that are automatically installed when you install the Microsoft.CodeAnalysis.FxCopAnalyzers package:

NOTE: Starting with version 3.3.2, Microsoft.CodeQuality.Analyzers, Microsoft.NetCore.Analyzers and Microsoft.NetFramework.Analyzers have also been deprecated in favor of Microsoft.CodeAnalysis.NetAnalyzers. Documentation to migrate to NetAnalyzers is available here.


This package contains common code quality improvement rules that are not specific to usage of any particular API. For example, CA1801 (ReviewUnusedParameters) flags parameters that are unused and is part of this package.


This package contains rules for correct usage of APIs that are present in .NetCore/.NetStandard framework libraries. For example, CA1309 (UseOrdinalStringComparison) flags usages of string compare APIs that don't specify a StringComparison argument. Getting started with NetCore Analyzers

NOTE: This analyzer package is applicable for both .NetCore/.NetStandard and Desktop .NetFramework projects. If the API whose usage is being checked exists only in .NetCore/.NetStandard libraries, then the analyzer will bail out silently for Desktop .NetFramework projects. Otherwise, if the API exists in both .NetCore/.NetStandard and Desktop .NetFramework libraries, the analyzer will run correctly for both .NetCore/.NetStandard and Desktop .NetFramework projects.


This package contains rules for correct usage of APIs that are present only in Desktop .NetFramework libraries.

NOTE: The analyzers in this package will silently bail out if installed on a .NetCore/.NetStandard project that does not have the underlying API whose usage is being checked. If future versions of .NetCore/.NetStandard libraries include these APIs, the analyzers will automatically light up on .NetCore/.NetStandard projects that target these libraries.

Other Analyzer Packages


Latest pre-release version: here

This package contains rules for correct usage of APIs from the Microsoft.CodeAnalysis NuGet package, i.e. .NET Compiler Platform ("Roslyn") APIs. These are primarily aimed towards helping authors of diagnostic analyzers and code fix providers to invoke the Microsoft.CodeAnalysis APIs in a recommended manner. More info about rules in this package


Latest pre-release version: here

This package contains rules that are very specific to the .NET Compiler Platform ("Roslyn") project, i.e. dotnet/roslyn repo. This analyzer package is not intended for general consumption outside the Roslyn repo. More info about rules in this package


Latest pre-release version: here

This package contains customizable rules for identifying references to banned APIs. More info about rules in this package


Latest pre-release version: here

This package contains rules to help library authors monitor changes to their public APIs. More info about rules in this package

For instructions on using this analyzer, see Instructions.

MetaCompilation (prototype)

Created by summer 2015 interns Zoë Petard, Jessica Petty, and Daniel King

The MetaCompilation Analyzer is an analyzer that functions as a tutorial to teach users how to write an analyzer. It uses diagnostics and code fixes to guide the user through the various steps required to create a simple analyzer. It is designed for novice analyzer developers who have some previous programming experience.

For instructions on using this tutorial, see Instructions.

Getting Started

  1. Install Visual Studio 2019 or later, with at least the following workloads:
    1. .NET desktop development
    2. .NET Core cross-platform development
    3. Visual Studio extension development
  2. Clone this repository
  3. Install the .NET SDK version specified in .\global.json (the "dotnet" entry) from here.
  4. Open a command prompt and go to the directory of the Roslyn Analyzer Repo
  5. Run the restore and build command: build.cmd(in the command prompt) or .\build.cmd(in PowerShell).
  6. Execute tests: test.cmd (in the command prompt) or .\test.cmd (in PowerShell).

Submitting Pull Requests

Prior to submitting a pull request, ensure the build and all tests pass using steps 5 and 6 above.

Guidelines for contributing a new Code Analysis (CA) rule to the repo

See for contributing a new Code Analysis rule to the repo.

Versioning Scheme for Analyzer Packages

See for the versioning scheme for all analyzer packages built out of this repo.

Recommended version of Analyzer Packages

Required Visual Studio Version: Visual Studio 2019 16.9 RTW or later

Required .NET SDK Version: .NET 5.0 SDK or later

The documentation for .NET SDK Analyzers can be found here

Author: dotnet
Source Code:
License: MIT License

#analyzer #dotnet 

Veronica Roob


PHP Analyzer: Performs Advanced Static Analysis On PHP Code

PHP Analyzer

Please report bugs or feature requests via our website support system (in the bottom right) or by emailing

Contributing Stubs

PHP Analyzer uses stubs for built-in PHP classes and functions. These stubs look like regular PHP code and define the available parameters, their types, properties, methods etc. If you would like to contribute a fix or additional stubs, please fork and submit a patch to the legacy branch:

Author: scrutinizer-ci
Source Code:

#php #analyzer 

Annie Emard


.NET Compiler Platform ("Roslyn") Diagnostic Analyzers


At Wintellect, we love anything that will help us write the best code possible. Microsoft's new Roslyn compiler is a huge step in that direction so we had to jump in and start writing analyzers and code fixes we've wanted for years. Feel free to fork and add your own favorites. We'll keep adding these as we think of them.

To add these analyzers to your project easily, use the NuGet package. In the Visual Studio Package Manager Console, execute the following:

Install-Package Wintellect.Analyzers

Design Analyzers


This warning ensures you have the AssemblyCompanyAttribute present and a filled out value in the parameter.


This warning ensures you have the AssemblyCopyrightAttribute present and a filled out value in the parameter.


This warning ensures you have the AssemblyDescriptionAttribute present and a filled out value in the parameter.


This warning ensures you have the AssemblyTitleAttribute present and a filled out value in the parameter.


This informational analyzer will report when you have a catch block that eats an exception. Because exception handling is so hard to get right, this notification is important to remind you to look at those catch blocks.

Documentation Analyzers


If you have a direct throw in your code, you need to document it with an <exception> tag in the XML documentation comments. A direct throw is one where you specifically use the throw statement in your code. This analyzer does not apply to private methods, only accessibility levels where calls outside the defining method can take place.


If you are using the SuppressionMessage attribute to suppress Code Analysis items, you need to fill out the Justification property to explicitly state why you are suppressing the report instead of fixing the code.

Formatting Analyzers


If and else statements without braces are reasons for being fired. This analyzer and code fix will help you keep your job. :) The idea for this analyzer was shown by Kevin Pilch-Bisson in his awesome TechEd talk. We just finished it off.

Performance Analyzers


This informational level check gives you a hint that you are calling a method using param arrays inside a loop. Because calls to these methods cause memory allocations you should know where these are happening.

Usage Analyzers


The predefined types, such as int, should not be used. You want to be as explicit about types as possible to avoid confusion.


Calling the one parameter overload of Debug.Assert is a bad idea because they will not show you the expression you are asserting on. This analyzer will find those calls and the code fix will take the asserting expression and convert it into a string as the second parameter to the two parameter overload of Debug.Assert.


When creating new classes, they should be declared with the sealed modifier.


If you are returning a Task or Task<T> from a method, that method name must end in Async.


An analyzer and code fix for inserting DebuggerDisplayAttribute onto public classes. The debugger uses the DebuggerDisplayAttribute to display the class in the expression evaluator (watch/autos/locals windows, data tips) so you can see the important information quickly. The code fix will pull in the first two properties (or fields if one or no properties are present). If the class is derived from IEnumerable, it will default to the count of items.

Author: Wintellect
Source Code:
License: View license

#csharp #analyzer #dotnet 


Tools for Analyzing Java JVM Gc Log Files


This package consists of two separate utilities useful for analyzing Java JVM GC log files:


GC Log Visualizer

This was updated to run under Python 3 from the original at

Region Size

The Python script takes a gc.log as input and returns the percentage of humongous objects that would no longer be humongous at various G1RegionSize values (2mb to 32mb, by powers of 2).

python <gc.log>
found 858 humongous objects in /tmp/gc.log
0.00% would not be humongous with a 2mb region size (-XX:G1HeapRegionSize)
1.28% would not be humongous with a 4mb region size
5.71% would not be humongous with a 8mb region size
18.07% would not be humongous with a 16mb region size
60.96% would not be humongous with a 32mb region size
39.04% would remain humongous with a 32mb region size
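The arithmetic behind this report is simple. Below is a hedged Python sketch, not the actual script: the log line format and regex are illustrative (real G1 log lines vary by JVM version and flags), and it applies the G1 rule that an object is humongous when it is at least half a region in size.

```python
import re

# Illustrative Java 8 G1 log lines; real formats vary by JVM version/flags.
LOG = """\
 1.234: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: humongous allocation request failed, allocation request: 3145728 bytes]
 2.345: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: humongous allocation request failed, allocation request: 9437184 bytes]
 3.456: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: humongous allocation request failed, allocation request: 524288 bytes]
"""

def humongous_fit_report(log_text):
    """Percentage of humongous allocations that would no longer be
    humongous at each candidate G1 region size."""
    sizes = [int(m.group(1)) for m in
             re.finditer(r"allocation request: (\d+) bytes", log_text)]
    report = {}
    for region_mb in (2, 4, 8, 16, 32):
        # An object is humongous when it is >= half a G1 region.
        threshold = region_mb * 1024 * 1024 // 2
        fitting = sum(1 for s in sizes if s < threshold)
        report[region_mb] = 100.0 * fitting / len(sizes)
    return report

for mb, pct in humongous_fit_report(LOG).items():
    print(f"{pct:.2f}% would not be humongous with a {mb}mb region size")
```

As in the sample output above, the percentages are cumulative: a larger region size can only reclassify more objects as non-humongous.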

GC Log Preparation

The script is for G1GC logs.

The following GC parameters are required for full functionality.

Java 8:

-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintAdaptiveSizePolicy

Java 9:


#java #analyzer #python 


Creating a Custom Plugin for Dart Analyzer

Analyzer for Dart. This package provides a library that performs static analysis of Dart code. It is useful for tool integration and embedding.

End-users should use the dartanalyzer command-line tool to analyze their Dart code.

Integrators that want to add Dart support to their editor should use the Dart Analysis Server. The Analysis Server API Specification is available. If you are adding Dart support to an editor or IDE, please let us know by emailing our list.

Configuring the analyzer

Both dartanalyzer and Dart Analysis Server can be configured with an analysis_options.yaml file (using an .analysis_options file is deprecated). This YAML file can control which files and paths are analyzed, which lints are applied, and more.

If you are embedding the analyzer library in your project, you are responsible for finding the analysis options file, parsing it, and configuring the analyzer.

The analysis options file should live at the root of your project (for example, next to your pubspec.yaml). Different embedders of analyzer, such as dartanalyzer or Dart Analysis Server, may choose to find the file in various different ways. Consult their documentation to learn more.

Here is an example file that instructs the analyzer to ignore two files:

analyzer:
  exclude:
    - test/_data/p4/lib/lib1.dart
    - test/_data/p5/p5.dart
    - test/_data/bad*.dart
    - test/_brokendata/**

Note that you can use globs, as defined by the glob package.

Here is an example file that enables two lint rules:

linter:
  rules:
    - camel_case_types
    - empty_constructor_bodies

Check out all the available Dart lint rules.

You can combine the analyzer section and the linter section into a single configuration. Here is an example:

analyzer:
  exclude:
    - test/_data/p4/lib/lib1.dart
linter:
  rules:
    - camel_case_types

For more information, see the docs for customizing static analysis.

Who uses this library?

Many tools embed this library, such as:


Post issues and feature requests at

Questions and discussions are welcome at the Dart Analyzer Discussion Group.


The APIs in this package were originally machine generated by a translator and were based on an earlier Java implementation. Several of the APIs still look like their Java predecessors rather than clean Dart APIs.

In addition, there is currently no clean distinction between public and internal APIs. We plan to address this issue but doing so will, unfortunately, require a large number of breaking changes. We will try to minimize the pain this causes for our clients, but some pain is inevitable.

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add analyzer

With Flutter:

 $ flutter pub add analyzer

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  analyzer: ^2.1.0

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:analyzer/dart/analysis/analysis_context.dart';
import 'package:analyzer/dart/analysis/analysis_context_collection.dart';
import 'package:analyzer/dart/analysis/context_builder.dart';
import 'package:analyzer/dart/analysis/context_locator.dart';
import 'package:analyzer/dart/analysis/context_root.dart';
import 'package:analyzer/dart/analysis/declared_variables.dart';
import 'package:analyzer/dart/analysis/features.dart';
import 'package:analyzer/dart/analysis/results.dart';
import 'package:analyzer/dart/analysis/session.dart';
import 'package:analyzer/dart/analysis/uri_converter.dart';
import 'package:analyzer/dart/analysis/utilities.dart';
import 'package:analyzer/dart/ast/ast.dart';
import 'package:analyzer/dart/ast/ast_factory.dart';
import 'package:analyzer/dart/ast/precedence.dart';
import 'package:analyzer/dart/ast/standard_ast_factory.dart';
import 'package:analyzer/dart/ast/syntactic_entity.dart';
import 'package:analyzer/dart/ast/token.dart';
import 'package:analyzer/dart/ast/visitor.dart';
import 'package:analyzer/dart/constant/value.dart';
import 'package:analyzer/dart/element/element.dart';
import 'package:analyzer/dart/element/nullability_suffix.dart';
import 'package:analyzer/dart/element/scope.dart';
import 'package:analyzer/dart/element/type.dart';
import 'package:analyzer/dart/element/type_provider.dart';
import 'package:analyzer/dart/element/type_system.dart';
import 'package:analyzer/dart/element/type_visitor.dart';
import 'package:analyzer/dart/element/visitor.dart';
import 'package:analyzer/dart/sdk/build_sdk_summary.dart';
import 'package:analyzer/diagnostic/diagnostic.dart';
import 'package:analyzer/error/error.dart';
import 'package:analyzer/error/listener.dart';
import 'package:analyzer/exception/exception.dart';
import 'package:analyzer/file_system/file_system.dart';
import 'package:analyzer/file_system/memory_file_system.dart';
import 'package:analyzer/file_system/overlay_file_system.dart';
import 'package:analyzer/file_system/physical_file_system.dart';
import 'package:analyzer/instrumentation/file_instrumentation.dart';
import 'package:analyzer/instrumentation/instrumentation.dart';
import 'package:analyzer/instrumentation/log_adapter.dart';
import 'package:analyzer/instrumentation/logger.dart';
import 'package:analyzer/instrumentation/multicast_service.dart';
import 'package:analyzer/instrumentation/noop_service.dart';
import 'package:analyzer/instrumentation/plugin_data.dart';
import 'package:analyzer/instrumentation/service.dart';
import 'package:analyzer/source/error_processor.dart';
import 'package:analyzer/source/line_info.dart';
import 'package:analyzer/source/source_range.dart'; 

Download Details:

Author: dart-lang

Source Code:

#dart  #analyzer 

Max Weber


Flutter Code Linting - Improve your code and enforce Style and errors

Code Linting allows you to enforce styling and error rules onto your code and make them visible right away in your IDE. In this video, we want to talk about improving your codebase using Flutter Linting. The most important file is the analysis_options.yaml in Dart, and we take a look at how it is structured.

Useful Links:
ResoCoder -
Lint package from Pascal Welsch -
Linting Rules -

#linting #flutter #codemaintenance

00:00 Introduction
00:57 What is Linting
03:32 Include Code Linting into your project
07:11 Overview of all possible Linting Rules
07:45 Discussion about Linting or not Linting
09:47 Lint Rule I: Missing Required Parameters
11:05 Lint Rule II: prefer_const
11:47 Lint Rule III: exhaustive_cases
12:27 Dart packages for Linting rules


#flutter #dart #linting #analyzer
