Sitrep is a source code analyzer for Swift projects, giving you a high-level overview of your code.
Behind the scenes, Sitrep captures a lot more information that could be utilized – how many functions you have, how many comments (regular and documentation), how large your enums are, and more. These aren’t currently reported, but could be in a future release. It’s also written as both a library and an executable, so it can be integrated elsewhere as needed.
Sitrep is built using Apple’s SwiftSyntax, which means it parses Swift code accurately and efficiently.
Note: Please make sure that the SwiftSyntax version specified in Package.swift matches your current Swift tools version. For example, if you're using Swift tools 5.3 you need to change the spec from 0.50400.0 to 0.50300.0.
If you want to install the Sitrep command line tool, you have three options: Homebrew, Mint, or building it from the command line yourself.
Use this command for Homebrew:
brew install twostraws/brew/sitrep
Using Homebrew allows you to run sitrep directly from the command line.
For Mint, install and run Sitrep with these commands:
mint install twostraws/Sitrep@main
mint run sitrep@main
And finally, to build and install the command line tool yourself, clone the repository and run make install:
git clone https://github.com/twostraws/Sitrep
cd Sitrep
make install
As with the Homebrew option, building the command line tool yourself allows you to use the sitrep command directly from the command line.
Sitrep is implemented as a library that does all the hard work of scanning and reporting, plus a small front end that handles reading and writing on the command line. As an alternative to using Sitrep from the command line, you can also use its library SitrepCore from inside your own Swift code.
First, add Sitrep as a dependency in your Package.swift file:
let package = Package(
    // ...
    dependencies: [
        .package(url: "https://github.com/twostraws/Sitrep", .branch("master"))
    ],
    // ...
)
Then import SitrepCore wherever you'd like to use it.
When run on the command line without any flags, Sitrep will automatically scan your current directory and print its findings as text. To control this behavior, Sitrep supports several command line flags:
-c lets you specify a path to your .sitrep.yml configuration file, if you have one.
-f sets the output format. For example, -f json enables JSON output. The default behavior is text output, which is equivalent to -f text.
-i prints debug information, showing the settings Sitrep would use if a real scan were requested, then exits.
-p sets the path Sitrep should scan. This defaults to your current working directory.
-h prints command line help.

You can customize the behavior of Sitrep by creating a .sitrep.yml file in the directory you wish to scan. This is a YAML file that allows you to provide permanent options for scanning this path, although right now this is limited to one thing: an array of directory names to exclude from the scan.
For example, if you wanted to exclude the .build directory and your tests, you might create a .sitrep.yml file such as this one:
excluded:
- .build
- Tests
You can ask Sitrep to use a custom configuration file using the -c parameter, for example sitrep -c /path/to/.sitrep.yml -p /path/to/swift/project.
Alternatively, you can use the -i
parameter to have Sitrep tell you the configuration options it would use in a real analysis run. This will print the configuration information then exit.
Sitrep is written using Swift 5.3. You can either build and run the executable directly, or integrate the SitrepCore library into your own code.
To build Sitrep, clone this repository and open Terminal in the repository root directory. Then run:
swift build
swift run sitrep -p ~/path/to/your/project/root
If you would like to keep a copy of the sitrep executable around, find it in the .build/debug directory after running swift build.
To run Sitrep from the command line, just provide it with the name of a project directory to parse – it will locate all Swift files recursively from there. Alternatively, just using sitrep by itself will scan the current directory.
Any help you can offer with this project is most welcome, and trust me: there are opportunities big and small, so even someone with only a little Swift experience can help.
Some suggestions you might want to explore:
Please ensure you write tests to accompany any code you contribute, and that SwiftLint returns no errors or warnings.
Sitrep was designed and built by Paul Hudson, and is copyright © Paul Hudson 2021. Sitrep is licensed under the Apache License v2.0 with Runtime Library Exception; for the full license please see the LICENSE file.
Sitrep is built on top of Apple’s SwiftSyntax library for parsing code, which is also available under the Apache License v2.0 with Runtime Library Exception.
Swift, the Swift logo, and Xcode are trademarks of Apple Inc., registered in the U.S. and other countries.
If you find Sitrep useful, you might find my website full of Swift tutorials equally useful: Hacking with Swift.
Author: Twostraws
Source Code: https://github.com/twostraws/Sitrep
License: Apache-2.0 license
Tailor is a cross-platform static analysis and lint tool for source code written in Apple's Swift programming language. It analyzes your code to ensure consistent styling and help avoid bugs.
Tailor supports Swift 3.0.1 out of the box and helps enforce style guidelines outlined in The Swift Programming Language, GitHub, Ray Wenderlich, and Coursera style guides. It supports cross-platform usage and can be run on Mac OS X via your shell or integrated with Xcode, as well as on Linux and Windows.
Tailor parses Swift source code using the primary Java target of ANTLR:
ANTLR is a powerful parser generator [ . . . ] widely used in academia and industry to build all sorts of languages, tools, and frameworks.
— About the ANTLR Parser Generator
Getting Started
Requires Java (JRE or JDK) Version 8 or above: Java SE Downloads
Homebrew:

brew install tailor

Shell script (macOS/Linux):

curl -fsSL https://tailor.sh/install.sh | sh

PowerShell (Windows):

iex (new-object net.webclient).downloadstring('https://tailor.sh/install.ps1')
You may also download Tailor via GitHub Releases, extract the archive, and symlink the tailor/bin/tailor shell script to a location in your $PATH.
If your continuous integration server supports Homebrew installation, you may use the following snippet:
before_install:
- brew update
- brew install tailor
In other cases, use this snippet:
Replace ${TAILOR_RELEASE_ARCHIVE} with the URL of the release you would like to install, e.g. https://github.com/sleekbyte/tailor/releases/download/v0.1.0/tailor.tar.
before_script:
- wget ${TAILOR_RELEASE_ARCHIVE} -O /tmp/tailor.tar
- tar -xvf /tmp/tailor.tar
- export PATH=$PATH:$PWD/tailor/bin/
Run Tailor with a list of files and directories to analyze, or via Xcode.
$ tailor [options] [--] [[file|directory] ...]
Help for Tailor is accessible via the [-h|--help] option.
$ tailor -h
Usage: tailor [options] [--] [[file|directory] ...]
Perform static analysis on Swift source files.
Invoking Tailor with at least one file or directory will analyze all Swift files at those paths. If
no paths are provided, Tailor will analyze all Swift files found in '$SRCROOT' (if defined), which
is set by Xcode when run in a Build Phase. Tailor may be set up as an Xcode Build Phase
automatically with the --xcode option.
Options:
-c,--config=<path/to/.tailor.yml> specify configuration file
--debug print ANTLR error messages when parsing error occurs
--except=<rule1,rule2,...> run all rules except the specified ones
-f,--format=<xcode|json|cc|html> select an output format
-h,--help display help
--invert-color invert colorized console output
-l,--max-line-length=<0-999> maximum Line length (in characters)
--list-files display Swift source files to be analyzed
--max-class-length=<0-999> maximum Class length (in lines)
--max-closure-length=<0-999> maximum Closure length (in lines)
--max-file-length=<0-999> maximum File length (in lines)
--max-function-length=<0-999> maximum Function length (in lines)
--max-name-length=<0-999> maximum Identifier name length (in characters)
--max-severity=<error|warning (default)> maximum severity
--max-struct-length=<0-999> maximum Struct length (in lines)
--min-name-length=<1-999> minimum Identifier name length (in characters)
--no-color disable colorized console output
--only=<rule1,rule2,...> run only the specified rules
--purge=<1-999> reduce memory usage by clearing DFA cache after
specified number of files are parsed
--show-rules show description for each rule
-v,--version display version
--xcode=<path/to/project.xcodeproj> add Tailor Build Phase Run Script to Xcode Project
Features
Rule identifiers and "preferred/not preferred" code samples may be found on the Rules page.
Rules may be individually disabled (blacklist) or enabled (whitelist) via the --except and --only command-line flags.
tailor --except=brace-style,trailing-whitespace main.swift
tailor --only=redundant-parentheses,terminating-semicolon main.swift
Tailor can be integrated with Xcode projects using the --xcode option.
tailor --xcode /path/to/demo.xcodeproj/
This adds the following Build Phase Run Script to your project's default target.
Tailor's output will be displayed inline within the Xcode Editor Area and as a list in the Log Navigator.
1. Add a new configuration, say Analyze, to the project
2. Modify the active scheme's Analyze phase to use the new build configuration created above
3. Tweak the build phase run script to run Tailor only when analyzing the project (⇧⌘B)
if [ "${CONFIGURATION}" = "Analyze" ]; then
    if hash tailor 2>/dev/null; then
        tailor
    else
        echo "warning: Please install Tailor from https://tailor.sh"
    fi
fi
Tailor uses the following color schemes to format CLI output:

- Dark theme (enabled by default)
- Light theme (enabled via the --invert-color option)
- No color theme (enabled via the --no-color option)
--max-severity can be used to control the maximum severity of violation messages. It can be set to error or warning (by default, it is set to warning). Setting it to error allows you to distinguish between lower and higher priority messages. It also fails the build in Xcode if any errors are reported (similar to how a compiler error fails the build in Xcode). With max-severity set to warning, all violation messages are warnings and the Xcode build will never fail.

This setting also affects Tailor's exit code on the command line: a failing build will exit 1, whereas having warnings only will exit 0, allowing Tailor to be easily integrated into pre-commit hooks.
Violations on a specific line may be disabled with a trailing single-line comment.
import Foundation; // tailor:disable
Additionally, violations in a given block of code can be disabled by enclosing the block within tailor:off and tailor:on comments.
// tailor:off
import Foundation;
import UIKit;
import CoreData;
// tailor:on
class Demo() {
// Define public members here
}
Note: // tailor:off and // tailor:on comments must be paired.

The behavior of Tailor can be customized via the .tailor.yml configuration file. It lets you configure the options described in the following sections.

You can tell Tailor which configuration file to use by specifying its file path via the --config CLI option. By default, Tailor looks for the configuration file in the directory you run Tailor from.
The file follows the YAML 1.1 format.
Tailor checks all files found by a recursive search starting from the directories given as command line arguments. However, it only analyzes Swift files that end in .swift. If you would like Tailor to analyze specific files and directories, you will have to add entries for them under include. Files and directories can also be ignored through exclude.
Here is an example that might be used for an iOS project:
include:
- Source # Inspect all Swift files under "Source/"
exclude:
- '**Tests.swift' # Ignore Swift files that end in "Tests"
- Source/Carthage # Ignore Swift files under "Source/Carthage/"
- Source/Pods # Ignore Swift files under "Source/Pods/"
Note: included and excluded paths are interpreted relative to the directory tailor is run from, and passing files or directories as command-line arguments causes the include/exclude rules specified in .tailor.yml to be ignored.

Tailor allows you to individually disable (blacklist) or enable (whitelist) rules via the except and only labels.
Here is an example showcasing how to enable certain rules:
# Tailor will solely check for violations to the following rules
only:
- upper-camel-case
- trailing-closure
- forced-type-cast
- redundant-parentheses
Here is an example showcasing how to disable certain rules:
# Tailor will check for violations to all rules except for the following ones
except:
- parenthesis-whitespace
- lower-camel-case
Note: the --only/--except command-line flags cause the only/except rules specified in .tailor.yml to be ignored.

Tailor allows you to specify the output format (xcode/json) via the format label.
Here is an example showcasing how to specify the output format:
# The output format will now be in JSON
format: json
Note: the -f/--format command-line flag causes the format setting specified in .tailor.yml to be ignored.

Tailor allows you to specify the CLI output color scheme via the color label. To disable colored output, set color to disable. To invert the color scheme, set color to invert.
Here is an example showcasing how to specify the CLI output color scheme:
# The CLI output will not be colored
color: disable
Note: the --no-color/--invert-color command-line flags cause the color setting specified in .tailor.yml to be ignored.

Tailor's output format may be customized via the -f/--format option. The Xcode formatter is selected by default.
The default xcode formatter outputs violation messages according to the format expected by Xcode to be displayed inline within the Xcode Editor Area and as a list in the Log Navigator. This format is also as human-friendly as possible on the console.
$ tailor main.swift
********** /main.swift **********
/main.swift:1: warning: [multiple-imports] Imports should be on separate lines
/main.swift:1:18: warning: [terminating-semicolon] Statements should not terminate with a semicolon
/main.swift:3:05: warning: [constant-naming] Global Constant should be either lowerCamelCase or UpperCamelCase
/main.swift:5:07: warning: [redundant-parentheses] Conditional clause should not be enclosed within parentheses
/main.swift:7: warning: [terminating-newline] File should terminate with exactly one newline character ('\n')
Analyzed 1 file, skipped 0 files, and detected 5 violations (0 errors, 5 warnings).
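Output in this format is easy to post-process. As a sketch (not part of Tailor itself), the lines above could be parsed in Go with a regular expression; the pattern below assumes the `path:line[:column]: severity: [rule] message` shape shown in the sample output:

```go
package main

import (
	"fmt"
	"regexp"
)

// violationRe matches Tailor's xcode-style output, e.g.
// "/main.swift:1:18: warning: [terminating-semicolon] Statements should not ..."
// The column segment is optional, since some violations report only a line.
var violationRe = regexp.MustCompile(`^(.+?):(\d+)(?::(\d+))?: (error|warning): \[([a-z-]+)\] (.+)$`)

func main() {
	line := "/main.swift:1:18: warning: [terminating-semicolon] Statements should not terminate with a semicolon"
	if m := violationRe.FindStringSubmatch(line); m != nil {
		fmt.Println("file:", m[1])    // path of the offending file
		fmt.Println("line:", m[2])    // line number as text
		fmt.Println("rule:", m[5])    // rule identifier inside the brackets
		fmt.Println("message:", m[6]) // human-readable description
	}
}
```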
The json formatter outputs an array of violation messages for each file, and a summary object indicating the parsing results and the violation counts.
$ tailor -f json main.swift
{
  "files": [
    {
      "path": "/main.swift",
      "violations": [
        {
          "severity": "warning",
          "rule": "constant-naming",
          "location": {
            "line": 1,
            "column": 5
          },
          "message": "Global Constant should be either lowerCamelCase or UpperCamelCase"
        }
      ],
      "parsed": true
    }
  ],
  "summary": {
    "violations": 1,
    "warnings": 1,
    "analyzed": 1,
    "errors": 0,
    "skipped": 0
  }
}
The html formatter outputs a complete HTML document that should be written to a file.
tailor -f html main.swift > tailor.html
Developers
Please review the guidelines for contributing to this repository.
(The bundled Gradle wrapper ./gradlew may be used instead.)

External Tools and Libraries
Author: Sleekbyte
Source Code: https://github.com/sleekbyte/tailor
License: MIT license
Build Time Analyzer is a macOS app that shows you a breakdown of Swift build times. See this post and this post on Medium for context.
Open up the app and follow the instructions.
Download the code and open it in Xcode, archive the project and export the build. Easy, right?
If you encounter any issues or have ideas for improvement, I am open to code contributions.
Author: RobertGummesson
Source Code: https://github.com/RobertGummesson/BuildTimeAnalyzer-for-Xcode
License: MIT license
In today's post we will learn about 10 Popular Golang Libraries for Morphological Analyzers.
What are Morphological Analyzers?
A Morphological Analyzer is a program for analyzing the morphology of an input word; the analyzer reads the inflected surface form of each word in a text and provides its lexical form, while Generation is the inverse process. Both Analysis and Generation make use of a lexicon.
Table of contents:
This is a fairly straightforward port of Martin Porter's C implementation of the Porter stemming algorithm.
The original algorithm is described in the paper:
M.F. Porter, 1980, An algorithm for suffix stripping, Program, 14(3), pp. 130-137.
While the internal implementation and interface is nearly identical to the original implementation, the Go interface is much simplified. The stemmer can be called as follows:
import "porter"
...
stemmed := porter.Stem(word_to_stem)
go get github.com/a2800276/porter

To use the stemmer when installed using goinstall, import:

import "github.com/a2800276/porter"
While the implementation is fairly robust, this is a work in progress. In particular, a new interface will likely be provided to prevent excessive conversions between string and []byte values. Currently, on calling Stem, the string argument is converted to a byte slice which the algorithm works on, and is converted back into a string before returning.
Also, the implementation is not particularly robust at handling Unicode input; currently, only bytes with the high bit set are ignored. It's up to the caller to make sure the string contains only ASCII characters. Since the algorithm itself operates on English words only, this doesn't restrict the functionality, but it is a nuisance.
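To make the suffix-stripping idea concrete, here is a toy Go sketch in the spirit of, but much simpler than, the Porter algorithm; the rule list and its ordering are illustrative only and do not reproduce the real algorithm's phases or stem-measure checks:

```go
package main

import (
	"fmt"
	"strings"
)

// stripSuffix tries a list of suffix rules in order and rewrites the first
// one that matches. The real Porter algorithm applies several rule phases
// and inspects the "measure" of the remaining stem; none of that is modeled.
func stripSuffix(word string) string {
	rules := []struct{ suffix, replacement string }{
		{"sses", "ss"},
		{"ies", "i"},
		{"ing", ""},
		{"ed", ""},
		{"s", ""},
	}
	for _, r := range rules {
		if strings.HasSuffix(word, r.suffix) {
			return strings.TrimSuffix(word, r.suffix) + r.replacement
		}
	}
	return word // no rule matched: leave the word untouched
}

func main() {
	for _, w := range []string{"caresses", "ponies", "cats"} {
		fmt.Println(w, "->", stripSuffix(w))
	}
}
```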
Reader and utility functions for word2vec embeddings.
This is a package for reading word2vec vectors in Go and finding similar words and analogies.
This package can be installed with the go command:
go get gopkg.in/danieldk/go2vec.v1
To install the command-line utilities, use:
go get gopkg.in/danieldk/go2vec.v1/cmd/...
The package documentation is available at: https://godoc.org/gopkg.in/danieldk/go2vec.v1
Go bindings for the snowball libstemmer library including porter 2.
This simple library provides Go (golang) bindings for the snowball libstemmer library including the popular porter and porter2 algorithms.
You'll need the development package of libstemmer; usually this is simply a matter of:
sudo apt-get install libstemmer-dev
... or you might need to install it from source.
First, ensure you have your GOPATH env variable set to the root of your Go project:
export GOPATH=`pwd`
export PATH=$PATH:$GOPATH/bin
Then this cute statement should do the trick:
go get github.com/rjohnsondev/golibstemmer
Basic usage:
package main

import (
	"fmt"
	"os"

	"github.com/rjohnsondev/golibstemmer"
)

func main() {
	s, err := stemmer.NewStemmer("english")
	if err != nil {
		fmt.Println("Error creating stemmer: " + err.Error())
		os.Exit(1)
	}
	defer s.Close() // close after the error check: s is nil on failure
	word := s.StemWord("happy")
	fmt.Println(word)
}
To get a list of supported stemming algorithms:
list := stemmer.GetSupportedLanguages()
Sentiment analyzer using sentiwordnet lexicon in Go.
Sentiment analyzer using the SentiWordNet lexicon in Go. This library produces a sentiment score for each word, including positive, negative, and objective scores.
First of all, Go 1.14 or higher is required; download and install it.
Install this library using the go get command:
$ go get github.com/dinopuguh/gosentiwordnet/v2
package main

import (
	"fmt"

	goswn "github.com/dinopuguh/gosentiwordnet/v2"
)

func main() {
	sa := goswn.New()
	scores, exist := sa.GetSentimentScore("love", "v", "2")
	if exist {
		fmt.Println("💬 Sentiment score:", scores) // => 💬 Sentiment score: {1 0 0}
	}
}
GetSentimentScore requires 3 parameters (word, pos-tag, and word usage).
Go implementation of VADER Sentiment Analysis.
GoVader: Vader sentiment analysis in Go
This is a port of https://github.com/cjhutto/vaderSentiment from Python to Go.
There are tests which check it gives the same answers as the original package.
Usage:
import (
"fmt"
"github.com/jonreiter/govader"
)
analyzer := govader.NewSentimentIntensityAnalyzer()
sentiment := analyzer.PolarityScores("Usage is similar to all the other ports.")
fmt.Println("Compound score:", sentiment.Compound)
fmt.Println("Positive score:", sentiment.Positive)
fmt.Println("Neutral score:", sentiment.Neutral)
fmt.Println("Negative score:", sentiment.Negative)
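For context, the Compound score is produced by squashing the raw sum of per-word sentiment scores into the open interval (-1, 1). The sketch below uses the normalization from the original Python VADER implementation (alpha = 15); as a direct port, govader is expected to use the same arithmetic, but verify against the package source before relying on it:

```go
package main

import (
	"fmt"
	"math"
)

// normalize maps an unbounded sum of word sentiment scores into (-1, 1),
// the range of the Compound score. alpha = 15 is the constant used by the
// original vaderSentiment implementation.
func normalize(score float64) float64 {
	const alpha = 15.0
	return score / math.Sqrt(score*score+alpha)
}

func main() {
	// A mildly positive raw sum yields a moderate positive compound value.
	fmt.Printf("%.4f\n", normalize(2.4))
}
```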
A server wrapper is available at https://github.com/PIMPfiction/govader_backend.

Microservice implementation of GoVader.

Govader-Backend is a microservice that returns sentiment analysis of a given sentence.
go get github.com/PIMPfiction/govader_backend
package main

import (
	"fmt"

	vaderMicro "github.com/PIMPfiction/govader_backend"
	echo "github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()
	err := vaderMicro.Serve(e, "8080")
	if err != nil {
		panic(err)
	}
	fmt.Scanln()
}
Sample request body:

{"text": "I am looking good"}

Sample response:
{
"Negative": 0,
"Neutral": 0.5084745762711864,
"Positive": 0.4915254237288135,
"Compound": 0.44043357076016854
}
JP morphological analyzer written in pure Go.
Kagome is an open source Japanese morphological analyzer written in pure Go. The dictionary/statistical models such as MeCab-IPADIC, UniDic (unidic-mecab), and so on, can be embedded in binaries.
Programming example
package main

import (
	"fmt"
	"strings"

	"github.com/ikawaha/kagome-dict/ipa"
	"github.com/ikawaha/kagome/v2/tokenizer"
)

func main() {
	t, err := tokenizer.New(ipa.Dict(), tokenizer.OmitBosEos())
	if err != nil {
		panic(err)
	}
	// wakati
	fmt.Println("---wakati---")
	seg := t.Wakati("すもももももももものうち")
	fmt.Println(seg)
	// tokenize
	fmt.Println("---tokenize---")
	tokens := t.Tokenize("すもももももももものうち")
	for _, token := range tokens {
		features := strings.Join(token.Features(), ",")
		fmt.Printf("%s\t%v\n", token.Surface, features)
	}
}
output:
---wakati---
[すもも も もも も もも の うち]
---tokenize---
すもも 名詞,一般,*,*,*,*,すもも,スモモ,スモモ
も 助詞,係助詞,*,*,*,*,も,モ,モ
もも 名詞,一般,*,*,*,*,もも,モモ,モモ
も 助詞,係助詞,*,*,*,*,も,モ,モ
もも 名詞,一般,*,*,*,*,もも,モモ,モモ
の 助詞,連体化,*,*,*,*,の,ノ,ノ
うち 名詞,非自立,副詞可能,*,*,*,うち,ウチ,ウチ
Go

Go 1.16 or later.

go install github.com/ikawaha/kagome/v2@latest

Or use go get:

env GO111MODULE=on go get -u github.com/ikawaha/kagome/v2
Homebrew tap
brew install ikawaha/kagome/kagome
Cgo binding for libtextcat C library. Guaranteed compatibility with version 2.2.
Installation
Installation consists of several simple steps. They may be a bit different on your target system (e.g. require more permissions) so adapt them to the parameters of your system.
NOTE: If this link is not working or there are some problems with downloading, there is a stable version 2.2 snapshot saved in Downloads.
From the directory, where you unarchived libtextcat, run:
./configure
make
sudo make install
sudo ldconfig
go get github.com/goodsign/libtextcat
go test github.com/goodsign/libtextcat (must PASS)
Installation notes
Make sure that you have your local library paths set correctly and that installation was successful. Otherwise, go build or go test may fail.
libtextcat is installed in your local library directory (e.g. /usr/local/lib) and puts its libraries there. This path should be registered in your system (using ldconfig or exporting LD_LIBRARY_PATH, etc.) or the linker would fail.
Usage
cat, err := NewTextCat(ConfigPath) // See 'Usage notes' section
if nil != err {
// ... Handle error ...
}
defer cat.Close()
matches, err := cat.Classify(text)
if nil != err {
// ... Handle error ...
}
// Use matches.
// NOTE: matches[0] is the best match.
Extract values from strings and fill your structs with nlp.
nlp is a general purpose any-lang Natural Language Processor that parses the data inside a text and returns a filled model. Supported field types:
int int8 int16 int32 int64
uint uint8 uint16 uint32 uint64
float32 float64
string
time.Time
time.Duration
// go1.8+ is required
go get -u github.com/shixzie/nlp
Feel free to create PR's and open Issues :)
You will always begin by creating an NL type by calling nlp.New(). The NL type is a Natural Language Processor that owns 3 funcs: RegisterModel(), Learn(), and P().
RegisterModel takes 3 parameters, an empty struct, a set of samples and some options for the model.
The empty struct lets nlp know all possible values inside the text, for example:
type Song struct {
	Name string // fields must be exported
	Artist string
	ReleasedAt time.Time
}

err := nl.RegisterModel(Song{}, someSamples, nlp.WithTimeFormat("2006"))
if err != nil {
	panic(err)
}
// ...
tells nlp that inside the text may be a Song.Name, a Song.Artist and a Song.ReleasedAt.
The samples are the key part about nlp, not just because they set the limits between keywords but also because they will be used to choose which model to use to handle an expression.
Samples must have a special syntax to set those limits and keywords.
songSamples := []string{
	"play {Name} by {Artist}",
	"play {Name} from {Artist}",
	"play {Name}",
	"from {Artist} play {Name}",
	"play something from {ReleasedAt}",
}
In the example above, you can see we're referring to the Name and Artist fields of the Song type declared earlier. Both {Name} and {Artist} are our keywords and yes! you guessed it! Everything between play and by will be treated as a {Name}, and everything that's after by will be treated as an {Artist}, meaning that play and by are our limits.
    limits
 ┌─────┴─────┐
┌┴─┐        ┌┴┐
play {Name} by {Artist}
     └─┬──┘    └───┬──┘
       └─────┬─────┘
          keywords
Any character can be a limit; a comma, for example, can be used as a limit. Keywords as well as limits are case-sensitive, so be sure to type them right.
Note that putting 2 keywords together will cause only 1 or none of them to be detected.

"limits are important" - Me :3
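The limits idea can be sketched with plain string searching; this is only an illustration of how the literals around a keyword act as boundaries, not how nlp is actually implemented:

```go
package main

import (
	"fmt"
	"strings"
)

// extract pulls the text between two limit strings, loosely mimicking how a
// sample like "play {Name} by {Artist}" turns the literals surrounding a
// keyword into boundaries for the captured value.
func extract(text, left, right string) (string, bool) {
	start := strings.Index(text, left)
	if start < 0 {
		return "", false
	}
	start += len(left)
	end := strings.Index(text[start:], right)
	if end < 0 {
		return "", false
	}
	return text[start : start+end], true
}

func main() {
	expr := "play Nothing Else Matters by Metallica"
	name, _ := extract(expr, "play ", " by ")
	fmt.Println("Name:", name) // text bounded by the "play " and " by " limits
}
```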
Go Natural Language Processing library supporting LSA (Latent Semantic Analysis).
Implementations of selected machine learning algorithms for natural language processing in golang. The primary focus for the package is the statistical semantics of plain-text documents supporting semantic analysis and retrieval of semantically similar documents.
Built upon the Gonum package for linear algebra and scientific computing with some inspiration taken from Python's scikit-learn and Gensim.
Check out the companion blog post or the Go documentation page for full usage and examples.
Thank you for following this article.
checkmake
checkmake is an experimental tool for linting and checking Makefiles. It may not do what you want it to.
% checkmake Makefile
% checkmake --help
checkmake.
Usage:
checkmake [--debug|--config=<configPath>] <makefile>
checkmake -h | --help
checkmake --version
Options:
-h --help Show this screen.
--version Show version.
--debug Enable debug mode
--config=<configPath> Configuration file to read
--list-rules List registered rules
% checkmake fixtures/missing_phony.make

  RULE            DESCRIPTION                               LINE NUMBER
  minphony        Missing required phony target "all"       0
  minphony        Missing required phony target "test"      0
  phonydeclared   Target "all" should be declared PHONY.    18
Build the image, or pull it:
docker build --build-arg BUILDER_NAME='Your Name' --build-arg BUILDER_EMAIL=your.name@example.com . -t checker
Then run it with your Makefile attached, below is an example of it assuming the Makefile is in your current working directory:
docker run -v "$PWD"/Makefile:/Makefile checker
The pandoc document converter utility is required to run checkmake. You can find out if you have it via which pandoc.
. Install pandoc if the command was not found.
There are packages for linux up on packagecloud.io or build it yourself with the steps below.
To build checkmake you will need to have Go installed. Once you do, you can simply clone the repo and build the binary and man page yourself with the following commands.
git clone https://github.com/mrtazz/checkmake
cd checkmake
make
This is totally inspired by an idea by Dan Buch.
Author: mrtazz
Source Code: https://github.com/mrtazz/checkmake
License: MIT License
Manalyze
My work on Manalyze started when my antivirus tried to quarantine my malware sample collection for the thirtieth time. It is also born from my increasing frustration with AV products which make decisions without ever explaining why they deem a file malicious. Obviously, most people are better off having an antivirus decide what's best for them. But it seemed to me that expert users (i.e. malware analysts) could use a tool which would analyze a PE executable, provide as much data as possible, and leave the final call to them.
If you want to see some sample reports generated by the tool, feel free to try out the web service I created for it: manalyzer.org.
Manalyze was written in C++ for Windows and Linux and is released under the terms of the GPLv3 license. It is a robust parser for PE files with a flexible plugin architecture which allows users to statically analyze files in-depth. Manalyze...
WriteProcessMemory + CreateRemoteThread)

There are few things I hate more than checking out an open-source project and spending two hours trying to build it. This is why I did my best to make Manalyze as easy to build as possible. If these few lines don't work for you, then I have failed at my job and you should drop me a line so I can fix this.
$> [sudo or as root] apt-get install libboost-regex-dev libboost-program-options-dev libboost-system-dev libboost-filesystem-dev libssl-dev build-essential cmake git
$> [alternatively, also sudo or as root] pkg install boost-libs-1.55.0_8 libressl cmake git
$> git clone https://github.com/JusticeRage/Manalyze.git && cd Manalyze
$> cmake .
$> make -j5
$> cd bin && ./manalyze --version
Finally, if you want to access Manalyze from every directory on your machine, install it using make install from the root folder of the project.
cd boost_1_XX_0 && ./bootstrap.bat && ./b2.exe --build-type=complete --with-regex --with-program_options --with-system --with-filesystem

Create an environment variable BOOST_ROOT which contains the path to your boost_1_XX_0 folder.

git clone https://github.com/JusticeRage/Manalyze.git && cd Manalyze && cmake .

A manalyze.sln should have appeared in the Manalyze folder!

# Skip these two lines if you already have a sane build environment
user$ xcode-select --install
user$ sudo installer -pkg /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg -target /
user$ git clone https://github.com/JusticeRage/Manalyze.git && cd Manalyze
user$ brew install openssl boost
user$ cmake . -DOPENSSL_ROOT_DIR=/usr/local/opt/openssl/ && make -j5
user$ cd bin && ./manalyze --version
If you need to build Manalyze on a machine with no internet access, you have to manually check out the following projects:
Place the two folders in the external folder as external/yara and external/hash-library respectively. Then run cmake . -DGitHub=OFF and continue as you normally would.
A Docker image for Manalyze is provided by the community. Run docker pull evanowe/manalyze and get additional information here.
Since ClamAV signatures are voluminous and updated regularly, it didn't make a lot of sense to distribute them from GitHub or with the binary. When you try using the ClamAV plugin for the first time, you will likely encounter the following error message: [!] Error: Could not load yara_rules/clamav.yara. In order to generate the signatures, simply run the update_clamav_signatures.py Python script located in bin/yara_rules.
Run the script whenever you want to refresh the signatures.
$ ./manalyze.exe --help
Usage:
-h [ --help ] Displays this message.
-v [ --version ] Prints the program's version.
--pe arg The PE to analyze. Also accepted as a positional
argument. Multiple files may be specified.
-r [ --recursive ] Scan all files in a directory (subdirectories will be
ignored).
-o [ --output ] arg The output format. May be 'raw' (default) or 'json'.
-d [ --dump ] arg Dump PE information. Available choices are any
combination of: all, summary, dos (dos header), pe (pe
header), opt (pe optional header), sections, imports,
exports, resources, version, debug, tls, config, delay, rich
--hashes Calculate various hashes of the file (may slow down the
analysis!)
-x [ --extract ] arg Extract the PE resources to the target directory.
-p [ --plugins ] arg Analyze the binary with additional plugins. (may slow
down the analysis!)
Available plugins:
- clamav: Scans the binary with ClamAV virus definitions.
- compilers: Tries to determine which compiler generated the binary.
- peid: Returns the PEiD signature of the binary.
- strings: Looks for suspicious strings (anti-VM, process names...).
- findcrypt: Detects embedded cryptographic constants.
- packer: Tries to structurally detect packer presence.
- imports: Looks for suspicious imports.
- resources: Analyzes the program's resources.
- mitigation: Displays the enabled exploit mitigation techniques (DEP, ASLR, etc.).
- overlay: Analyzes data outside of the PE's boundaries.
- authenticode: Checks if the digital signature of the PE is valid.
- virustotal: Checks existing AV results on VirusTotal.
- all: Run all the available plugins.
Examples:
manalyze.exe program.exe
manalyze.exe -dresources -dexports -x out/ program.exe
manalyze.exe --dump=imports,sections --hashes program.exe
manalyze.exe -r malwares/ --plugins=peid,clamav --dump all
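When driving Manalyze from another program, the example invocations above can be assembled programmatically. A small Python sketch (the helper name and defaults are our own invention, not part of Manalyze) that builds an argument vector for subprocess:

```python
def manalyze_args(targets, plugins=None, dump=None, output="json", recursive=False):
    """Build an argv list for Manalyze using the options documented above."""
    args = ["manalyze", "--output", output]
    if recursive:
        args.append("--recursive")
    if plugins:
        args.append("--plugins=" + ",".join(plugins))
    if dump:
        args.append("--dump=" + ",".join(dump))
    args.extend(targets)  # PEs are also accepted as positional arguments
    return args

# e.g. subprocess.run(manalyze_args(["program.exe"], plugins=["peid", "clamav"]))
```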
Contact me or open a pull request if you would like to be added to this list!
Author: JusticeRage
Source Code: https://github.com/JusticeRage/Manalyze
License: GPL-3.0 License
1652387280
BinSkim Binary Analyzer
This repository contains the source code for BinSkim, a Portable Executable (PE) light-weight scanner that validates compiler/linker settings and other security-relevant binary characteristics.
Open src\BinSkim.sln to develop changes for contribution. Run BuildAndTest.cmd at the root of the enlistment to ensure that all tests pass, the release build succeeds, and NuGet packages are created. If you only want to run the BinSkim tool without installing anything, then you can
Argument (short form, long form) | Meaning |
---|---|
--sympath | Symbols path value (e.g. SRV http://msdl.microsoft.com/download/symbols or Cache d:\symbols;Srv http://symweb ) |
-o, --output | File path used to write and output analysis using SARIF |
-r, --recurse | Recurse into subdirectories when evaluating file specifier arguments |
-c, --config | (Default: ‘default’) Path to policy file to be used to configure analysis. Passing value of 'default' (or omitting the argument) invokes built-in settings |
-q, --quiet | Do not log results to the console |
-s, --statistics | Generate timing and other statistics for analysis session |
-h, --hashes | Output hashes of analysis targets when emitting SARIF reports |
-e, --environment | Log machine environment details of run to output file. WARNING: This option records potentially sensitive information (such as all environment variable values) to the log file. |
-p, --plugin | Path to plugin that will be invoked against all targets in the analysis set. |
--level | Filter output of scan results to one or more failure levels. Valid values: Error, Warning and Note. |
--kind | Filter output to one or more result kinds. Valid values: Fail (for literal scan results), Pass, Review, Open, NotApplicable and Informational. |
--trace | Execution traces, expressed as a semicolon-delimited list, that should be emitted to the console and log file (if appropriate). Valid values: PdbLoad. |
--help | Table of argument information. |
--version | BinSkim version details. |
value pos. 0 | One or more specifiers to a file, directory, or filter pattern that resolves to one or more binaries to analyze. |
Example: binskim.exe analyze c:\bld\*.dll --recurse --output MyRun.sarif
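The SARIF file written by --output is plain JSON, so scan results can also be filtered after the fact, similar to what --level and --kind do at scan time. A hedged Python sketch against a minimal SARIF 2.x shape (only the runs/results/level fields shown are assumed; the sample rule IDs are illustrative):

```python
import json

def results_by_level(sarif_text, level):
    """Return result objects whose 'level' matches, from a SARIF 2.x log."""
    log = json.loads(sarif_text)
    out = []
    for run in log.get("runs", []):
        for result in run.get("results", []):
            if result.get("level") == level:
                out.append(result)
    return out

# A minimal, made-up SARIF log for demonstration:
sample = json.dumps({"runs": [{"results": [
    {"ruleId": "BA2008", "level": "error"},
    {"ruleId": "BA2021", "level": "warning"},
]}]})
errors = results_by_level(sample, "error")
```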
Author: Microsoft
Source Code: https://github.com/Microsoft/binskim
License: View license
1652262060
Roslyn Analyzers
Roslyn is the compiler platform for .NET. It consists of the compiler itself and a powerful set of APIs to interact with the compiler. The Roslyn platform is hosted at github.com/dotnet/roslyn.
Roslyn analyzers analyze your code for style, quality and maintainability, design and other issues. The documentation for Roslyn Analyzers can be found at docs.microsoft.com/dotnet/fundamentals/code-analysis/overview.
Microsoft created a set of analyzers called Microsoft.CodeAnalysis.NetAnalyzers that contains the most important "FxCop" rules from static code analysis, converted to Roslyn analyzers, in addition to more analyzers. These analyzers check your code for security, performance, and design issues, among others. The documentation for .NET analyzers can be found here.
Recently the set of analyzer packages produced by this repository has been consolidated. The following table summarizes this information:
NuGet Package Name | Summary | |
---|---|---|
Microsoft.CodeAnalysis.NetAnalyzers | ✔️ Primary analyzer package for this repo. Included by default for .NET 5+. For earlier targets read more. | |
Microsoft.CodeAnalysis.BannedApiAnalyzers | ✔️ Allows banning use of arbitrary code. Read more. | |
Microsoft.CodeAnalysis.PublicApiAnalyzers | ✔️ Helps library authors monitor changes to their public APIs. Read more. | |
Microsoft.CodeAnalysis.Analyzers | ⚠️ Intended for projects providing analyzers and code fixes. Read more. | |
Roslyn.Diagnostics.Analyzers | ⚠️ Rules specific to the Roslyn project, not intended for general consumption. Read more. | |
Microsoft.CodeAnalysis.FxCopAnalyzers | ⛔ Use Microsoft.CodeAnalysis.NetAnalyzers instead. Read more. | |
Microsoft.CodeQuality.Analyzers | ⛔ Use Microsoft.CodeAnalysis.NetAnalyzers instead. Read more. | |
Microsoft.NetCore.Analyzers | ⛔ Use Microsoft.CodeAnalysis.NetAnalyzers instead. Read more. | |
Microsoft.NetFramework.Analyzers | ⛔ Use Microsoft.CodeAnalysis.NetAnalyzers instead. Read more. |
Latest pre-release version (.NET6 analyzers): here
Latest pre-release version (.NET7 analyzers): here
This is the primary analyzer package for this repo that contains all the .NET code analysis rules (CAxxxx) that are built into the .NET SDK starting with the .NET 5 release. The documentation for CA rules can be found at docs.microsoft.com/visualstudio/code-quality/code-analysis-for-managed-code-warnings.
You do not need to manually install this NuGet package to your project if you are using .NET5 SDK or later. These analyzers are enabled by default for projects targeting .NET5 or later. For projects targeting earlier .NET frameworks, you can enable them in your MSBuild project file by setting one of the following properties:
EnableNETAnalyzers
<PropertyGroup>
<EnableNETAnalyzers>true</EnableNETAnalyzers>
</PropertyGroup>
AnalysisLevel
<PropertyGroup>
<AnalysisLevel>latest</AnalysisLevel>
</PropertyGroup>
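Both properties sit in an ordinary MSBuild project file. A sketch of a hypothetical .csproj for a pre-.NET 5 target (the target framework shown is only an example; either property alone is sufficient to enable the analyzers):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <!-- Turn the built-in analyzers on for this pre-.NET 5 target -->
    <EnableNETAnalyzers>true</EnableNETAnalyzers>
    <!-- Opt in to the newest rule set shipped with the SDK -->
    <AnalysisLevel>latest</AnalysisLevel>
  </PropertyGroup>
</Project>
```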
NOTE: Starting with version 3.3.2, Microsoft.CodeAnalysis.FxCopAnalyzers has been deprecated in favor of Microsoft.CodeAnalysis.NetAnalyzers. Documentation to migrate from FxCopAnalyzers to NetAnalyzers is available here.
This is a migration analyzer package for existing binary FxCop users. It contains all the ported FxCop code analysis rules (CAxxxx). It's recommended to use Microsoft.CodeAnalysis.NetAnalyzers instead. The documentation for that can be found at docs.microsoft.com/visualstudio/code-quality/install-net-analyzers.
The documentation for all the ported and unported FxCop rules can be found at docs.microsoft.com/en-us/visualstudio/code-quality/fxcop-rule-port-status.
This analyzer package contains all the ported FxCop rules that are applicable for both .NetCore/.NetStandard and Desktop .NetFramework projects. You do not need to install any separate analyzer package from this repo to get target-framework specific FxCop rules.
NOTE: Starting with version 3.3.2, Microsoft.CodeQuality.Analyzers, Microsoft.NetCore.Analyzers and Microsoft.NetFramework.Analyzers have also been deprecated in favor of Microsoft.CodeAnalysis.NetAnalyzers. Documentation to migrate to NetAnalyzers is available here.
This package contains common code quality improvement rules that are not specific to usage of any particular API. For example, CA1801 (ReviewUnusedParameters) flags parameters that are unused and is part of this package.
This package contains rules for correct usage of APIs that are present in .NetCore/.NetStandard framework libraries. For example, CA1309 (UseOrdinalStringComparison) flags usages of string compare APIs that don't specify a StringComparison argument. See Getting started with NetCore Analyzers.
NOTE: This analyzer package is applicable for both .NetCore/.NetStandard and Desktop .NetFramework projects. If the API whose usage is being checked exists only in .NetCore/.NetStandard libraries, then the analyzer will bail out silently for Desktop .NetFramework projects. Otherwise, if the API exists in both .NetCore/.NetStandard and Desktop .NetFramework libraries, the analyzer will run correctly for both .NetCore/.NetStandard and Desktop .NetFramework projects.
This package contains rules for correct usage of APIs that are present only in Desktop .NetFramework libraries.
NOTE: The analyzers in this package will silently bail out if installed on a .NetCore/.NetStandard project that does not have the underlying API whose usage is being checked. If future versions of .NetCore/.NetStandard libraries include these APIs, the analyzers will automatically light up on .NetCore/.NetStandard projects that target these libraries.
Latest pre-release version: here
This package contains rules for correct usage of APIs from the Microsoft.CodeAnalysis NuGet package, i.e. .NET Compiler Platform ("Roslyn") APIs. These are primarily aimed towards helping authors of diagnostic analyzers and code fix providers to invoke the Microsoft.CodeAnalysis APIs in a recommended manner. More info about rules in this package
Latest pre-release version: here
This package contains rules that are very specific to the .NET Compiler Platform ("Roslyn") project, i.e. dotnet/roslyn repo. This analyzer package is not intended for general consumption outside the Roslyn repo. More info about rules in this package
Latest pre-release version: here
This package contains customizable rules for identifying references to banned APIs. More info about rules in this package
Latest pre-release version: here
This package contains rules to help library authors monitor changes to their public APIs. More info about rules in this package
For instructions on using this analyzer, see Instructions.
Created by summer 2015 interns Zoë Petard, Jessica Petty, and Daniel King
The MetaCompilation Analyzer is an analyzer that functions as a tutorial to teach users how to write an analyzer. It uses diagnostics and code fixes to guide the user through the various steps required to create a simple analyzer. It is designed for novice analyzer developers who have some previous programming experience.
For instructions on using this tutorial, see Instructions.
Install the .NET SDK version specified in .\global.json under "dotnet": from here.
Build: run build.cmd (in the command prompt) or .\build.cmd (in PowerShell).
Test: run test.cmd (in the command prompt) or .\test.cmd (in PowerShell).
Prior to submitting a pull request, ensure the build and all tests pass using the build and test steps above.
See GuidelinesForNewRules.md for contributing a new Code Analysis rule to the repo.
See VERSIONING.md for the versioning scheme for all analyzer packages built out of this repo.
Required Visual Studio Version: Visual Studio 2019 16.9 RTW or later
Required .NET SDK Version: .NET 5.0 SDK or later
The documentation for .NET SDK Analyzers can be found here
Author: dotnet
Source Code: https://github.com/dotnet/roslyn-analyzers
License: MIT License
1650600360
PHP Analyzer
Please report bugs or feature requests via our website support system (the "?" in the bottom right) or by emailing support@scrutinizer-ci.com.
PHP Analyzer uses stubs for built-in PHP classes and functions. These stubs look like regular PHP code and define the available parameters, their types, properties, methods etc. If you would like to contribute a fix or additional stubs, please fork and submit a patch to the legacy branch:
https://github.com/scrutinizer-ci/php-analyzer/tree/legacy/res
Author: scrutinizer-ci
Source Code: https://github.com/scrutinizer-ci/php-analyzer
1650340080
Wintellect.Analyzers
At Wintellect, we love anything that will help us write the best code possible. Microsoft's new Roslyn compiler is a huge step in that direction so we had to jump in and start writing analyzers and code fixes we've wanted for years. Feel free to fork and add your own favorites. We'll keep adding these as we think of them.
To add these analyzers to your project easily, use the NuGet package. In the Visual Studio Package Manager Console, execute the following:
Install-Package Wintellect.Analyzers
This warning ensures you have the AssemblyCompanyAttribute present and a filled out value in the parameter.
This warning ensures you have the AssemblyCopyrightAttribute present and a filled out value in the parameter.
This warning ensures you have the AssemblyDescriptionAttribute present and a filled out value in the parameter.
This warning ensures you have the AssemblyTitleAttribute present and a filled out value in the parameter.
This informational analyzer will report when you have a catch block that eats an exception. Because exception handling is so hard to get right, this notification is important to remind you to look at those catch blocks.
If you have a direct throw in your code, you need to document it with an <exception> tag in the XML documentation comments. A direct throw is one where you specifically use the throw statement in your code. This analyzer does not apply to private methods, only accessibility levels where calls outside the defining method can take place.
If you are using the SuppressMessage attribute to suppress Code Analysis items, you need to fill out the Justification property to explicitly state why you are suppressing the report instead of fixing the code.
If and else statements without braces are reasons for being fired. This analyzer and code fix will help you keep your job. :) The idea for this analyzer was shown by Kevin Pilch-Bisson in his awesome TechEd talk. We just finished it off.
This informational level check gives you a hint that you are calling a method using param arrays inside a loop. Because calls to these methods cause memory allocations you should know where these are happening.
The predefined types, such as int, should not be used. You want to be as explicit about types as possible to avoid confusion.
Calling the one parameter overload of Debug.Assert is a bad idea because they will not show you the expression you are asserting on. This analyzer will find those calls and the code fix will take the asserting expression and convert it into a string as the second parameter to the two parameter overload of Debug.Assert.
When creating new classes, they should be declared with the sealed modifier.
If you are returning a Task or Task<T> from a method, that method name must end in Async.
An analyzer and code fix for inserting DebuggerDisplayAttribute onto public classes. The debugger uses the DebuggerDisplayAttribute to display the class in the expression evaluator (watch/autos/locals windows, data tips) so you can see the important information quickly. The code fix will pull in the first two properties (or fields if one or no properties are present). If the class is derived from IEnumerable, it will default to the count of items.
Author: Wintellect
Source Code: https://github.com/Wintellect/Wintellect.Analyzers
License: View license
1639698360
This package consists of two separate utilities useful for:
This was updated to run under Python 3 from the original at https://github.com/hubspot/gc_log_visualizer
The python script regionsize.py will take a gc.log as input and return the percent of humongous objects that would fit into various G1 region sizes (2mb-32mb by powers of 2).
python regionsize.py <gc.log>
found 858 humongous objects in /tmp/gc.log
0.00% would not be humongous with a 2mb region size (-XX:G1HeapRegionSize)
1.28% would not be humongous with a 4mb region size
5.71% would not be humongous with a 8mb region size
18.07% would not be humongous with a 16mb region size
60.96% would not be humongous with a 32mb region size
39.04% would remain humongous with a 32mb region size
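The percentages above follow from G1's rule that an allocation is treated as humongous when it is at least half the region size. A small Python sketch of that computation (the object sizes here are made up for illustration; regionsize.py itself parses them out of the gc.log):

```python
def percent_no_longer_humongous(object_sizes_mb, region_size_mb):
    """Percent of allocations that fall below half the region size,
    i.e. would no longer be treated as humongous by G1."""
    fitting = sum(1 for s in object_sizes_mb if s < region_size_mb / 2)
    return 100.0 * fitting / len(object_sizes_mb)

sizes = [3, 5, 9, 20]             # hypothetical humongous allocations, in mb
for region in (2, 4, 8, 16, 32):  # -XX:G1HeapRegionSize candidates
    pct = percent_no_longer_humongous(sizes, region)
    print(f"{pct:.2f}% would not be humongous with a {region}mb region size")
```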
The script is for G1GC logs.
The following gc params are required for full functionality.
java 8:
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintAdaptiveSizePolicy
Java 9:
-Xlog:gc*
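With -XX:+PrintGCApplicationStoppedTime enabled, Java 8 logs lines of the form "Total time for which application threads were stopped: 0.0002341 seconds". A hedged Python sketch that totals those pauses (the regex assumes exactly that Java 8 wording; the sample lines are fabricated):

```python
import re

STOPPED = re.compile(
    r"Total time for which application threads were stopped: ([0-9.]+) seconds")

def total_stopped_seconds(log_lines):
    """Sum all safepoint pause durations reported in a gc.log."""
    return sum(float(m.group(1))
               for line in log_lines
               if (m := STOPPED.search(line)))

lines = [
    "2021-01-01T00:00:00.000+0000: Total time for which application threads were stopped: 0.0100000 seconds",
    "some unrelated gc line",
    "2021-01-01T00:00:01.000+0000: Total time for which application threads were stopped: 0.0200000 seconds",
]
```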
1629602646
Analyzer for Dart. This package provides a library that performs static analysis of Dart code. It is useful for tool integration and embedding.
End-users should use the dartanalyzer command-line tool to analyze their Dart code.
Integrators that want to add Dart support to their editor should use the Dart Analysis Server. The Analysis Server API Specification is available. If you are adding Dart support to an editor or IDE, please let us know by emailing our list.
Both dartanalyzer and Dart Analysis Server can be configured with an analysis_options.yaml file (using an .analysis_options file is deprecated). This YAML file can control which files and paths are analyzed, which lints are applied, and more.
If you are embedding the analyzer library in your project, you are responsible for finding the analysis options file, parsing it, and configuring the analyzer.
The analysis options file should live at the root of your project (for example, next to your pubspec.yaml). Different embedders of analyzer, such as dartanalyzer or Dart Analysis Server, may choose to find the file in various different ways. Consult their documentation to learn more.
Here is an example file that instructs the analyzer to ignore certain files:
analyzer:
exclude:
- test/_data/p4/lib/lib1.dart
- test/_data/p5/p5.dart
- test/_data/bad*.dart
- test/_brokendata/**
Note that you can use globs, as defined by the glob package.
Here is an example file that enables two lint rules:
linter:
rules:
- camel_case_types
- empty_constructor_bodies
Check out all the available Dart lint rules.
You can combine the analyzer section and the linter section into a single configuration. Here is an example:
analyzer:
exclude:
- test/_data/p4/lib/lib1.dart
linter:
rules:
- camel_case_types
For more information, see the docs for customizing static analysis.
Many tools embed this library, such as:
Post issues and feature requests at https://github.com/dart-lang/sdk/issues
Questions and discussions are welcome at the Dart Analyzer Discussion Group.
The APIs in this package were originally machine generated by a translator and were based on an earlier Java implementation. Several of the APIs still look like their Java predecessors rather than clean Dart APIs.
In addition, there is currently no clean distinction between public and internal APIs. We plan to address this issue but doing so will, unfortunately, require a large number of breaking changes. We will try to minimize the pain this causes for our clients, but some pain is inevitable.
Run this command:
With Dart:
$ dart pub add analyzer
With Flutter:
$ flutter pub add analyzer
This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):
dependencies:
analyzer: ^2.1.0
Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:analyzer/dart/analysis/analysis_context.dart';
import 'package:analyzer/dart/analysis/analysis_context_collection.dart';
import 'package:analyzer/dart/analysis/context_builder.dart';
import 'package:analyzer/dart/analysis/context_locator.dart';
import 'package:analyzer/dart/analysis/context_root.dart';
import 'package:analyzer/dart/analysis/declared_variables.dart';
import 'package:analyzer/dart/analysis/features.dart';
import 'package:analyzer/dart/analysis/results.dart';
import 'package:analyzer/dart/analysis/session.dart';
import 'package:analyzer/dart/analysis/uri_converter.dart';
import 'package:analyzer/dart/analysis/utilities.dart';
import 'package:analyzer/dart/ast/ast.dart';
import 'package:analyzer/dart/ast/ast_factory.dart';
import 'package:analyzer/dart/ast/precedence.dart';
import 'package:analyzer/dart/ast/standard_ast_factory.dart';
import 'package:analyzer/dart/ast/syntactic_entity.dart';
import 'package:analyzer/dart/ast/token.dart';
import 'package:analyzer/dart/ast/visitor.dart';
import 'package:analyzer/dart/constant/value.dart';
import 'package:analyzer/dart/element/element.dart';
import 'package:analyzer/dart/element/nullability_suffix.dart';
import 'package:analyzer/dart/element/scope.dart';
import 'package:analyzer/dart/element/type.dart';
import 'package:analyzer/dart/element/type_provider.dart';
import 'package:analyzer/dart/element/type_system.dart';
import 'package:analyzer/dart/element/type_visitor.dart';
import 'package:analyzer/dart/element/visitor.dart';
import 'package:analyzer/dart/sdk/build_sdk_summary.dart';
import 'package:analyzer/diagnostic/diagnostic.dart';
import 'package:analyzer/error/error.dart';
import 'package:analyzer/error/listener.dart';
import 'package:analyzer/exception/exception.dart';
import 'package:analyzer/file_system/file_system.dart';
import 'package:analyzer/file_system/memory_file_system.dart';
import 'package:analyzer/file_system/overlay_file_system.dart';
import 'package:analyzer/file_system/physical_file_system.dart';
import 'package:analyzer/instrumentation/file_instrumentation.dart';
import 'package:analyzer/instrumentation/instrumentation.dart';
import 'package:analyzer/instrumentation/log_adapter.dart';
import 'package:analyzer/instrumentation/logger.dart';
import 'package:analyzer/instrumentation/multicast_service.dart';
import 'package:analyzer/instrumentation/noop_service.dart';
import 'package:analyzer/instrumentation/plugin_data.dart';
import 'package:analyzer/instrumentation/service.dart';
import 'package:analyzer/source/error_processor.dart';
import 'package:analyzer/source/line_info.dart';
import 'package:analyzer/source/source_range.dart';
Download Details:
Author: dart-lang
Source Code: https://github.com/dart-lang/sdk
1610614800
Join me on my Live Streaming adventures - https://twitch.tv/maxflutter
Code Linting allows you to enforce styling and error rules onto your code and make them visible right away in your IDE. In this video, we want to talk about improving your codebase using Flutter Linting. The most important file is the analysis_options.yaml in Dart, and we take a look at how it is structured.
Useful Links:
ResoCoder - https://www.youtube.com/channel/UCSIv…
Lint package from Pascal Welsch - https://pub.dev/packages/lint
Linting Rules - https://dart-lang.github.io/linter/li…
#linting #flutter #codemaintenance
Timeline
00:00 Introduction
00:57 What is Linting
03:32 Include Code Linting into your project
07:11 Overview of all possible Linting Rules
07:45 Discussion about Linting or not Linting
09:47 Lint Rule I: Missing Required Parameters
11:05 Lint Rule II: prefer_const
11:47 Lint Rule III: exhaustive_cases
12:27 Dart packages for Linting rules
** New Mentorship Program to boost your Flutter career **
https://gumroad.com/l/ydgtfV
BOOKS I RECOMMEND
https://geni.us/flutterbook
https://geni.us/clean-code
** YOUTUBE OPTIMIZATION PLUG-INS I USE **
TUBEBUDDY: https://www.tubebuddy.com/flutterexpl…
VIDIQ: https://vidiq.com?afmc=7jl
ALL THE YOUTUBE EQUIPMENT I USE:
Our current YouTube gear
💻 MacBook Pro: https://geni.us/mac-book
📹 Lumix FZ1000: https://geni.us/fz-1000
🎙 Samson Mic: https://geni.us/samson-mic
🎉 ACCESSORIES:
Satechi USB-C Adapter: https://geni.us/P9R0
SD Card for 4k Videos: https://geni.us/PTAc
Disclaimer: Flutter Explained (Max & Mahtab) are a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to www.amazon.com.
** Social Medias **
Website: https://flutter-explained.dev
Say hi to Max
Twitter: https://twitter.com/flutter_exp
GitHub Max: https://github.com/md-weber
LinkedIn: https://www.linkedin.com/in/max-weber…
Twitch: https://www.twitch.tv/maxflutter
Say hi to Mahtab
Twitter Mahtab: https://twitter.com/mahtab_dev
GitHub Mahtab: https://github.com/mt-tadayon
https://www.youtube.com/watch?v=TBgWVqafJW4
#flutter #dart #linting #analyzer