In today's post we will learn about 10 favorite system tools libraries for Rust.
What are system tools?
System tools are utilities for working with the operating system itself: navigating the filesystem, monitoring processes and network usage, watching files for changes, handling out-of-memory situations, compressing data, and so on. The crates below are command-line tools of this kind, written in Rust.
Table of contents:
- zoxide: A fast alternative to cd that learns your habits
- procs: A modern replacement for ps
- bandwhich: Terminal bandwidth utilization tool
- bottom: Yet another cross-platform graphical process/system monitor
- fblog: Small command-line JSON log viewer
- bustd: Lightweight process killer daemon for out-of-memory scenarios on Linux
- rrun: A command launcher for Linux, similar to gmrun
- mcfly: Fly through your shell history
- crabz: Multi-threaded compression and decompression CLI tool
- funzzy: A configurable filesystem watcher inspired by entr

zoxide: A fast alternative to cd that learns your habits.
It remembers which directories you use most frequently, so you can "jump" to them in just a few keystrokes.
zoxide works on all major shells.
z foo # cd into highest ranked directory matching foo
z foo bar # cd into highest ranked directory matching foo and bar
z foo / # cd into a subdirectory starting with foo
z ~/foo # z also works like a regular cd command
z foo/ # cd into relative path
z .. # cd one level up
z - # cd into previous directory
zi foo # cd with interactive selection (using fzf)
z foo<SPACE><TAB> # show interactive completions (zoxide v0.8.0+, bash 4.4+/fish/zsh only)
Read more about the matching algorithm here.
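After installing, zoxide needs to be hooked into your shell before the z and zi commands exist. A minimal setup sketch for bash, guarded so it is a no-op where zoxide isn't installed (zsh and fish lines shown as comments; see the zoxide README for the canonical instructions):

```shell
# Add to the end of ~/.bashrc to define the z and zi commands
# (guarded here so the snippet is harmless where zoxide isn't installed)
if command -v zoxide >/dev/null 2>&1; then
    eval "$(zoxide init bash)"
fi

# zsh (~/.zshrc):                       eval "$(zoxide init zsh)"
# fish (~/.config/fish/config.fish):    zoxide init fish | source
```

After sourcing your shell config, jumping with `z foo` starts working once zoxide has seen you visit a few directories.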
zoxide runs on most major platforms. If your platform isn't listed below, please open an issue.
Linux
To install zoxide, run this command in your terminal:
curl -sS https://raw.githubusercontent.com/ajeetdsouza/zoxide/main/install.sh | bash
Or, you can use a package manager:
Distribution | Repository | Instructions |
---|---|---|
Any | crates.io | cargo install zoxide --locked |
Any | conda-forge | conda install -c conda-forge zoxide |
Any | Linuxbrew | brew install zoxide |
Alpine Linux 3.13+ | Alpine Linux Packages | apk add zoxide |
Arch Linux | Arch Linux Community | pacman -S zoxide |
CentOS 7+ | Copr | dnf copr enable atim/zoxide && dnf install zoxide |
Debian 11+ | Debian Packages | apt install zoxide |
Devuan 4.0+ | Devuan Packages | apt install zoxide |
Fedora 32+ | Fedora Packages | dnf install zoxide |
Gentoo | GURU Overlay | eselect repository enable guru && emerge --sync guru && emerge app-shells/zoxide |
Manjaro | | pacman -S zoxide |
NixOS | nixpkgs | nix-env -iA nixpkgs.zoxide |
openSUSE Tumbleweed | openSUSE Factory | zypper install zoxide |
Parrot OS | | apt install zoxide |
Raspbian 11+ | Raspbian Packages | apt install zoxide |
Ubuntu 21.04+ | Ubuntu Packages | apt install zoxide |
Void Linux | Void Linux Packages | xbps-install -S zoxide |
macOS
To install zoxide, use a package manager:
Repository | Instructions |
---|---|
crates.io | cargo install zoxide --locked |
conda-forge | conda install -c conda-forge zoxide |
Homebrew | brew install zoxide |
MacPorts | port install zoxide |
Or, run this command in your terminal:
curl -sS https://raw.githubusercontent.com/ajeetdsouza/zoxide/main/install.sh | bash
Windows
To install zoxide, run this command in your command prompt:
curl.exe -A "MS" https://webinstall.dev/zoxide | powershell
Or, you can use a package manager:
Repository | Instructions |
---|---|
crates.io | cargo install zoxide --locked |
Chocolatey | choco install zoxide |
conda-forge | conda install -c conda-forge zoxide |
Scoop | scoop install zoxide |
BSD
To install zoxide, use a package manager:
Distribution | Repository | Instructions |
---|---|---|
Any | crates.io | cargo install zoxide --locked |
DragonFly BSD | DPorts | pkg install zoxide |
FreeBSD | FreshPorts | pkg install zoxide |
NetBSD | pkgsrc | pkgin install zoxide |
Android
To install zoxide, use a package manager:
Repository | Instructions |
---|---|
Termux | pkg install zoxide |
procs: A modern replacement for 'ps', written in Rust. It shows information about processes in a colored, human-readable format, supports keyword search, and has a watch mode similar to top.
You can download a binary from the release page and extract it to a directory in your PATH.
You can install from Nixpkgs.
nix-env --install procs
You can install from snapcraft.
sudo snap install procs
You can install from homebrew.
brew install procs
You can install from MacPorts.
sudo port install procs
You can install from the Alpine Linux repository.
The correct repository (see above link for the most up-to-date information) should be enabled before apk add
.
sudo apk add procs
You can install from the Arch Linux community repository.
sudo pacman -S procs
You can install with scoop.
scoop install procs
You can install from the Fedora repository.
sudo dnf install procs
You can install with rpm.
sudo rpm -i https://github.com/dalance/procs/releases/download/v0.13.1/procs-0.13.1-1.x86_64.rpm
You can install with cargo.
cargo install procs
bandwhich: Terminal bandwidth utilization tool.
This is a CLI utility for displaying current network utilization by process, connection, and remote IP/hostname. bandwhich sniffs a given network interface and records IP packet sizes, cross-referencing them with the /proc filesystem on Linux, lsof on macOS, or the WinApi on Windows. It is responsive to the terminal window size, displaying less info if there is no room for it. It will also attempt to resolve IPs to their hostnames in the background using reverse DNS on a best-effort basis.
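A quick usage sketch; "wlan0" is an example interface name, and the -i and --raw flags are taken from bandwhich's documented options (treat them as assumptions and check bandwhich --help). The block is guarded so it is a no-op where bandwhich isn't installed:

```shell
# No-op where bandwhich isn't installed
if command -v bandwhich >/dev/null 2>&1; then
    sudo bandwhich              # monitor the default interface
    sudo bandwhich -i wlan0     # watch a specific interface ("wlan0" is an example)
    sudo bandwhich --raw        # machine-readable output, no full-screen UI
fi
```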
If you're on linux, you can download the generic binary from the releases.
pacman -S bandwhich
bandwhich
is available in nixpkgs
, and can be installed, for example, with nix-env
:
nix-env -iA nixpkgs.bandwhich
xbps-install -S bandwhich
bandwhich
is available in COPR, and can be installed via DNF:
sudo dnf copr enable atim/bandwhich -y && sudo dnf install bandwhich
brew install bandwhich
pkg install bandwhich
or
cd /usr/ports/net-mgmt/bandwhich && make install clean
bandwhich
can be installed using the Rust package manager, cargo. It might be in your distro repositories if you're on linux, or you can install it via rustup. You can find additional installation instructions here.
The minimum supported Rust version is 1.39.0.
cargo install bandwhich
On Linux, after installing with cargo:
Cargo installs bandwhich to ~/.cargo/bin/bandwhich, but you need root privileges to run it. To fix that, there are a few options:
- Give the binary the needed capabilities: sudo setcap cap_sys_ptrace,cap_dac_read_search,cap_net_raw,cap_net_admin+ep $(which bandwhich)
- Run sudo ~/.cargo/bin/bandwhich instead of just bandwhich
- Symlink it onto root's PATH: sudo ln -s ~/.cargo/bin/bandwhich /usr/local/bin/ (or another path on root's PATH)
- Preserve your PATH under sudo: sudo env "PATH=$PATH" bandwhich or sudo -E bandwhich
- Install it to a system path: sudo cargo install bandwhich --root /usr/local/bin/
On Windows, after installing with cargo:
You might need to first install npcap for capturing packets on Windows.
To install bandwhich on OpenWRT, you'll need to compile a binary that fits its processor architecture. This might mean you have to cross-compile if, for example, you're working on an x86_64 machine and OpenWRT is installed on an armv7 device. Here is an example of cross-compiling in this situation:
- Check the router's architecture with uname -m
- Clone the repository: git clone https://github.com/imsnif/bandwhich
- Install cross using cargo install cross
- Build the bandwhich package using cross build --target armv7-unknown-linux-musleabihf
- Copy target/armv7-unknown-linux-musleabihf/debug/bandwhich to the router using scp by running scp bandwhich root@192.168.1.1:~/ (here, 192.168.1.1 would be the IP address of your router)
- Run it on the router with ./bandwhich
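The cross-compilation steps above can be collected into a small script. RUN_CROSS_BUILD is a guard variable added here so the sketch doesn't start a long build unless explicitly requested; the router IP and target triple are the example values from the text:

```shell
# Gated behind an env var so it doesn't kick off a full build by accident
if [ -n "$RUN_CROSS_BUILD" ]; then
    git clone https://github.com/imsnif/bandwhich
    cd bandwhich || exit 1

    cargo install cross
    cross build --target armv7-unknown-linux-musleabihf

    # 192.168.1.1 is the example router IP from the text
    scp target/armv7-unknown-linux-musleabihf/debug/bandwhich root@192.168.1.1:~/
fi
```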
bottom: Yet another cross-platform graphical process/system monitor.
As (yet another) process/system visualization and management application, bottom supports the typical features: graphical widgets for CPU, memory, and network usage; widgets for information such as disks and temperatures; and a process widget for displaying, sorting, and searching info about processes. It offers cross-platform support for Linux, macOS, and Windows, with more planned in the future, plus customizable behaviour that can be controlled with command-line flags or a config file, and more.
You can find more details in the documentation.
Installation via cargo is done by installing the bottom
crate:
# If required, update Rust on the stable channel
rustup update stable
cargo install bottom --locked
# Alternatively, --locked may be omitted if you do not wish to use locked dependencies:
cargo install bottom
There is an official package that can be installed with pacman
:
sudo pacman -Syu bottom
A .deb file is provided on each stable release and on nightly builds for x86, aarch64, and armv7 (note that stable ARM builds only started with 0.6.8). You can install it like so:
curl -LO https://github.com/ClementTsang/bottom/releases/download/0.6.8/bottom_0.6.8_amd64.deb
sudo dpkg -i bottom_0.6.8_amd64.deb
bottom is available as a snap:
sudo snap install bottom
# To allow the program to run as intended
sudo snap connect bottom:mount-observe
sudo snap connect bottom:hardware-observe
sudo snap connect bottom:system-observe
sudo snap connect bottom:process-control
Available in COPR:
sudo dnf copr enable atim/bottom -y
sudo dnf install bottom
Available in GURU and dm9pZCAq overlays:
sudo eselect repository enable guru
sudo emerge --sync guru
echo "sys-process/bottom" | sudo tee /etc/portage/package.accept_keywords/10-guru
sudo emerge sys-process/bottom::guru
or
sudo eselect repository enable dm9pZCAq
sudo emerge --sync dm9pZCAq
sudo emerge sys-process/bottom::dm9pZCAq
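Once installed, the binary is named btm rather than bottom. A brief usage sketch; the --basic and --rate flags are taken from bottom's help output, so verify them against your version (the full-screen invocations are left commented out since they start an interactive UI):

```shell
# No-op where bottom isn't installed
if command -v btm >/dev/null 2>&1; then
    btm --version       # the crate is "bottom"; the binary is "btm"
    # btm --basic       # a more minimal, text-based layout
    # btm --rate 250    # refresh every 250 ms instead of the default 1000 ms
fi
```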
fblog: Small command-line JSON log viewer.
fblog -a message -a "status > a" sample_nested.json.log
To filter log messages, you can use Lua. If you are unsure which variables are available, you can use --print-lua
to see the code generated by fblog.
fblog -f 'level ~= "info"' # prints all messages where the level is not info
fblog -f 'process == "play"' # prints all messages where the process is play
fblog -f 'string.find(fu, "bow.*") ~= nil' # prints all messages where fu starts with bow
fblog -f 'process == "rust" and fu == "bower"'
fblog --no-implicit-filter-return-statement -f 'if 3 > 2 then return true else return false end'
# Keys that are not valid Lua identifiers, like log.level, get converted to log_level.
# Every character that is not _ or a letter will be converted to _
fblog -d -f 'log_level == "WARN"' sample_elastic.log
# nested fields are converted to lua records
fblog -d -f 'status.a == 100' sample_nested.json.log
# array fields are converted to lua tables (index starts with 1)
fblog -d -f 'status.d[2] == "a"' sample_nested.json.log
fblog tries to detect the message, severity, and timestamp of a log entry. This behavior can be customized; see --help for more information.
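For instance, fblog reads from stdin, so you can pipe a single line through it; the JSON line below is illustrative, and the block is guarded so it is a no-op where fblog isn't installed:

```shell
# A hypothetical JSON log line with the fields fblog looks for
line='{"time":"2022-09-22T16:31:00Z","level":"info","process":"play","message":"Hello fblog"}'

if command -v fblog >/dev/null 2>&1; then
    echo "$line" | fblog
fi
```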
You can customize fblog messages: Format output:
fblog -p --main-line-format "{{#if short_message}}{{ red short_message }}{{/if}}" sample.json.log
The following sanitized variables are provided by fblog:
For the default formatting, see --help.
Nested values are registered as objects, so you can use nested.value to access nested values.
fblog also provides Handlebars helpers for use in format templates.
fblog disables color output if the NO_COLOR environment variable is present.
cargo install fblog
Available in package managers: AUR, brew
bustd: Lightweight process killer daemon to handle out-of-memory scenarios on Linux.
bustd seems to use less memory than some other lean daemons such as earlyoom:
$ ps -F -C bustd
UID PID PPID C SZ RSS PSR STIME TTY TIME CMD
vrmiguel 353609 187407 5 151 8 2 01:20 pts/2 00:00:00 target/x86_64-unknown-linux-musl/release/bustd -V -n
$ ps -F -C earlyoom
UID PID PPID C SZ RSS PSR STIME TTY TIME CMD
vrmiguel 350497 9498 0 597 688 6 01:12 pts/1 00:00:00 ./earlyoom/
¹: RSS stands for resident set size and represents the portion of RAM occupied by a process.
²: Compared when bustd was in this commit and earlyoom in this one; bustd compiled with musl libc and earlyoom with glibc through GCC 11.1. Different configurations would likely change these figures.
Much like earlyoom and nohang, bustd uses adaptive sleep times during its memory polling. Unlike these two, however, bustd does not read from /proc/meminfo, instead opting for the sysinfo syscall.
This approach has its upsides and downsides. The amount of free RAM that sysinfo reads does not account for cached memory, while MemAvailable in /proc/meminfo does. On the other hand, the sysinfo syscall is one order of magnitude faster, at least according to this kernel patch (granted, from 2015).
As bustd can't rely solely on the free RAM readings of sysinfo, it checks for memory stress through Pressure Stall Information.
bustd will try to lock all pages mapped into its address space. Much like earlyoom, bustd uses mlockall to avoid being sent to swap, which allows the daemon to remain responsive even when the system memory is under heavy load and susceptible to thrashing.
The Linux kernel, since version 4.20 (when built with CONFIG_PSI=y), presents canonical new pressure metrics for memory, CPU, and IO. In the words of Facebook Incubator:
PSI stats are like barometers that provide fair warning of impending resource
shortages, enabling you to take more proactive, granular, and nuanced steps
when resources start becoming scarce.
More specifically, bustd checks how long, in microseconds, processes have stalled in the last ten seconds. By default, bustd will kill a process when processes have stalled for 25 microseconds in the last ten seconds.
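The PSI metrics bustd relies on can be inspected directly from the shell. A small sketch; the sample line and the echoed message are illustrative, and bustd's real logic of course lives in the daemon:

```shell
# PSI metrics live under /proc/pressure on Linux >= 4.20 built with CONFIG_PSI=y
# (on other systems the file won't exist, hence the guard)
if [ -r /proc/pressure/memory ]; then
    cat /proc/pressure/memory
fi

# The "total" field is the cumulative stall time in microseconds.
# Parsing it from a sample line (numbers are illustrative):
line="full avg10=0.00 avg60=0.00 avg300=0.00 total=10147"
total=${line##*total=}
echo "memory stalled for ${total} microseconds"
```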
Available on the Arch User Repository
aur/bustd-git r81.caab293-3 (+1 0.93)
Process killer daemon for out-of-memory scenarios
aur/bustd-pacman-hooks 0.1.0-1 (+2 1.13)
Series of Pacman Hooks to Ensure 'bustd' is Always Running an Up-to-Date Binary
aur/bustd 0.1.0-1 (+2 1.13) (Out of date: 2022-02-25)
Process Killer Daemon for Out-of-Memory Scenarios, written in Rust
rrun: A command launcher for Linux, similar to gmrun.
Note: Apart from the occasional fix, this project is not actively developed anymore. rrun works fine and should run/compile for the time being on stable Rust. Alternatives to rrun are gmrun and rofi. Feel free to fork, request ownership, or submit pull requests.
rrun is a minimalistic command launcher in Rust, similar to gmrun. It started as a playground to learn Rust, but since I have used it all day for months now, it's probably useful for others as well. It replaced gmrun and gnome-do on my laptop. rrun has few features: it can do bash completion and run commands, and that's it. It will also append the commands being run to your bash history.
It requires GTK 3.10+.
You have several options:
I have mapped the unused, needless CapsLock key to some other key and set up Gnome or whatever (i3wm in my case) to launch rrun on keypress.
My ~/.Xmodmap:
remove Lock = Caps_Lock
keysym Caps_Lock = XF86HomePage
Don't forget to run "xmodmap ~/.Xmodmap" after login.
The relevant parts of ~/.i3/config:
bindsym XF86HomePage exec rrun
for_window [title="rrun"] floating enable
exec --no-startup-id xmodmap ~/.Xmodmap
The build process needs pbuilder/cowbuilder installed in Debian (apt-get install cowbuilder pbuilder). A Debian testing build image can be created with:
sudo cowbuilder --create --distribution testing
Install eatmydata (on the build machine and in the image) to speed up dpkg (from https://wiki.debian.org/cowbuilder ):
On the build machine:
apt-get install eatmydata
mcfly: Fly through your shell history. Great Scott!
McFly replaces your default ctrl-r
shell history search with an intelligent search engine that takes into account your working directory and the context of recently executed commands. McFly's suggestions are prioritized in real time with a small neural network.
TL;DR: an upgraded ctrl-r
where history results make sense for what you're working on right now.
Press ctrl-r to bring up a full-screen reverse history search prioritized with a small neural network. Use % to match any number of characters when searching.
The key feature of McFly is smart command prioritization powered by a small neural network that runs in real time. The goal is for the command you want to run to always be one of the top suggestions.
When suggesting a command, McFly takes into consideration your current working directory, the context of recently executed commands, and how often and how recently you have run the command.
Install the tap:
brew tap cantino/mcfly
Install mcfly
:
brew install cantino/mcfly/mcfly
Add the following to the end of your ~/.bashrc
, ~/.zshrc
, or ~/.config/fish/config.fish
file:
Bash:
eval "$(mcfly init bash)"
Zsh:
eval "$(mcfly init zsh)"
Fish:
mcfly init fish | source
Run . ~/.bashrc
/ . ~/.zshrc
/ source ~/.config/fish/config.fish
or restart your terminal emulator.
Remove mcfly
:
brew uninstall mcfly
Remove the tap:
brew untap cantino/mcfly
Remove the lines you added to ~/.bashrc
/ ~/.zshrc
/ ~/.config/fish/config.fish
.
Update the ports tree
sudo port selfupdate
Install mcfly
:
sudo port install mcfly
Add the following to the end of your ~/.bashrc
, ~/.zshrc
, or ~/.config/fish/config.fish
file, as appropriate:
Bash:
eval "$(mcfly init bash)"
Zsh:
eval "$(mcfly init zsh)"
Fish:
mcfly init fish | source
Run . ~/.bashrc
/ . ~/.zshrc
/ source ~/.config/fish/config.fish
or restart your terminal emulator.
crabz: Multi-threaded compression and decompression CLI tool.
This is currently a proof-of-concept CLI tool using the gzp crate.
Supported formats: gzip, bgzf, mgzip, zlib, deflate, and snap.
brew tap sstadick/crabz
brew install crabz
curl -LO https://github.com/sstadick/crabz/releases/download/<latest>/crabz-linux-amd64.deb
sudo dpkg -i crabz-linux-amd64.deb
cargo install crabz
conda install -c conda-forge crabz
❯ crabz -h
Compress and decompress files
USAGE:
crabz [FLAGS] [OPTIONS] [FILE]
FLAGS:
-d, --decompress
Flag to switch to decompressing inputs. Note: this flag may change in future releases
-h, --help
Prints help information
-I, --in-place
Perform the compression / decompression in place.
**NOTE** this will remove the input file at completion.
-V, --version
Prints version information
OPTIONS:
-l, --compression-level <compression-level>
Compression level [default: 6]
-p, --compression-threads <compression-threads>
Number of compression threads to use, or if decompressing a format that allow for multi-threaded
decompression, the number to use. Note that > 4 threads for decompression doesn't seem to help [default:
32]
-f, --format <format>
The format to use [default: gzip] [possible values: gzip, bgzf, mgzip,
zlib, deflate, snap]
-o, --output <output>
Output path to write to, empty or "-" to write to stdout
-P, --pin-at <pin-at>
Specify the physical core to pin threads at.
This can provide a significant performance improvement, but has the downside of possibly conflicting with
other pinned cores. If you are running multiple instances of `crabz` at once you can manually space out the
pinned cores.
# Example
- Instance 1 has `-p 4 -P 0` set indicating that it will use 4 cores pinned at 0, 1, 2, 3
- Instance 2 has `-p 4 -P 4` set indicating that it will use 4 cores pinned at 4, 5, 6, 7
ARGS:
<FILE>
Input file to read from, empty or "-" to read from stdin
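Putting the flags above together, a round-trip sketch; the file names are examples, the flags come from the help text quoted above, and the block is guarded so it is a no-op where crabz isn't installed:

```shell
if command -v crabz >/dev/null 2>&1; then
    printf 'hello crabz\n' > data.txt
    crabz -l 9 data.txt > data.txt.gz       # compress to stdout, redirect to a file
    crabz -d data.txt.gz                    # decompress back to stdout
    crabz -f snap -p 4 -o data.sz data.txt  # snap format, 4 compression threads
fi
```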
funzzy: A configurable filesystem watcher inspired by entr.
Configure the execution of different commands using semantic YAML.
# .watch.yaml
# list here all the events and the commands that it should execute
# TIP: include '.watch.yaml' in your .git/info/exclude to ignore it.
- name: run my tests
run: make test
change: 'tests/**'
ignore: 'tests/integration/**'
- name: fast compile sass
run: compass compile src/static/some.scss
change: ['src/static/**', 'src/assets/*']
- name: Starwars
run: telnet towel.blinkenlights.nl
change: '.watch.yaml'
- name: say hello
run: say hello
change: '.watch.yaml'
run_on_init: true
Create a lightweight watcher to run my tests every time something in my project changes, so I won't forget to keep my tests passing. Funzzy was made with Rust, which is why it consumes almost nothing to run.
brew tap cristianoliveira/tap
brew update
brew install funzzy
curl -s https://raw.githubusercontent.com/cristianoliveira/funzzy/master/linux-install.sh | sh
cargo install funzzy
*Make sure you have $HOME/.cargo/bin in your PATH: export PATH="$PATH:$HOME/.cargo/bin"
Make sure you have installed the following dependencies:
Clone this repo and do:
make install
Initializing with boilerplate:
funzzy init
Change the yaml as you want. Then run:
funzzy watch
Run with an arbitrary command and stdin:
find . -name '*.rs' | funzzy 'cargo build'
Run an arbitrary command on an interval, in seconds:
funzzy run 'cargo build' 10
Thank you for following this article.
Top 10 Rust crates you must know
1663861860
In today's post we will learn about 10 Favorite System tools Libraries for Rust.
What is System tools?
System Tool is a variant of Win32/Winwebsec - a family of programs that claims to scan for malware and displays fake warnings of "malicious programs and viruses". They then inform the user that he or she needs to pay money to register the software to remove these non-existent threats.
Table of contents:
cd
that learns your habits.A fast alternative to cd
that learns your habits.
It remembers which directories you use most frequently, so you can "jump" to them in just a few keystrokes.
zoxide works on all major shells.
z foo # cd into highest ranked directory matching foo
z foo bar # cd into highest ranked directory matching foo and bar
z foo / # cd into a subdirectory starting with foo
z ~/foo # z also works like a regular cd command
z foo/ # cd into relative path
z .. # cd one level up
z - # cd into previous directory
zi foo # cd with interactive selection (using fzf)
z foo<SPACE><TAB> # show interactive completions (zoxide v0.8.0+, bash 4.4+/fish/zsh only)
Read more about the matching algorithm here.
zoxide runs on most major platforms. If your platform isn't listed below, please open an issue.
Linux
To install zoxide, run this command in your terminal:
curl -sS https://raw.githubusercontent.com/ajeetdsouza/zoxide/main/install.sh | bash
Or, you can use a package manager:
Distribution | Repository | Instructions |
---|---|---|
Any | crates.io | cargo install zoxide --locked |
Any | conda-forge | conda install -c conda-forge zoxide |
Any | Linuxbrew | brew install zoxide |
Alpine Linux 3.13+ | Alpine Linux Packages | apk add zoxide |
Arch Linux | Arch Linux Community | pacman -S zoxide |
CentOS 7+ | Copr | dnf copr enable atim/zoxide dnf install zoxide |
Debian 11+ | Debian Packages | apt install zoxide |
Devuan 4.0+ | Devuan Packages | apt install zoxide |
Fedora 32+ | Fedora Packages | dnf install zoxide |
Gentoo | GURU Overlay | eselect repository enable guru emerge --sync guru emerge app-shells/zoxide |
Manjaro | pacman -S zoxide | |
NixOS | nixpkgs | nix-env -iA nixpkgs.zoxide |
openSUSE Tumbleweed | openSUSE Factory | zypper install zoxide |
Parrot OS | apt install zoxide | |
Raspbian 11+ | Raspbian Packages | apt install zoxide |
Ubuntu 21.04+ | Ubuntu Packages | apt install zoxide |
Void Linux | Void Linux Packages | xbps-install -S zoxide |
macOS
To install zoxide, use a package manager:
Repository | Instructions |
---|---|
crates.io | cargo install zoxide --locked |
conda-forge | conda install -c conda-forge zoxide |
Homebrew | brew install zoxide |
MacPorts | port install zoxide |
Or, run this command in your terminal:
curl -sS https://raw.githubusercontent.com/ajeetdsouza/zoxide/main/install.sh | bash
Windows
To install zoxide, run this command in your command prompt:
curl.exe -A "MS" https://webinstall.dev/zoxide | powershell
Or, you can use a package manager:
Repository | Instructions |
---|---|
crates.io | cargo install zoxide --locked |
Chocolatey | choco install zoxide |
conda-forge | conda install -c conda-forge zoxide |
Scoop | scoop install zoxide |
BSD
To install zoxide, use a package manager:
Distribution | Repository | Instructions |
---|---|---|
Any | crates.io | cargo install zoxide --locked |
DragonFly BSD | DPorts | pkg install zoxide |
FreeBSD | FreshPorts | pkg install zoxide |
NetBSD | pkgsrc | pkgin install zoxide |
Android
To install zoxide, use a package manager:
Repository | Instructions |
---|---|
Termux | pkg install zoxide |
A modern replacement for 'ps' written by Rust.
ps
top
)Download from release page, and extract to the directory in PATH.
You can install from Nixpkgs.
nix-env --install procs
You can install from snapcraft.
sudo snap install procs
You can install from homebrew.
brew install procs
You can install from MacPorts.
sudo port install procs
You can install from the Alpine Linux repository.
The correct repository (see above link for the most up-to-date information) should be enabled before apk add
.
sudo apk add procs
You can install from the Arch Linux community repository.
sudo pacman -S procs
You can install with scoop.
scoop install procs
sudo dnf install procs
You can install with rpm.
sudo rpm -i https://github.com/dalance/procs/releases/download/v0.13.1/procs-0.13.1-1.x86_64.rpm
You can install with cargo.
cargo install procs
Terminal bandwidth utilization tool.
This is a CLI utility for displaying current network utilization by process, connection and remote IP/hostname.
bandwhich
sniffs a given network interface and records IP packet size, cross referencing it with the /proc
filesystem on linux, lsof
on macOS, or using WinApi on windows. It is responsive to the terminal window size, displaying less info if there is no room for it. It will also attempt to resolve ips to their host name in the background using reverse DNS on a best effort basis.
If you're on linux, you can download the generic binary from the releases.
pacman -S bandwhich
bandwhich
is available in nixpkgs
, and can be installed, for example, with nix-env
:
nix-env -iA nixpkgs.bandwhich
xbps-install -S bandwhich
bandwhich
is available in COPR, and can be installed via DNF:
sudo dnf copr enable atim/bandwhich -y && sudo dnf install bandwhich
brew install bandwhich
pkg install bandwhich
or
cd /usr/ports/net-mgmt/bandwhich && make install clean
bandwhich
can be installed using the Rust package manager, cargo. It might be in your distro repositories if you're on linux, or you can install it via rustup. You can find additional installation instructions here.
The minimum supported Rust version is 1.39.0.
cargo install bandwhich
On Linux, after installing with cargo:
Cargo installs bandwhich
to ~/.cargo/bin/bandwhich
but you need root priviliges to run bandwhich
. To fix that, there are a few options:
sudo setcap cap_sys_ptrace,cap_dac_read_search,cap_net_raw,cap_net_admin+ep $(which bandwhich)
sudo ~/.cargo/bin/bandwhich
instead of just bandwhich
sudo ln -s ~/.cargo/bin/bandwhich /usr/local/bin/
(or another path on root's PATH)sudo env "PATH=$PATH" bandwhich
sudo -E bandwhich
sudo cargo install bandwhich --root /usr/local/bin/
On Windows, after installing with cargo:
You might need to first install npcap for capturing packets on windows.
To install bandwhich
on OpenWRT, you'll need to compile a binary that would fit its processor architecture. This might mean you would have to cross compile if, for example, you're working on an x86_64
and the OpenWRT is installed on an arm7
. Here is an example of cross compiling in this situation:
uname -m
git clone https://github.com/imsnif/bandwhich
cross
using cargo install cross
bandwhich
package using cross build --target armv7-unknown-linux-musleabihf
target/armv7-unknown-linux-musleabihf/debug/bandwhich
to the router using scp
by running scp bandwhich root@192.168.1.1:~/
(here, 192.168.1.1 would be the IP address of your router)../bandwhich
Yet another cross-platform graphical process/system monitor.
As (yet another) process/system visualization and management application, bottom supports the typical features:
Graphical visualization widgets for:
Widgets for displaying info about:
A process widget for displaying, sorting, and searching info about processes, as well as support for:
Cross-platform support for Linux, macOS, and Windows, with more planned in the future.
Customizable behaviour that can be controlled with command-line flags or a config file, such as:
Some other nice stuff, like:
And more!
You can find more details in the documentation.
Installation via cargo is done by installing the bottom
crate:
# If required, update Rust on the stable channel
rustup update stable
cargo install bottom --locked
# Alternatively, --locked may be omitted if you wish to not used locked dependencies:
cargo install bottom
There is an official package that can be installed with pacman
:
sudo pacman -Syu bottom
A .deb
file is provided on each stable release and nightly builds for x86, aarch64, and armv7 (note stable ARM builds only started with 0.6.8). You could install this way doing something like:
curl -LO https://github.com/ClementTsang/bottom/releases/download/0.6.8/bottom_0.6.8_amd64.deb
sudo dpkg -i bottom_0.6.8_amd64.deb
bottom is available as a snap:
sudo snap install bottom
# To allow the program to run as intended
sudo snap connect bottom:mount-observe
sudo snap connect bottom:hardware-observe
sudo snap connect bottom:system-observe
sudo snap connect bottom:process-control
Available in COPR:
sudo dnf copr enable atim/bottom -y
sudo dnf install bottom
Available in GURU and dm9pZCAq overlays:
sudo eselect repository enable guru
sudo emerge --sync guru
echo "sys-process/bottom" | sudo tee /etc/portage/package.accept_keywords/10-guru
sudo emerge sys-process/bottom::guru
or
sudo eselect repository enable dm9pZCAq
sudo emerge --sync dm9pZCAq
sudo emerge sys-process/bottom::dm9pZCAq
Small command-line JSON Log viewer.
fblog -a message -a "status > a" sample_nested.json.log
To filter log messages it is possible to use lua. If you are unsure which variables are available you can use --print-lua
to see the code generated by fblog.
fblog -f 'level ~= "info"' # will print all message where the level is not info
fblog -f 'process == "play"' # will print all message where the process is play
fblog -f 'string.find(fu, "bow.*") ~= nil' # will print all messages where fu starts with bow
fblog -f 'process == "play"' # will print all message where the process is play
fblog -f 'process == "rust" and fu == "bower"'
fblog --no-implicit-filter-return-statement -f 'if 3 > 2 then return true else return false end'
# not valid lua identifiers like log.level gets converted to log_level.
# Every character that is not _ or a letter will be converted to _
fblog -d -f 'log_level == "WARN"' sample_elastic.log
# nested fields are converted to lua records
fblog -d -f 'status.a == 100' sample_nested.json.log
# array fields are converted to lua tables (index starts with 1)
fblog -d -f 'status.d[2] == "a"' sample_nested.json.log
fblog
tries to detect the message, severity and timestamp of a log entry. This behavior can be customized. See --help
for more information.
You can customize fblog messages: Format output:
fblog -p --main-line-format "{{#if short_message}}{{ red short_message }}{{/if}}" sample.json.log
The following sanitized variables are provided by fblog:
For the default formatting see --help
Nested values are registered as objects. So you can use nested.value
to access nested values.
handlebar helpers:
fblog
disables color output if the NO_COLOR
environment variable is present.
cargo install fblog
Available in package managers: AUR, brew
Lightweight process killer daemon to handle out-of-memory scenarios on Linux.
bustd
is a lightweight process killer daemon for out-of-memory scenarios for Linux!
bustd
seems to use less memory than some other lean daemons such as earlyoom
:
$ ps -F -C bustd
UID PID PPID C SZ RSS PSR STIME TTY TIME CMD
vrmiguel 353609 187407 5 151 8 2 01:20 pts/2 00:00:00 target/x86_64-unknown-linux-musl/release/bustd -V -n
$ ps -F -C earlyoom
UID PID PPID C SZ RSS PSR STIME TTY TIME CMD
vrmiguel 350497 9498 0 597 688 6 01:12 pts/1 00:00:00 ./earlyoom/
¹: RSS stands for resident set size and represents the portion of RAM occupied by a process.
²: Compared when bustd was in this commit and earlyoom in this one. bustd
compiled with musl libc and earlyoom with glibc through GCC 11.1. Different configurations would likely change these figures.
Much like earlyoom
and nohang
, bustd
uses adaptive sleep times during its memory polling. Unlike these two, however, bustd
does not read from /proc/meminfo
, instead opting for the sysinfo
syscall.
This approach has its up- and downsides. The amount of free RAM that sysinfo
reads does not account for cached memory, while MemAvailable
in /proc/meminfo
does.
The sysinfo
syscall is one order of magnitude faster, at least according to this kernel patch (granted, from 2015).
As bustd
can't solely rely on the free RAM readings of sysinfo
, we check for memory stress through Pressure Stall Information.
bustd will try to lock all pages mapped into its address space. Much like earlyoom, bustd uses mlockall to avoid being sent to swap, which allows the daemon to remain responsive even when the system memory is under heavy load and susceptible to thrashing.
The Linux kernel, since version 4.20 (when built with CONFIG_PSI=y), presents canonical new pressure metrics for memory, CPU, and IO. In the words of Facebook Incubator:
PSI stats are like barometers that provide fair warning of impending resource shortages, enabling you to take more proactive, granular, and nuanced steps when resources start becoming scarce.
More specifically, bustd checks how long, in microseconds, processes have stalled in the last 10 seconds. By default, bustd will kill a process when processes have stalled for 25 microseconds in the last ten seconds.
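On a PSI-enabled kernel these metrics live in /proc/pressure/memory, in lines such as `some avg10=1.50 avg60=0.40 avg300=0.10 total=123456`. The parsing helper below is illustrative, not bustd's code; it pulls out the 10-second average so a daemon could compare it against a threshold.

```rust
/// Parse the `avg10` value out of a PSI line like
/// "some avg10=1.50 avg60=0.40 avg300=0.10 total=123456".
/// Returns None if the field is missing or malformed.
fn parse_avg10(line: &str) -> Option<f64> {
    line.split_whitespace()
        .find_map(|field| field.strip_prefix("avg10="))
        .and_then(|v| v.parse().ok())
}

fn main() {
    // A sample "some" line, as found in /proc/pressure/memory on a
    // PSI-enabled kernel (>= 4.20, built with CONFIG_PSI=y).
    let line = "some avg10=1.50 avg60=0.40 avg300=0.10 total=123456";
    let avg10 = parse_avg10(line).unwrap();
    println!("memory stall over the last 10s: {avg10}");
    // A killer daemon would act once this exceeds its configured threshold.
}
```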
Available on the Arch User Repository
aur/bustd-git r81.caab293-3 (+1 0.93)
Process killer daemon for out-of-memory scenarios
aur/bustd-pacman-hooks 0.1.0-1 (+2 1.13)
Series of Pacman Hooks to Ensure 'bustd' is Always Running an Up-to-Date Binary
aur/bustd 0.1.0-1 (+2 1.13) (Out of date: 2022-02-25)
Process Killer Daemon for Out-of-Memory Scenarios, written in Rust
A command launcher for Linux, similar to gmrun.
Note: Apart from the occasional fix, this project is not actively developed anymore. rrun works fine and should run/compile for the time being on rust stable. Alternatives to rrun are gmrun and rofi. Feel free to fork, request ownership or commit pull requests.
rrun is a minimalistic command launcher in Rust, similar to gmrun. It started as a playground to learn Rust, but since I have been using it all day for months now, it's probably useful for others as well. It replaced gmrun and gnome-do on my laptop. rrun has few features: it can do bash completion and run commands, and that's it. It will also append the commands being run to your bash history.
GTK3.10+
You have several options:
I have mapped the unused, needless CapsLock key to some other key and set up Gnome or whatever (i3wm in my case) to launch rrun on keypress.
My ~/.Xmodmap:
remove Lock = Caps_Lock
keysym Caps_Lock = XF86HomePage
Don't forget to run "xmodmap ~/.Xmodmap" after login.
The relevant parts of ~/.i3/config:
bindsym XF86HomePage exec rrun
for_window [title="rrun"] floating enable
exec --no-startup-id xmodmap ~/.Xmodmap
The build process needs pbuilder/cowbuilder installed in Debian (apt-get install cowbuilder pbuilder). A Debian testing build image can be created with:
sudo cowbuilder --create --distribution testing
Install eatmydata (on the build machine and in the image) to speed up dpkg (from https://wiki.debian.org/cowbuilder ):
On the build machine:
apt-get install eatmydata
Fly through your shell history. Great Scott!
McFly replaces your default ctrl-r shell history search with an intelligent search engine that takes into account your working directory and the context of recently executed commands. McFly's suggestions are prioritized in real time with a small neural network.
TL;DR: an upgraded ctrl-r where history results make sense for what you're working on right now.
Press ctrl-r to bring up a full-screen reverse history search prioritized with a small neural network. Use % to match any number of characters when searching.
The key feature of McFly is smart command prioritization powered by a small neural network that runs in real time. The goal is for the command you want to run to always be one of the top suggestions.
When suggesting a command, McFly takes into consideration:
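As a rough illustration, the kinds of context McFly's README describes (how often and how recently a command ran, the directory it ran in, its exit status) can be combined into a ranking score. The linear score below is a hand-weighted stand-in, not McFly's actual trained network:

```rust
/// Illustrative features for a history entry. The weights and features here
/// are assumptions for the sketch, not McFly's real model.
struct Features {
    frequency: f64,       // how often the command was run (normalized 0..1)
    recency: f64,         // how recently it was run (normalized, 1.0 = just now)
    same_directory: bool, // was it run in the current working directory?
    exited_ok: bool,      // did it exit with status 0 last time?
}

/// A hand-weighted linear score standing in for the network's output.
fn score(f: &Features) -> f64 {
    2.0 * f.frequency
        + 1.5 * f.recency
        + if f.same_directory { 1.0 } else { 0.0 }
        + if f.exited_ok { 0.5 } else { -0.5 }
}

fn main() {
    let frequent_here = Features { frequency: 0.9, recency: 0.2, same_directory: true, exited_ok: true };
    let recent_elsewhere = Features { frequency: 0.3, recency: 0.9, same_directory: false, exited_ok: false };
    // Candidates would be sorted by descending score before display.
    assert!(score(&frequent_here) > score(&recent_elsewhere));
}
```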
Install the tap:
brew tap cantino/mcfly
Install mcfly:
brew install cantino/mcfly/mcfly
Add the following to the end of your ~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish file:
Bash:
eval "$(mcfly init bash)"
Zsh:
eval "$(mcfly init zsh)"
Fish:
mcfly init fish | source
Run . ~/.bashrc / . ~/.zshrc / source ~/.config/fish/config.fish or restart your terminal emulator.
Remove mcfly:
brew uninstall mcfly
Remove the tap:
brew untap cantino/mcfly
Remove the lines you added to ~/.bashrc / ~/.zshrc / ~/.config/fish/config.fish.
Update the ports tree
sudo port selfupdate
Install mcfly:
sudo port install mcfly
Add the following to the end of your ~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish file, as appropriate:
Bash:
eval "$(mcfly init bash)"
Zsh:
eval "$(mcfly init zsh)"
Fish:
mcfly init fish | source
Run . ~/.bashrc / . ~/.zshrc / source ~/.config/fish/config.fish or restart your terminal emulator.
Multi-threaded compression and decompression CLI tool.
This is currently a proof-of-concept CLI tool using the gzp crate.
Supported formats:
brew tap sstadick/crabz
brew install crabz
curl -LO https://github.com/sstadick/crabz/releases/download/<latest>/crabz-linux-amd64.deb
sudo dpkg -i crabz-linux-amd64.deb
cargo install crabz
conda install -c conda-forge crabz
❯ crabz -h
Compress and decompress files
USAGE:
crabz [FLAGS] [OPTIONS] [FILE]
FLAGS:
-d, --decompress
Flag to switch to decompressing inputs. Note: this flag may change in future releases
-h, --help
Prints help information
-I, --in-place
Perform the compression / decompression in place.
**NOTE** this will remove the input file at completion.
-V, --version
Prints version information
OPTIONS:
-l, --compression-level <compression-level>
Compression level [default: 6]
-p, --compression-threads <compression-threads>
Number of compression threads to use, or if decompressing a format that allow for multi-threaded
decompression, the number to use. Note that > 4 threads for decompression doesn't seem to help [default:
32]
-f, --format <format>
The format to use [default: gzip] [possible values: gzip, bgzf, mgzip,
zlib, deflate, snap]
-o, --output <output>
Output path to write to, empty or "-" to write to stdout
-P, --pin-at <pin-at>
Specify the physical core to pin threads at.
This can provide a significant performance improvement, but has the downside of possibly conflicting with
other pinned cores. If you are running multiple instances of `crabz` at once you can manually space out the
pinned cores.
# Example
- Instance 1 has `-p 4 -P 0` set indicating that it will use 4 cores pinned at 0, 1, 2, 3
- Instance 2 has `-p 4 -P 4` set indicating that it will use 4 cores pinned at 4, 5, 6, 7
ARGS:
<FILE>
Input file to read from, empty or "-" to read from stdin
A configurable filesystem watcher inspired by entr.
Configure execution of different commands using semantic yaml.
# .watch.yaml
# list here all the events and the commands that it should execute
# TIP: include '.watch.yaml' in your .git/info/exclude to ignore it.
- name: run my tests
run: make test
change: 'tests/**'
ignore: 'tests/integration/**'
- name: fast compile sass
run: compass compile src/static/some.scss
change: ['src/static/**', 'src/assets/*']
- name: Starwars
run: telnet towel.blinkenlights.nl
change: '.watch.yaml'
- name: say hello
run: say hello
change: '.watch.yaml'
run_on_init: true
Create a lightweight watcher to run my tests every time something in my project changes, so I won't forget to keep my tests passing. Funzzy is written in Rust, which is why it consumes almost nothing to run.
brew tap cristianoliveira/tap
brew update
brew install funzzy
curl -s https://raw.githubusercontent.com/cristianoliveira/funzzy/master/linux-install.sh | sh
cargo install funzzy
Make sure you have $HOME/.cargo/bin in your PATH: export PATH="$PATH:$HOME/.cargo/bin"
Make sure you have installed the following dependencies:
Clone this repo and do:
make install
Initializing with boilerplate:
funzzy init
Change the yaml as you want. Then run:
funzzy watch
Run with some arbitrary command and stdin
find . -name '*.rs' | funzzy 'cargo build'
Run some arbitrary command in an interval of seconds
funzzy run 'cargo build' 10
Thank you for following this article.
Top 10 Rust crates you must know
Serde
*Serde is a framework for serializing and deserializing Rust data structures efficiently and generically.*
You may be looking for:
#[derive(Serialize, Deserialize)]
Cargo.toml:
[dependencies]
# The core APIs, including the Serialize and Deserialize traits. Always
# required when using Serde. The "derive" feature is only required when
# using #[derive(Serialize, Deserialize)] to make Serde work with structs
# and enums defined in your crate.
serde = { version = "1.0", features = ["derive"] }
# Each data format lives in its own crate; the sample code below uses JSON
# but you may be using a different one.
serde_json = "1.0"
use serde::{Serialize, Deserialize};
#[derive(Serialize, Deserialize, Debug)]
struct Point {
x: i32,
y: i32,
}
fn main() {
let point = Point { x: 1, y: 2 };
// Convert the Point to a JSON string.
let serialized = serde_json::to_string(&point).unwrap();
// Prints serialized = {"x":1,"y":2}
println!("serialized = {}", serialized);
// Convert the JSON string back to a Point.
let deserialized: Point = serde_json::from_str(&serialized).unwrap();
// Prints deserialized = Point { x: 1, y: 2 }
println!("deserialized = {:?}", deserialized);
}
Serde is one of the most widely used Rust libraries so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the #rust-questions or #rust-beginners channels of the unofficial community Discord (invite: https://discord.gg/rust-lang-community), the #rust-usage or #beginners channels of the official Rust Project Discord (invite: https://discord.gg/rust-lang), or the #general stream in Zulip. For asynchronous, consider the [rust] tag on StackOverflow, the /r/rust subreddit which has a pinned weekly easy questions post, or the Rust Discourse forum. It's acceptable to file a support issue in this repo but they tend not to get as many eyes as any of the above and may get closed without a response after some time.
Download Details:
Author: serde-rs
Source Code: https://github.com/serde-rs/serde
License: View license
In today's post we will learn about 10 Favorite Rust Libraries for Security Tools.
What is Security Tools?
Security Tools are all information used to verify Client when implementing transactions, including but not limited to user name, password, registered telephone number, online code, OTP, and other types of information as prescribed for each trading mode.
Table of contents:
Rust bindings for libinjection.
Add libinjection to the dependencies of Cargo.toml:
libinjection = "0.2"
extern crate libinjection;
use libinjection::{sqli, xss};
let (is_sqli, fingerprint) = sqli("' OR '1'='1' --").unwrap();
assert!(is_sqli);
assert_eq!("s&sos", fingerprint);
Fingerprints: Please refer to fingerprints.txt.
let is_xss = xss("<script type='text/javascript'>alert('xss');</script>").unwrap();
assert!(is_xss);
Stop half-done API specifications with a CLI tool that helps you avoid undefined user behaviour by validating your API specifications.
💣 What is Cherrybomb?
Cherrybomb is a CLI tool that helps you avoid undefined user behavior by validating your API specifications and running API security tests.
🔨 How does it work?
Cherrybomb reads your API spec file (Open API Specification) and validates it for best practices and the OAS specification, then it tests to verify that the API follows the OAS file and tests for common vulnerabilities.
The output is a detailed table with any issues found, guiding you to the exact problem and location to help you solve it quickly.
🐾 Get Started
Linux/MacOS:
curl https://cherrybomb.blstsecurity.com/install | /bin/bash
The script requires sudo permissions to move the cherrybomb bin into /usr/local/bin/.
(If you want to view the shell script, or even help improve it, see /scripts/install.sh.)
cargo install cherrybomb
If you don't have cargo installed, you can install it from here
You can use our docker container that we host on our public repo in AWS, though we require an API key for it; you can get it at our CI pipeline integration wizard (after you sign up).
docker run --mount type=bind,source=PATH_TO_OAS_DIR,destination=/home public.ecr.aws/t1d5k0l0/cherrybomb:latest cherrybomb oas -f home/OAS_NAME --api-key=API-KEY
You can also install Cherrybomb by cloning this repo and building it using cargo (only works with the nightly toolchain):
git clone https://github.com/blst-security/cherrybomb && cd cherrybomb
cargo build --release
sudo mv ./target/release/cherrybomb /usr/local/bin
After installing the CLI, verify it's working by running
cherrybomb --version
cherrybomb oas --file <PATH> --format <cli/txt/json>
A simple, fast, recursive content discovery tool written in Rust.
Ferox is short for Ferric Oxide. Ferric Oxide, simply put, is rust. The name rustbuster was taken, so I decided on a variation. 🤷
feroxbuster is a tool designed to perform Forced Browsing.
Forced browsing is an attack where the aim is to enumerate and access resources that are not referenced by the web application, but are still accessible by an attacker.
feroxbuster uses brute force combined with a wordlist to search for unlinked content in target directories. These resources may store sensitive information about web applications and operating systems, such as source code, credentials, internal network addressing, etc.
This attack is also known as Predictable Resource Location, File Enumeration, Directory Enumeration, and Resource Enumeration.
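The enumeration step can be sketched with a hypothetical helper (this is not feroxbuster's code): build every candidate URL from a base URL, a wordlist, and a set of extensions. A real tool then requests each candidate, usually concurrently, and reports responses whose status codes aren't filtered out.

```rust
/// Build the candidate URLs a forced-browsing tool would probe: every
/// wordlist entry joined to the base URL, plus one variant per extension.
fn candidates(base: &str, words: &[&str], extensions: &[&str]) -> Vec<String> {
    let base = base.trim_end_matches('/');
    let mut urls = Vec::new();
    for word in words {
        urls.push(format!("{base}/{word}"));
        for ext in extensions {
            urls.push(format!("{base}/{word}.{ext}"));
        }
    }
    urls
}

fn main() {
    let urls = candidates("http://127.0.0.1/", &["admin", "backup"], &["php"]);
    // A real scanner would now GET each URL and keep e.g. non-404 responses.
    assert_eq!(urls, [
        "http://127.0.0.1/admin",
        "http://127.0.0.1/admin.php",
        "http://127.0.0.1/backup",
        "http://127.0.0.1/backup.php",
    ]);
}
```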
This section will cover the minimum amount of information to get up and running with feroxbuster. Please refer to the documentation, as it's much more comprehensive.
There are quite a few other installation methods, but these snippets should cover the majority of users.
If you're using Kali, this is the preferred install method. Installing from the repos adds a ferox-config.toml in /etc/feroxbuster/, adds command completion for bash, fish, and zsh, includes a man page entry, and installs feroxbuster itself.
sudo apt update && sudo apt install -y feroxbuster
curl -sL https://raw.githubusercontent.com/epi052/feroxbuster/master/install-nix.sh | bash
Invoke-WebRequest https://github.com/epi052/feroxbuster/releases/latest/download/x86_64-windows-feroxbuster.exe.zip -OutFile feroxbuster.zip
Expand-Archive .\feroxbuster.zip
.\feroxbuster\feroxbuster.exe -V
Please refer to the documentation.
Here are a few brief examples to get you started. Please note, feroxbuster can do a lot more than what's listed below. As a result, there are many more examples, with demonstration gifs that highlight specific features, in the documentation.
Options that take multiple values are very flexible. Consider the following ways of specifying extensions:
./feroxbuster -u http://127.1 -x pdf -x js,html -x php txt json,docx
The command above adds .pdf, .js, .html, .php, .txt, .json, and .docx to each URL.
All of the methods above (multiple flags, space separated, comma separated, etc...) are valid and interchangeable. The same goes for urls, headers, status codes, queries, and size filters.
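The merging behavior can be sketched as follows (illustrative, not feroxbuster's actual argument parser): each occurrence of the flag may itself be a comma-separated list, space-separated values arrive as additional occurrences from the shell, and everything is flattened into one set of values.

```rust
/// Flatten repeated option values the way flexible CLIs do: each occurrence
/// may be a comma-separated list, and all occurrences are merged in order.
fn flatten_values(occurrences: &[&str]) -> Vec<String> {
    occurrences
        .iter()
        .flat_map(|occ| occ.split(','))
        .map(|v| v.trim().to_string())
        .filter(|v| !v.is_empty())
        .collect()
}

fn main() {
    // Equivalent of: -x pdf -x js,html -x php txt json,docx
    // (space-separated values reach the parser as separate occurrences)
    let exts = flatten_values(&["pdf", "js,html", "php", "txt", "json,docx"]);
    assert_eq!(exts, ["pdf", "js", "html", "php", "txt", "json", "docx"]);
}
```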
./feroxbuster -u http://127.1 -H Accept:application/json "Authorization: Bearer {token}"
./feroxbuster -u http://[::1] --no-recursion -vv
cat targets | ./feroxbuster --stdin --silent -s 200 301 302 --redirects -x js | fff -s 200 -o js-files
./feroxbuster -u http://127.1 --insecure --proxy http://127.0.0.1:8080
./feroxbuster -u http://127.1 --proxy socks5h://127.0.0.1:9050
./feroxbuster -u http://127.1 --query token=0123456789ABCDEF
A database protocol-aware proxy that is used to enforce access policies 👮
Inspektor is a protocol-aware proxy that is used to enforce access policies. It helps organizations in securing their data assets and obtaining data compliance.
Inspektor is designed to work with all databases such as Postgres, MySQL, and MongoDB.
The access policies are defined using OPA (Open Policy Agent). Since policies are written in OPA, you can write policies such as granting access to the support engineer only if a support ticket is assigned. Go to the official documentation to learn more about OPA.
It also helps you avoid running DELETE or UPDATE statements accidentally.
OPA is used for a unified toolset and framework for policy across the cloud-native stack. Use OPA to release, analyze, and review policies without sacrificing availability or performance.
Here is the example policy, written using rego. This example policy allows users with ‘support’ roles to modify the shipped column of the claimed_items table and hides the email column of the customer table from the users with the ‘support’ role.
package inspektor.resource.acl
default allow = false
default protected_attributes = []
default allowed_attributes = []
role_permission := {
"support": [{"postgres-prod": {
# insert is not allowed for the support roles.
"insert": {"allowed": false, "allowed_attributes": {}},
# shipped column of claimed_items only allowed to update
"update": {"allowed": true, "allowed_attributes": {"prod.public.claimed_items.shipped"}},
# copy is not allowed
"copy": {"allowed": false, "allowed_attributes": {}, "protected_attributes":{}},
# support role can view every columns of the database except email column of customers table.
"view": {"allowed": true, "protected_attributes": {"prod.public.customers.email"}}, }}],
}
# retrieve all the resources that can be accessed by the
# incoming groups. e.g.: support, admin, dev
resources[resource] {
resource = role_permission[input.groups[_]][_]
}
# retrieve all the permissions for the given datasource and
# action. e.g.: view, update
permission = resources[_][input.datasource][input.action]
# this permission is allowed.
allow {
permission.allowed
}
# what are the attributes that are allowed to
# modify
allowed_attributes = intersection(attributes) {
attributes := {attribute | attribute := permission.allowed_attributes}
}
# attributes that needs to be hidden
# to the user.
protected_attributes = intersection(attributes) {
attributes := {attribute | attribute := permission.protected_attributes}
}
Inspektor comprises 2 main components.
The control plane acts as a management service that dynamically configures your data plane in order to enforce policies.
It is like a control center where an admin can configure and access all the roles of a particular employee or a user.
The data plane is deployed along with your data service. Dataplane enforces the access policies on all the queries that are coming to your database by intercepting the network traffic.
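The decision the data plane makes per intercepted query can be sketched in Rust. This is a drastically simplified, hypothetical lookup table mirroring the shape of the example policy above (role and action in, an allow/deny plus hidden columns out); Inspektor itself evaluates the real rego policy instead.

```rust
use std::collections::HashSet;

/// Result of a per-query policy decision: may it run, and which columns
/// must be hidden from the result set.
struct Permission {
    allowed: bool,
    protected_attributes: HashSet<&'static str>,
}

/// Hypothetical hard-coded stand-in for evaluating the rego policy.
fn permission_for(role: &str, action: &str) -> Permission {
    match (role, action) {
        // support may view everything except the customers email column
        ("support", "view") => Permission {
            allowed: true,
            protected_attributes: ["prod.public.customers.email"].into_iter().collect(),
        },
        // inserts are denied outright for the support role
        ("support", "insert") => Permission { allowed: false, protected_attributes: HashSet::new() },
        // default deny
        _ => Permission { allowed: false, protected_attributes: HashSet::new() },
    }
}

fn main() {
    let p = permission_for("support", "view");
    assert!(p.allowed);
    assert!(p.protected_attributes.contains("prod.public.customers.email"));
    assert!(!permission_for("support", "insert").allowed);
}
```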
A scriptable network authentication cracker.
authoscope is a scriptable network authentication cracker. While the space for common service bruteforce is already very well saturated, you may still end up writing your own python scripts when testing credentials for web applications.
The scope of authoscope is specifically cracking custom services. This is done by writing scripts that are loaded into a Lua runtime. Those scripts represent a single service and provide a verify(user, password) function that returns either true or false. Concurrency, progress indication, and reporting are magically provided by the authoscope runtime.
If you are on an Arch Linux based system, use
pacman -S authoscope
If you are on Mac OSX, use
brew install authoscope
To build from source, make sure you have rust and libssl-dev installed and run
cargo install
Verify your setup is complete with
authoscope --help
Install essential build tools
sudo apt-get update && sudo apt-get dist-upgrade
sudo apt-get install build-essential libssl-dev pkg-config
Install rust
curl -sf -L https://static.rust-lang.org/rustup.sh | sh
source $HOME/.cargo/env
Install authoscope
cd /path/to/authoscope
cargo install
A simple script could look like this:
descr = "example.com"
function verify(user, password)
session = http_mksession()
-- get csrf token
req = http_request(session, 'GET', 'https://example.com/login', {})
resp = http_send(req)
if last_err() then return end
-- parse token from html
html = resp['text']
csrf = html_select(html, 'input[name="csrf"]')
token = csrf["attrs"]["value"]
-- send login
req = http_request(session, 'POST', 'https://example.com/login', {
form={
user=user,
password=password,
csrf=token
}
})
resp = http_send(req)
if last_err() then return end
-- search response for successful login
html = resp['text']
return html:find('Login successful') ~= nil
end
Please see the reference and examples for all available functions. Keep in mind that you can use print(x) and authoscope oneshot to debug your script.
A TCP connection hijacker, rust rewrite of shijack.
tcp connection hijacker, rust rewrite of shijack from 2001.
This was written for TAMUctf 2018, brick house 100. The target was a telnet server that was protected by 2FA. Since the challenge wasn't authenticated, there have been multiple solutions for this. Our solution (cyclopropenylidene) was waiting until the authentication was done, then inject a tcp packet into the telnet connection:
# if you don't know one of the ports use 0 to match any port
echo 'cat ~/.ctf_flag' | sudo rshijack tap0 172.16.13.20:37386 172.16.13.19:23
After some attempts this command was accepted and executed by the telnet server, resulting in a tcp packet containing the flag.
The way this works is by sniffing for a packet of a specific connection, then read the SEQ and ACK fields. Using that information, it's possible to send a packet on a raw socket that is accepted by the remote server as valid.
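The sequence-number arithmetic at the core of this technique can be sketched directly. This is an illustrative helper, not rshijack's code: given the last sniffed packet from the side being impersonated, it computes the fields a forged follow-up segment must carry.

```rust
/// Given the last sniffed packet from the side we want to impersonate
/// (its SEQ, ACK, and TCP payload length), compute the sequence numbers a
/// forged follow-up packet must carry to be accepted by the peer.
fn next_numbers(seq: u32, ack: u32, payload_len: u32) -> (u32, u32) {
    // The forged packet continues the byte stream: SEQ advances past the
    // observed payload; ACK stays at the last acknowledged remote byte.
    // Sequence numbers wrap around modulo 2^32.
    (seq.wrapping_add(payload_len), ack)
}

fn main() {
    // Sniffed: client sent 5 bytes at SEQ 1000, acknowledging server byte 5000.
    let (seq, ack) = next_numbers(1000, 5000, 5);
    assert_eq!((seq, ack), (1005, 5000));
    // A raw-socket sender would now craft a TCP segment with these fields,
    // and the server treats it as the next in-order data from the client.
}
```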
Installation
pacman -S rshijack
If needed, rshijack can be pulled as a docker image. The image is currently about 10.2MB.
docker run -it --init --rm --net=host kpcyrd/rshijack eth0 172.16.13.20:37386 172.16.13.19:23
A semi-automatic OSINT framework and package manager.
sn0int (pronounced /snoɪnt/) is a semi-automatic OSINT framework and package manager. It's used by IT security professionals, bug bounty hunters, law enforcement agencies and in security awareness trainings to gather intelligence about a given target or about yourself. sn0int enumerates attack surface by semi-automatically processing public information and mapping the results in a unified format for followup investigations.
Among other things, sn0int is currently able to:
sn0int is heavily inspired by recon-ng and maltego, but remains more flexible and is fully opensource. None of the investigations listed above are hardcoded in the source, instead they are provided by modules that are executed in a sandbox. You can easily extend sn0int by writing your own modules and share them with other users by publishing them to the sn0int registry. This allows you to ship updates for your modules on your own instead of pull-requesting them into the sn0int codebase.
For questions and support join us on IRC: irc.hackint.org:6697/#sn0int
Archlinux
pacman -S sn0int
Mac OSX
brew install sn0int
Debian/Ubuntu/Kali
There are prebuilt packages signed by a debian maintainer. We can import the key for this repository out of the debian keyring.
apt install debian-keyring
gpg -a --export --keyring /usr/share/keyrings/debian-maintainers.gpg git@rxv.cc | apt-key add -
apt-key adv --keyserver keyserver.ubuntu.com --refresh-keys git@rxv.cc
echo deb http://apt.vulns.sexy stable main > /etc/apt/sources.list.d/apt-vulns-sexy.list
apt update
apt install sn0int
Docker
docker run --rm --init -it -v "$PWD/.cache:/cache" -v "$PWD/.data:/data" kpcyrd/sn0int
Alpine
apk add sn0int
OpenBSD
pkg_add sn0int
Gentoo
layman -a pentoo
emerge --ask net-analyzer/sn0int
NixOS
nix-env -i sn0int
For everything else please have a look at the detailed list.
A secure multithreaded packet sniffer.
sniffglue is a network sniffer written in rust. Network packets are parsed concurrently using a thread pool to utilize all cpu cores. Project goals are that you can run sniffglue securely on untrusted networks and that it must not crash when processing packets. The output should be as useful as possible by default.
# sniff with default filters (dhcp, dns, tls, http)
sniffglue enp0s25
# increase the filter sensitivity (arp)
sniffglue -v enp0s25
# increase the filter sensitivity (cjdns, ssdp, dropbox, packets with valid utf8)
sniffglue -vv enp0s25
# almost everything
sniffglue -vvv enp0s25
# everything
sniffglue -vvvv enp0s25
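The thread-pool design described above (captured packets fanned out to workers that parse them concurrently) can be sketched with standard-library channels. The "parser" here is a stand-in; sniffglue's real parsers decode the ethernet/IP/TCP layers.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// Stand-in "parser": a real one would decode protocol layers.
fn parse(packet: &[u8]) -> String {
    format!("packet of {} bytes", packet.len())
}

fn main() {
    let (work_tx, work_rx) = mpsc::channel::<Vec<u8>>();
    let (out_tx, out_rx) = mpsc::channel::<String>();
    // Shared receiver behind a mutex so several workers can pull jobs.
    let work_rx = Arc::new(Mutex::new(work_rx));

    let workers: Vec<_> = (0..4)
        .map(|_| {
            let rx = Arc::clone(&work_rx);
            let tx = out_tx.clone();
            thread::spawn(move || loop {
                // Take the next captured packet, or exit when the channel closes.
                let pkt = match rx.lock().unwrap().recv() {
                    Ok(p) => p,
                    Err(_) => break,
                };
                tx.send(parse(&pkt)).unwrap();
            })
        })
        .collect();
    drop(out_tx); // main keeps no output sender; workers hold the clones

    // "Capture" two packets and hand them to the pool.
    for pkt in [vec![0u8; 60], vec![0u8; 1500]] {
        work_tx.send(pkt).unwrap();
    }
    drop(work_tx); // close the work channel so workers terminate

    let results: Vec<String> = out_rx.iter().collect();
    assert_eq!(results.len(), 2);
    for w in workers {
        w.join().unwrap();
    }
}
```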
pacman -S sniffglue
brew install sniffglue
First included in debian bullseye, ubuntu 21.04.
apt install sniffglue
apk add sniffglue
layman -a pentoo
emerge --ask net-analyzer/sniffglue
nix-env -i sniffglue
guix install sniffglue
To build from source make sure you have libpcap and libseccomp installed. On debian based systems:
# install the dependencies
sudo apt install libpcap-dev libseccomp-dev
# install rust with rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
# install sniffglue and test it
cargo install sniffglue
sniffglue --help
Or you can build a Debian package via cargo-deb:
cargo deb
A Comprehensive Web Fuzzer and Content Discovery Tool.
install_rustbuster() {
echo "Installing latest version of Rustbuster"
latest_version=`curl -s https://github.com/phra/rustbuster/releases | grep "rustbuster-v" | head -n1 | cut -d'/' -f6`
echo "Latest release: $latest_version"
mkdir -p /opt/rustbuster
wget -qP /opt/rustbuster https://github.com/phra/rustbuster/releases/download/$latest_version/rustbuster-$latest_version-x86_64-unknown-linux-gnu
ln -fs /opt/rustbuster/rustbuster-$latest_version-x86_64-unknown-linux-gnu /opt/rustbuster/rustbuster
chmod +x /opt/rustbuster/rustbuster
echo "Done! Try running"
echo "/opt/rustbuster/rustbuster -h"
}
install_rustbuster
rustbuster 2.1.0
DirBuster for rust
USAGE:
rustbuster [SUBCOMMAND]
FLAGS:
-h, --help Prints help information
-V, --version Prints version information
SUBCOMMANDS:
dir Directories and files enumeration mode
dns A/AAAA entries enumeration mode
fuzz Custom fuzzing enumeration mode
help Prints this message or the help of the given subcommand(s)
vhost Virtual hosts enumeration mode
tilde IIS 8.3 shortname enumeration mode
EXAMPLES:
1. Dir mode:
rustbuster dir -u http://localhost:3000/ -w examples/wordlist -e php
2. Dns mode:
rustbuster dns -d google.com -w examples/wordlist
3. Vhost mode:
rustbuster vhost -u http://localhost:3000/ -w examples/wordlist -d test.local -x "Hello"
4. Fuzz mode:
rustbuster fuzz -u http://localhost:3000/login \
-X POST \
-H "Content-Type: application/json" \
-b '{"user":"FUZZ","password":"FUZZ","csrf":"CSRFCSRF"}' \
-w examples/wordlist \
-w /usr/share/seclists/Passwords/Common-Credentials/10-million-password-list-top-10000.txt \
-s 200 \
--csrf-url "http://localhost:3000/csrf" \
--csrf-regex '\{"csrf":"(\w+)"\}'
5. Tilde mode:
rustbuster tilde -u http://localhost:3000/ -e aspx -X OPTIONS
dir usage:
rustbuster-dir
Directories and files enumeration mode
USAGE:
rustbuster dir [FLAGS] [OPTIONS] --url <url> --wordlist <wordlist>...
FLAGS:
-f, --append-slash Tries to also append / to the base request
-K, --exit-on-error Exits on connection errors
-h, --help Prints help information
-k, --ignore-certificate Disables TLS certificate validation
--no-banner Skips initial banner
--no-progress-bar Disables the progress bar
-V, --version Prints version information
-v, --verbose Sets the level of verbosity
OPTIONS:
-e, --extensions <extensions> Sets the extensions [default: ]
-b, --http-body <http-body>                        Uses the specified HTTP body [default: ]
-H, --http-header <http-header>... Appends the specified HTTP header
-X, --http-method <http-method> Uses the specified HTTP method [default: GET]
-S, --ignore-status-codes <ignore-status-codes> Sets the list of status codes to ignore [default: 404]
-s, --include-status-codes <include-status-codes> Sets the list of status codes to include [default: ]
-o, --output <output> Saves the results in the specified file [default: ]
-t, --threads <threads> Sets the amount of concurrent requests [default: 10]
-u, --url <url> Sets the target URL
-a, --user-agent <user-agent> Uses the specified User-Agent [default: rustbuster]
-w, --wordlist <wordlist>... Sets the wordlist
EXAMPLE:
rustbuster dir -u http://localhost:3000/ -w examples/wordlist -e php
dns usage:
rustbuster-dns
A/AAAA entries enumeration mode
USAGE:
rustbuster dns [FLAGS] [OPTIONS] --domain <domain> --wordlist <wordlist>...
FLAGS:
-K, --exit-on-error Exits on connection errors
-h, --help Prints help information
--no-banner Skips initial banner
--no-progress-bar Disables the progress bar
-V, --version Prints version information
-v, --verbose Sets the level of verbosity
OPTIONS:
-d, --domain <domain> Uses the specified domain
-o, --output <output> Saves the results in the specified file [default: ]
-t, --threads <threads> Sets the amount of concurrent requests [default: 10]
-w, --wordlist <wordlist>... Sets the wordlist
EXAMPLE:
rustbuster dns -d google.com -w examples/wordlist
vhost usage:
rustbuster-vhost
Virtual hosts enumeration mode
USAGE:
rustbuster vhost [FLAGS] [OPTIONS] --domain <domain> --ignore-string <ignore-string>... --url <url> --wordlist <wordlist>...
FLAGS:
-K, --exit-on-error Exits on connection errors
-h, --help Prints help information
-k, --ignore-certificate Disables TLS certificate validation
--no-banner Skips initial banner
--no-progress-bar Disables the progress bar
-V, --version Prints version information
-v, --verbose Sets the level of verbosity
OPTIONS:
-d, --domain <domain> Uses the specified domain to bruteforce
-b, --http-body <http-body> Uses the specified HTTP body [default: ]
-H, --http-header <http-header>... Appends the specified HTTP header
-X, --http-method <http-method> Uses the specified HTTP method [default: GET]
-S, --ignore-status-codes <ignore-status-codes> Sets the list of status codes to ignore [default: 404]
-x, --ignore-string <ignore-string>... Ignores results with specified string in the HTTP body
-s, --include-status-codes <include-status-codes> Sets the list of status codes to include [default: ]
-o, --output <output> Saves the results in the specified file [default: ]
-t, --threads <threads> Sets the amount of concurrent requests [default: 10]
-u, --url <url> Sets the target URL
-a, --user-agent <user-agent> Uses the specified User-Agent [default: rustbuster]
-w, --wordlist <wordlist>... Sets the wordlist
EXAMPLE:
rustbuster vhost -u http://localhost:3000/ -w examples/wordlist -d test.local -x "Hello"
A password manager, filesystem compatible with pass.
The root crate ripasso is a library for accessing and decrypting passwords stored in pass format (GPG-encrypted files), with a file-watcher event emitter.
Multiple UIs in different stages of development are available in subcrates.
To build all UIs:
cargo build --all
PR's are very welcome!
If you want to talk to the developers, please join our matrix room here.
This is a reimplementation of https://github.com/cortex/gopass in Rust. I started it mainly because https://github.com/go-qml/qml is unmaintained. Also, using a safe language for your passwords seems like a good idea.
A TUI interface based on cursive. It supports password age display and password editing. I use this as my daily password manager.
cargo build -p ripasso-cursive
This is mostly working, but needs updates.
cargo build -p ripasso-qt
For it to run, you need to be in the qml directory.
cd qml
cargo run
Build
cargo build -p ripasso-gtk
Thank you for following this article.
Fuzzing Rust crate library using honggfuzz-rs fuzzer (ical-rs) - Rust Security
(This suite of tools is 100% compatible with branches. If you think this is confusing, you can suggest a new name here.)
git-branchless is a suite of tools which enhances Git in several ways:
It makes Git easier to use, both for novices and for power users. Examples:
git undo: a general-purpose undo command. See the blog post git undo: We can do better.
git restack: to repair broken commit graphs.
It adds more flexibility for power users. Examples:
git sync
: to rebase all local commit stacks and branches without having to check them out first.git move
: The ability to move subtrees rather than "sticks" while cleaning up old branches, not touching the working copy, etc.git next/prev
: to quickly jump between commits and branches in a commit stack.git co -i/--interactive
: to interactively select a commit to check out.It provides faster operations for large repositories and monorepos, particularly at large tech companies. Examples:
git status
or invalidate build artifacts).git-branchless
provides the fastest implementation of rebase among Git tools and UIs, for the above reasons.See also the User guide and Design goals.
Undo almost anything:
Why not `git reflog`?

`git reflog` is a tool to view the previous positions of a single reference (like `HEAD`), which can be used to undo operations. But since it only tracks the position of a single reference, complicated operations like rebases can be tedious to reverse-engineer. `git undo` operates at a higher level of abstraction: the entire state of your repository.

`git reflog` also fundamentally can't be used to undo some rare operations, such as certain branch creations, updates, and deletions. See the architecture document for more details.
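As a quick illustration of what `git reflog` does track, here is a scratch-repo sketch using plain Git only (`git undo` itself is not shown, since it requires `git-branchless` to be installed):

```shell
#!/bin/sh
# Build a throwaway repo, move HEAD a few times, then inspect the reflog.
# Each entry records one movement of a single reference (HEAD) -- which is
# exactly why multi-step operations like rebases are tedious to unpick.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
ci() { git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty "$@"; }
ci -m "first"
ci -m "second"
ci --amend -m "second (amended)"   # rewrite the tip
git reflog                          # one line per HEAD movement
```

The newest entry reads `commit (amend): second (amended)`: the reflog remembers where `HEAD` was, not what the rest of the repository looked like, which is the gap `git undo` fills.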
What doesn't `git undo` handle?

`git undo` relies on features in recent versions of Git to work properly. See the compatibility chart.

Currently, `git undo` can't undo the following. You can find the design document to handle some of these cases in issue #10.

- "Uncommitting" a commit, as with `git reset HEAD^`. You can use the `git uncommit` command instead. See issue #3.
- Merge-conflict states, where `git status` shows a message like `path/to/file (both modified)`, so that you can resolve that specific conflict differently. This is tracked by issue #10 above.

Fundamentally, `git undo` is not intended to handle changes to untracked files.
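For the "uncommit" case above, the stock-Git workaround looks like this (a scratch-repo sketch; `git uncommit` itself needs `git-branchless` installed and is not shown):

```shell
#!/bin/sh
# "Uncommit" with plain Git: git reset HEAD^ moves the branch back one
# commit but leaves the commit's changes in the working copy.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
ci() { git -c user.name=demo -c user.email=demo@example.com commit -q "$@"; }
ci --allow-empty -m "base"
echo "hello" > notes.txt
git add notes.txt
ci -m "add notes"
git reset -q HEAD^      # drop the commit, keep notes.txt on disk
git log --oneline       # only "base" remains
git status --short      # notes.txt is back to untracked
```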
Comparison to other Git undo tools:

- `gitjk`: requires a shell alias. Only undoes the most recent command. Only handles some Git operations (e.g. doesn't handle rebases).
- `git-extras/git-undo`: only undoes commits at the current `HEAD`.
- `git-annex undo`: only undoes the most recent change to a given file or directory.
- `thefuck`: only undoes historical shell commands. Only handles some Git operations (e.g. doesn't handle rebases).

Visualize your commit history with the smartlog (`git sl`):
Why not `git log --graph`?

`git log --graph` only shows commits which have branches attached to them. If you prefer to work without branches, then `git log --graph` won't work for you.

To support users who rewrite their commit graph extensively, `git sl` also points out commits which have been abandoned and need to be repaired (descendants of commits marked with `rewritten as abcd1234`). They can be automatically fixed up with `git restack`, or manually handled.
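The "abandoned commit" situation that the smartlog flags can be reproduced with plain Git (a scratch-repo sketch; `git restack` is the git-branchless command that would repair it, and is not run here):

```shell
#!/bin/sh
# Amend a commit that already has a descendant: the descendant keeps the
# old, rewritten parent in its history, leaving it "abandoned".
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
ci() { git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty "$@"; }
ci -m "parent"
old_parent=$(git rev-parse HEAD)
ci -m "child"
git checkout -q "$old_parent"       # detach HEAD at the original parent
ci --amend -m "parent (amended)"    # rewrite it; "child" still sits on the old parent
git log --oneline --all             # old and new parent now coexist
```

With stock Git you would now have to rebase `child` onto `parent (amended)` by hand; `git restack` automates exactly that repair.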
Edit your commit graph without fear:
Why not `git rebase -i`?

Interactive rebasing with `git rebase -i` is fully supported, but it has a couple of shortcomings: `git rebase -i` can only repair a linear series of commits, not trees. If you modify a commit with multiple children, then you have to be sure to rebase all of the other child commits appropriately.

When you use `git rebase -i` with `git-branchless`, you will be prompted to repair your commit graph if you abandon any commits.
See https://github.com/arxanas/git-branchless/wiki/Installation.
Short version: run `cargo install --locked git-branchless`, then run `git branchless init` in your repository.
`git-branchless` is currently in alpha. Be prepared for breaking changes, as some of the workflows and architecture may change in the future. It's believed that there are no major bugs, but it has not yet been comprehensively battle-tested. You can see the known issues in the issue tracker.

`git-branchless` follows semantic versioning. New 0.x.y versions, and new major versions after reaching 1.0.0, may change the on-disk format in a backward-incompatible way.
To be notified about new versions, select Watch » Custom » Releases in GitHub's notifications menu at the top of the page. Or use GitPunch to deliver notifications by email.
There's a lot of promising tooling developing in this space. See Related tools for more information.
Thanks for your interest in contributing! If you'd like, I'm happy to set up a call to help you onboard.
For code contributions, check out the Runbook to understand how to set up a development workflow, and the Coding guidelines. You may also want to read the Architecture documentation.
For contributing documentation, see the Wiki style guide.
Contributors should abide by the Code of Conduct.
Download details:
Author: arxanas
Source code: https://github.com/arxanas/git-branchless
License: GPL-2.0 license
#rust #rustlang #git