kubectl completion for fish shell
$ mkdir -p ~/.config/fish/completions
$ cd ~/.config/fish
$ git clone https://github.com/evanlucas/fish-kubectl-completions
$ ln -s ../fish-kubectl-completions/completions/kubectl.fish completions/
Or install via Fisher:
fisher install evanlucas/fish-kubectl-completions
This was tested using go 1.15.7 on macOS 11.1 "Big Sur".
$ make build
FISH_KUBECTL_COMPLETION_TIMEOUT
This is used to pass the --request-timeout flag to the kubectl command. It defaults to 5s.
Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.
FISH_KUBECTL_COMPLETION_COMPLETE_CRDS
This can be used to prevent completing CRDs, since some users may have limited access to resources. It defaults to 1. To disable, set it to anything other than 1.
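As a minimal sketch, overriding both settings might look like this in ~/.config/fish/config.fish (the values shown are just illustrative):
set -gx FISH_KUBECTL_COMPLETION_TIMEOUT 10s     # give slower clusters more time
set -gx FISH_KUBECTL_COMPLETION_COMPLETE_CRDS 0 # skip CRD completion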
Author: Evanlucas
Source Code: https://github.com/evanlucas/fish-kubectl-completions
License: MIT license
Beginner friendly first look at the Linux command line. We'll cover some fundamentals about what it is, how it works, man pages, basic commands, directory navigation, file manipulation, and more.
In this article we'll take a good look at the command line (also known as the CLI, console, terminal or shell).
The command line is one of the most useful and efficient tools we have as developers and as computer users in general. But using it can feel a bit overwhelming and complex when you're starting out.
In this article I'll try my best to simply explain the parts that make up the command line interface, and the basics of how it works, so you can start using it for your daily tasks.
Difference between console, command line (CLI), terminal and Shell
I think a good place to start is to know exactly what the command line is.
When referring to this, you may have heard the terms Terminal, console, command line, CLI, and shell. People often use these words interchangeably but the truth is they're actually different things.
Differentiating each isn't necessarily crucial knowledge to have, but it will help clarify things. So let's briefly explain each one.
The console is the physical device that allows you to interact with the computer.
In plain English, it's your computer screen, keyboard, and mouse. As a user, you interact with your computer through your console.
A terminal is a text input and output environment. It is a program that acts as a wrapper and allows us to enter commands that the computer processes.
In plain English again, it's the "window" in which you enter the actual commands your computer will process.
Keep in mind the terminal is a program, just like any other. And like any program, you can install it and uninstall it as you please. It's also possible to have many terminals installed in your computer and run whichever you want whenever you want.
All operating systems come with a default terminal installed, but there are many options out there to choose from, each with its own functionalities and features.
A shell is a program that acts as command-line interpreter. It processes commands and outputs the results. It interprets and processes the commands entered by the user.
Same as the terminal, the shell is a program that comes by default in all operating systems, but can also be installed and uninstalled by the user.
Different shells come with different syntax and characteristics as well. It's also possible to have many shells installed on your computer and run each one whenever you want.
In most Linux and Mac operating systems the default shell is Bash, while on Windows it's PowerShell. Some other common examples of shells are Zsh and Fish.
Shells also work as programming languages, in the sense that with them we can build scripts to make our computer execute a certain task. Scripts are nothing more than a series of instructions (commands) that we can save in a file and later execute whenever we want.
We'll take a look at scripts later on in this article. For now just keep in mind that the shell is the program your computer uses to "understand" and execute your commands, and that you can also use it to program tasks.
Also keep in mind that the terminal is the program in which the shell will run. But both programs are independent: I can have any shell run on any terminal. There's no dependence between the two programs in that sense.
The CLI is the interface in which we enter commands for the computer to process. In plain English once again, it's the space in which you enter the commands the computer will process.
This is practically the same as the terminal and in my opinion these terms can be used interchangeably.
One interesting thing to mention here is that most operating systems have two different types of interfaces: the CLI and the GUI (graphical user interface).
Why should I even care about using the terminal?
We just mentioned that most operating systems come with a GUI. So if we can see things on the screen and click around to do whatever we want, you might wonder why you should learn this complicated terminal/cli/shell thing?
The first reason is that for many tasks, it's just more efficient. We'll see some examples in a second, but there are many tasks where a GUI would require many clicks around different windows. But on the CLI these tasks can be executed with a single command.
In this sense, being comfortable with the command line will help you save time and be able to execute your tasks quicker.
The second reason is that by using commands you can easily automate tasks. As previously mentioned, we can build scripts with our shell and later on execute those scripts whenever we want. This is incredibly useful when dealing with repetitive tasks that we don't want to do over and over again.
Just to give some examples, we could build a script that creates a new online repo for us, or that creates a certain infrastructure on a cloud provider for us, or that executes a simpler task like changing our screen wallpaper every hour.
Scripting is a great way to save up time with repetitive tasks.
The third reason is that sometimes the CLI will be the only way in which we'll be able to interact with a computer. Take, for example, the case when you would need to interact with a cloud platform server. In most of these cases, you won't have a GUI available, just a CLI to run commands in.
So being comfortable with the CLI will allow you to interact with computers on all occasions.
The last reason is it looks cool and it's fun. You don't see movie hackers clicking around their computers, right? ;)
Different kinds of shells
Before diving into the actual commands you can run in your terminal, I think it's important to recognize the different types of shells out there and how to identify which shell you're currently running.
Different shells come with different syntax and different features, so to know exactly what command to enter, you first need to know what shell you're running.
For shells, there's a common standard called POSIX.
POSIX works for shells in a very similar way that ECMAScript works for JavaScript. It's a standard that dictates certain characteristics and features that all shells should comply with.
This standard was established in the 1980s and most current shells were developed according to it. That's why most shells share similar syntax and similar features.
To know what shell you're currently running, just open your terminal and enter echo $0. This will print the name of the currently running program, which in this case is the actual shell.
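For example, on a machine where Bash is the default shell, you might see something like this:
echo $0 // Output: bash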
There's not A LOT of difference between most shells. Since most of them comply with the same standard, you'll find that most of them work similarly.
There are some slight differences you might want to know about, though. Generally, the more features a shell adds, the easier and friendlier it is to interact with, but the slower it is to execute scripts and commands.
So a common practice is to use one of these "enhanced" shells like Bash or Zsh for general interaction, and a "stripped-down" shell like Ash or Dash to execute scripts.
When we get to scripting later on, we'll see how we can define what shell will execute a given script.
If you're interested in a more detailed comparison between these shells, here's a video that explains it really well:
If I had to recommend a shell, I would recommend Bash as it's the most standard and commonly used one. This means you'll be able to translate your knowledge into most environments.
But again, truth is there's not A LOT of difference between most shells. So in any case you can try a few and see which one you like best. ;)
I just mentioned that Fish comes with built-in configuration such as autocompletion and syntax highlighting. These come built in with Fish, but in Bash or Zsh you can configure these features, too.
The point is that shells are customizable. You can edit how the program works, what commands you have available, what information your prompt shows, and more.
We won't see customization options in detail here, but know that when you install a shell in your computer, certain files will be created on your system. Later on you can edit those files to customize your program.
Also, there are many plugins available online that allow you to customize your shell in an easier way. You just install them and get the features that plugin offers. Some examples are OhMyZsh and Starship.
These customization options are also true for Terminals.
So not only do you have many shell and terminal options to choose from, you also have many configuration options for each shell and terminal.
If you're starting out, all this information can feel a bit overwhelming. But just know that there are many options available, and each option can be customized too. That's it.
Most common and useful commands to use
Now that we have a foundation of how the CLI works, let's dive into the most useful commands you can start to use for your daily tasks.
Keep in mind that these examples will be based on my current configuration (Bash on a Linux OS). But most commands should apply to most configurations anyway.
echo Hello freeCodeCamp! // Output: Hello freeCodeCamp!
pwd // Output: /home/German
For example, here I'm on a React project directory I've been working on lately:
ls // Output:
node_modules package.json package-lock.json public README.md src
If you pass this command the -a flag or parameter, it will also show you hidden files or directories, like .git or .gitignore files:
ls -a // Output:
. .env .gitignore package.json public src
.. .git node_modules package-lock.json README.md
While on my home directory, I can enter cd Desktop
and it will take me to the Desktop Directory.
If I want to go up one directory, meaning go to the directory that contains the current directory, I can enter cd ..
If you enter cd alone, it will take you straight to your home directory.
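Putting pwd and cd together, a quick navigation session might look something like this (the directory names are just examples):
pwd // Output: /home/German
cd Desktop
pwd // Output: /home/German/Desktop
cd ..
pwd // Output: /home/German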
If I wanted to create a new directory called "test" I would enter mkdir test.
rmdir stands for "remove directory" and it does just that. It needs the directory name parameter, just like mkdir: rmdir test.
touch allows you to create an empty file in your current directory. As a parameter it takes the file name, like touch test.txt.
rm allows you to delete files, in the same way rmdir allows you to remove directories: rm test.txt
cp allows you to copy files or directories. This command takes two parameters: the first one is the file or directory you want to copy, and the second one is the destination of your copy (where do you want to copy your file/directory to).
If I want to make a copy of my txt file in the same directory, I can enter the following:
cp test.txt testCopy.txt
See that the directory doesn't change, as for "destination" I enter the new name of the file.
If I wanted to copy the file into a different directory, but keep the same file name, I can enter this:
cp test.txt ./testFolder/
And if I wanted to copy the file to a different folder and change the file name, I can enter this:
cp test.txt ./testFolder/testCopy.txt
Again, this command takes two parameters: the file or directory we want to move, and the destination.
mv test.txt ./testFolder/
We can change the name of the file too in the same command if we want to:
mv test.txt ./testFolder/testCopy.txt
head test.txt // Output:
this is the beginning of my test file
tail test.txt // Output:
this is the end of my test file
cd --help // output:
cd: cd [-L|[-P [-e]] [-@]] [dir]
Change the shell working directory.
Change the current directory to DIR. The default DIR is the value of the HOME shell variable.
The variable CDPATH defines the search path for the directory containing DIR. Alternative directory names in CDPATH are separated by a colon (:).
A null directory name is the same as the current directory. If DIR begins with ...
man cp // output:
CP(1) User Commands CP(1)
NAME
cp - copy files and directories
SYNOPSIS
cp [OPTION]... [-T] SOURCE DEST
cp [OPTION]... SOURCE... DIRECTORY
cp [OPTION]... -t DIRECTORY SOURCE...
DESCRIPTION
Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY.
Mandatory arguments to long options are mandatory for short options
too.
-a, --archive
same as -dR --preserve=all
--attributes-only
don't copy the file data, just the attributes
...
You can even enter man bash and that will return a huge manual about everything there is to know about this shell. ;)
You can also open a given file by passing it as a parameter: code test.txt.
Or open a new file by passing the new file name: code thisIsAJsFile.js.
If you open your file and then can't exit your editor, first look at this meme:

And then type :q! and hit enter.
The meme is funny because everyone struggles with CLI text editors at first, as most actions (like exiting the editor) are done with keyboard shortcuts. Using these editors is a whole other topic, so go look for tutorials if you're interested in learning more. ;)
ctrl+c allows you to exit the current process the terminal is running. For example, if you're creating a react app with npx create-react-app and want to cancel the build at some point, just hit ctrl+c and it will stop.
Copying text from the terminal can be done with ctrl+shift+c and pasting can be done with ctrl+shift+v
clear will clear your terminal from all previous content.
exit will close your terminal and (this is not a command but it's cool too) ctrl+alt+t will open a new terminal for you.
By pressing up and down keys you can navigate through the previous commands you entered.
By hitting tab you will get autocompletion based on the text you've written so far. By hitting tab twice you'll get suggestions based on the text you've written so far.
For example if I write edit test and hit tab twice, I get testFolder/ test.txt. If I write edit test. and hit tab, my text autocompletes to edit test.txt.
Besides working with the file system and installing/uninstalling things, interacting with Git and online repos is probably the most common thing you're going to use the terminal for as a developer.
It's a whole lot more efficient to do it from the terminal than by clicking around, so let's take a look at the most useful git commands out there.
git init // output:
Initialized empty Git repository in /home/German/Desktop/testFolder/.git/
git add adds one or more files to staging. You can either detail a specific file to add to staging or add all changed files by typing git add .
git commit commits your changes to the repository. Commits must always be accompanied by the -m flag and a commit message.
git commit -m 'This is a test commit' // output:
[master (root-commit) 6101dfe] This is a test commit
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 test.js
git status // output:
On branch master
nothing to commit, working tree clean
git clone https://github.com/coccagerman/MazeGenerator.git // output:
Cloning into 'MazeGenerator'...
remote: Enumerating objects: 15, done.
remote: Counting objects: 100% (15/15), done.
remote: Compressing objects: 100% (15/15), done.
remote: Total 15 (delta 1), reused 11 (delta 0), pack-reused 0
Unpacking objects: 100% (15/15), done.
To connect your local repository to a remote repo, you use git remote add origin, passing the remote repo's URL as a parameter (if a remote is already set, you can change its URL with git remote set-url origin):
git remote add origin https://github.com/coccagerman/testRepo.git
Keep in mind you need to create your remote repo first in order to get its URL. We'll see how you can do this from the command line with a little script later on. ;)
git remote -v // output:
origin https://github.com/coccagerman/testRepo.git (fetch)
origin https://github.com/coccagerman/testRepo.git (push)
git push // output:
Counting objects: 2, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 266 bytes | 266.00 KiB/s, done.
Total 2 (delta 0), reused 0 (delta 0)
To create a new branch, run git branch <branch name>. To list all your branches, run git branch:
git branch // output:
* main
git checkout newBranch // output:
Switched to branch 'newBranch'
If there's new code in your remote repo, the command will return the actual files that were modified in the pull. If not, we get Already up to date.
git pull // output:
Already up to date.
git diff newBranch // output:
diff --git a/newFileInNewBranch.js b/newFileInNewBranch.js
deleted file mode 100644
index e69de29..0000000
As a side comment, when comparing differences between branches or repos, visual tools like Meld are usually used. It's not that you can't visualize diffs directly in the terminal, but these tools are great for a clearer visualization.
git merge newBranch // output:
Updating f15cf51..3a3d62f
Fast-forward
newFileInNewBranch.js | 0
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 newFileInNewBranch.js
git log // output:
commit 3a3d62fe7cea7c09403c048e971a5172459d0948 (HEAD -> main, tag: TestTag, origin/main, newBranch)
Author: German Cocca <german.cocca@avature.net>
Date: Fri Apr 1 18:48:20 2022 -0300
Added new file
commit f15cf515dd3ec398210108dce092debf26ff9e12
Author: German Cocca <german.cocca@avature.net>
...
git diff --help // output:
GIT-DIFF(1) Git Manual GIT-DIFF(1)
NAME
git-diff - Show changes between commits, commit and working tree, etc
SYNOPSIS
git diff [options] [<commit>] [--] [<path>...]
git diff [options] --cached [<commit>] [--] [<path>...]
...
Our first script
Now we're ready to get to the truly fun and awesome part of the command line, scripting!
As I mentioned previously, a script is nothing more than a series of commands or instructions that we can execute at any given time. To explain how we can code one, we'll use a simple example that will allow us to create a github repo by running a single command. ;)
The first thing to do is create a .sh file. You can put it wherever you want. I called mine newGhRepo.sh.
Then open it on your text/code editor of choice.
On our first line, we'll write the following: #! /bin/sh
This is called a shebang, and its function is to declare what shell is going to run this script.
Remember previously when we mentioned that we can use a given shell for general interaction and another given shell for executing a script? Well, the shebang is the instruction that dictates what shell runs the script.
As mentioned, we're using a "stripped down" shell (also known as an sh shell) to run the scripts, as they're more efficient (though the difference might be unnoticeable to be honest, it's just a personal preference). On my computer I have dash as my sh shell.
If we wanted this script to run with bash the shebang would be #! /bin/bash
repoName=$1
Here we're declaring a variable called repoName, and assigning it to the value of the first parameter the script receives.
A parameter is a set of characters that is entered after the script/command. Like with the cd command, we need to specify a directory parameter in order to change directory (i.e.: cd testFolder).
A way we can identify parameters within a script is by using a dollar sign followed by the order in which that parameter is expected.
If I'm expecting more than one parameter I could write:
paramOne=$1
paramTwo=$2
paramThree=$3
...
Next we want to check that the user actually provided a repository name, and prompt for one if they didn't. We can do that like this:
while [ -z "$repoName" ]
do
echo 'Provide a repository name'
read -r -p $'Repository name:' repoName
done
What we're doing here is:
- checking whether the repoName variable is empty (while [ -z "$repoName" ])
- if it is, prompting the user to provide a repository name (echo 'Provide a repository name')
- and reading the user's input into the variable (read -r -p $'Repository name:' repoName)
Next, our script continues like this:
echo "# $repoName" >> README.md
git init
git add .
git commit -m "First commit"
This is creating a readme file and writing a single line with the repo name (echo "# $repoName" >> README.md), and then initializing the git repo and making a first commit.
curl -u coccagerman https://api.github.com/user/repos -d '{"name": "'"$repoName"'", "private":false}'
curl is a command to transfer data from or to a server, using one of the many supported protocols.
Next we're using the -u flag to declare the user we're creating the repo for (-u coccagerman).
Next comes the endpoint provided by the GitHub API (https://api.github.com/user/repos).
And last we're using the -d flag to pass parameters to this command. In this case we're indicating the repository name (for which we're using our repoName variable) and setting the private option to false, since we want our repo to be public.
Lots of other config options are available in the API, so check the docs for more info.
If you don't have a private token yet, you can generate it in GitHub in Settings > Developer settings > Personal access tokens
To get that we're going to use curl and the GitHub API again, like this:
GIT_URL=$(curl -H "Accept: application/vnd.github.v3+json" https://api.github.com/repos/coccagerman/"$repoName" | jq -r '.clone_url')
Here we're declaring a variable called GIT_URL and assigning it to whatever the following command returns.
The -H flag sets the header of our request.
Then we pass the GitHub API endpoint, which should contain our user name and repo name (https://api.github.com/repos/coccagerman/"$repoName").
Then we're piping the return value of our request. Piping just means passing the return value of a process as the input value of another process. We can do it with the | symbol, like <process1> | <process2>.
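As a quick standalone illustration of piping (the commands and file names here are just examples), the output of one command becomes the input of the next:
ls | wc -l               # count the entries listed by ls
cat test.txt | grep hello  # keep only the lines of test.txt that contain "hello"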
And finally we run the jq command, which is a tool for processing JSON inputs. Here we tell it to get the value of .clone_url, which is where our remote git URL will be according to the data format provided by the GitHub API.
git branch -M main
git remote add origin $GIT_URL
git push -u origin main
Our full script should look something like this:
#! /bin/sh
repoName=$1
while [ -z "$repoName" ]
do
echo 'Provide a repository name'
read -r -p $'Repository name:' repoName
done
echo "# $repoName" >> README.md
git init
git add .
git commit -m "First commit"
curl -u <yourUserName> https://api.github.com/user/repos -d '{"name": "'"$repoName"'", "private":false}'
GIT_URL=$(curl -H "Accept: application/vnd.github.v3+json" https://api.github.com/repos/<yourUserName>/"$repoName" | jq -r '.clone_url')
git branch -M main
git remote add origin $GIT_URL
git push -u origin main
One option is to enter the shell name and pass the file as a parameter, like: dash ../ger/code/projects/scripts/newGhRepo.sh.
The other is to make the file executable by running chmod u+x ../ger/code/projects/scripts/newGhRepo.sh.
Then you can just execute the file directly by running ../ger/code/projects/scripts/newGhRepo.sh.
And that's it! We have our script up and running. Every time we need a new repo we can just execute this script from whatever directory we're in.
But there's something a bit annoying about this: we need to remember the exact route of the script directory. Wouldn't it be cool to execute the script with a single command that is always the same, regardless of what directory we're in?
In come bash aliases to solve our problem.
Aliases are a way bash provides to create short names for the exact commands we want to run.
To create a new alias, we need to edit the bash configuration files in our system. These files are normally located in the home directory. Aliases can be defined in different files (mainly .bashrc or .bash_aliases).
I have a .bash_aliases file on my system, so let's edit that.
In our CLI we enter cd to go to our home directory.
Then we can enter ls -a to list all files (including hidden ones) and check whether we have either a .bashrc or .bash_aliases file in our system.
We open the file with our text/code editor of choice.
And we write our new alias like this:
alias newghrepo="dash /home/German/Desktop/ger/code/projects/scripts/newGhRepo.sh"
Here I'm declaring the alias name, the actual command I'm going to enter to run the script (newghrepo).
And between quotes, I define what that alias is going to do ("dash /home/German/Desktop/ger/code/projects/scripts/newGhRepo.sh").
See that I'm passing the absolute path of the script, so that this command works the same no matter what my current directory is.
If you don't know what the absolute path of your script is, go to the script directory in your terminal and enter readlink -f newGhRepo.sh. That should return the full path for you. ;)
After that, we can run the script by just entering newghrepo, no matter what directory we're currently in. Much quicker than opening the browser and clicking around to create our repo! =D
I hope this gives you a little taste of the kind of optimizations that are possible with scripting. It certainly requires a bit more work the first time you write, test, and set up the script. But after that, you'll never have to perform that task manually again. ;)
Round up
The terminal can feel like an intimidating and intricate place when you're starting out. But it's certainly worth it to put time and effort into learning the ins and outs of it. The efficiency benefits are too good to pass up!
Original article source at https://www.freecodecamp.org
#linux #commandline #cli #console #terminal #shell
Example utilizing Prometheus and Grafana to monitor the casper-node.
NOTE: This is just an example and is not recommended for production usage.
Requires: Docker and docker-compose.
This will add the supplied IP Address and port as a target to prometheus.
ie. ./setup_casper_monitoring.sh <node_ip> <node_port>
docker-compose up -d
docker-compose down
Note: Assumes you are browsing from the box running the containers. Otherwise update localhost to an IP address.
Grafana: http://localhost:3000
Prometheus: http://localhost:9090
There is an example of utilizing node-exporter for scraping host-level metrics. This requires node-exporter to be set up on the target host. By default it is set up on port 9100. The grafana dashboard will report No data for some panels if it is not used.
See: https://github.com/prometheus/node_exporter for more details.
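As a quick sanity check that node-exporter is reachable from the monitoring box (the IP is a placeholder and 9100 is the default port mentioned above), you could run something like:
curl http://<node_ip>:9100/metrics | head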
To view all metrics associated with a job, browse to http://localhost:9090/graph and search for the job in question.
The dashboard can be updated by modifying ./monitoring/grafana/dashboards/casper-dashboard.json and restarting the containers.
Download Details:
Author: casper-network
Source Code: https://github.com/casper-network/casper-monitoring
License:
#casper #blockchain #smartcontract #shell
A protocol release for a casper network is a combination of a casper-node binary release and network configuration. The current scripts in casper-node-launcher packages expect these to be packaged as .tar.gz files.
In the file system a protocol version is represented with underscores (_) instead of periods (.).
For example: protocol version 1.0.0 will use 1_0_0 for file paths.
Branches on the repo will be associated with the networks the protocol releases are targeted towards. This has the disadvantage of requiring CI changes made to main to be merged into all network branches.
Public networks: casper and casper-test.
Tags for a given branch should be prefixed by the branch name.
For example: a tag on the casper branch should look like casper-1.0.0 for protocol 1.0.0, and a tag on the casper-test branch should look like casper-test-1.1.2 for protocol 1.1.2.
When a node is set up, the casper-node-launcher package is installed. This contains configuration files for the casper and casper-test networks. They are installed in /etc/casper/network_configs.
Information about these files can be found in the casper-node-launcher repo.
The two values of interest are SOURCE_URL and NETWORK_NAME. These are the URL of the web server hosting files and the path to that network's specific files. For example, MainNet is genesis.casperlabs.io/casper.
Within this location, there will be a protocol_versions file. This is plain text with one protocol version per line in underscore format. Such as:
1_0_0
1_1_2
1_2_1
1_3_0
An entry should exist for every protocol version that is needed to currently sync to the network. For each entry, a directory should exist with the same underscore protocol name. This will hold bin.tar.gz and config.tar.gz. As more host systems are supported, we may expand to include bin_rpm.tar.gz and others.
While not present on casper or casper-test hosting (because these network configs are included with casper-node-launcher), a different network should offer the conf file for that network in the same location as protocol_versions.
The integration-test network has an integration-test.conf hosted in the root of its staging directory. This would be pulled down directly into the network_configs directory of the node, so it could be used with commands.
cd /etc/casper/network_configs
sudo -u casper curl -JLO [url]/[network_name]/[network name].conf
Loading all protocols for a given network is simply sudo -u casper /etc/casper/node_util.py stage_protocol [conf filename].
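For example, assuming the conf file for MainNet is named casper.conf (the filename is an assumption based on the network_configs layout described above), the call might look like:
sudo -u casper /etc/casper/node_util.py stage_protocol casper.conf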
To finish off our example, we will list a full directory tree of our example-test network.
example-test/
example-test.conf
protocol_versions
1_0_0/
bin.tar.gz
config.tar.gz
1_1_2/
bin.tar.gz
config.tar.gz
1_2_1/
bin.tar.gz
config.tar.gz
1_3_0/
bin.tar.gz
config.tar.gz
This group of files should be installed on the server in /etc/casper/[protocol_version]/. This is done as part of /etc/casper/node_util.py stage_protocols distributed with the casper-node-launcher package.
These are the system-agnostic configuration files for a protocol release. The starting protocol version of 1.0.0 requires accounts.toml to initialize accounts at genesis. This file should not exist for any version past the genesis 1.0.0 protocol.
chainspec.toml is the configuration for the network. This must be the same for all nodes on the network to continue with consensus. The activation point for genesis is a timestamp; otherwise it is an Era ID. The protocol version must match the version used for the staging directory.
config-example.toml is the default configuration, with a location to drop in the node's IP address to create a config.toml file on the server. This is done automatically with the node_util.py script distributed with casper-node-launcher packages. The big change needed for this with a new network is the known_address list, which should have some or all of the genesis node IPs.
Other files may be included with config.tar.gz as needed for upgrade or additional functionality of the system. For example: global_state.toml can be used at an upgrade to modify something in global state.
While config.tar.gz is built via tagging on this repo in CI, it could be built manually for another network. This should be archived without a directory structure.
mkdir config
cp [path of]/config.example.toml ./config
cp [path of]/chainspec.toml ./config
cd config
tar -czvf ../config.tar.gz .
This file would be hosted in the [url]/[network_name]/[underscore protocol version] directory of the staging location, where the protocol version matches that defined in the chainspec.toml file.
The bin.tar.gz package holds a casper-node binary compiled for Ubuntu 18.04.
In addition to the appropriate casper-node binary, a README.md file is included which identifies both the platform targeted and the GitHub source for compilation.
Note: Because of similarities between binary versions and protocol versions on the MainNet casper network, it should be noted that the protocol version has no correlation to the casper-node binary version. If a new network was created, the 1.0.0 protocol should most likely use the latest viable binary version.
This file is created as part of a casper-node release and generally can be pulled directly from there for hosting.
For example: https://github.com/casper-network/casper-node/releases/tag/v1.4.5 holds bin.tar.gz as a release artifact. This would be pulled down and hosted for a network protocol.
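As a sketch, pulling that artifact with curl might look like this (the asset URL follows GitHub's usual release-download pattern and should be verified against the actual release page):
curl -JLO https://github.com/casper-network/casper-node/releases/download/v1.4.5/bin.tar.gz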
To manually package this you could minimally:
mkdir bin
cp [path of]/casper-node ./bin
cd bin
tar -czvf ../bin.tar.gz .
Where casper-node is the compiled casper-node binary targeting Ubuntu 18.04.
Download Details:
Author: casper-network
Source Code: https://github.com/casper-network/casper-protocol-release
License: Apache-2.0 license
#casper #blockchain #smartcontract #shell
Vanpool Manager
Vanpool Manager is a web application for managing registered vanpool riders for each day and direction. The frontend is a Vue.js app in the web folder. The backend is a Go API. Data is persisted in a PostgreSQL database. Session state is maintained in a cookie but will be moved to Redis soon. Authentication is handled by Azure AD.
Scripts for build and deploy are in the scripts folder. See .env.tpl to see and set needed environment variables. In Azure App Service you'll need to set these as App Settings.
The scripts/deploy.sh script creates ACR, Postgres, and Web App resources in Azure and pushes the built container from this repo to the Web App.
See github.com/joshgav/azure-dapp for guidance on how to deploy this to Kubernetes or AKS.
Author: joshgav
Source Code: https://github.com/joshgav/vanpool-manager
License: View license
Project podtato-head
Podtato-head is a prototypical cloud-native application built to colorfully demonstrate delivery scenarios using many different tools and services. It is intended to help application delivery support teams test and decide which of these to use.
The app comprises a set of microservices in podtato-head-microservices and a set of examples demonstrating how to deliver them in delivery. The services are defined with as little additional logic as possible to enable you to focus on the delivery mechanisms themselves.
Find the following set of delivery scenarios in the delivery directory. Each example scenario delivers the same end result: an API service which communicates with other API services and returns HTML composed of all their responses.
Each delivery scenario includes a walkthrough (README.md) describing how to a) install required supporting infrastructure; b) deliver podtato-head using the infrastructure; and c) test that podtato-head is operating as expected.
Each delivery scenario also includes a test (test.sh) which automates the steps described in the walkthrough.
"Single" deployment means the action effects the state of the resources only once at the time of invocation. "GitOps" deployments mean the action checks the desired state periodically and reconciles it as needed.
The following scenarios have not yet been updated for the multi-service app:
Here's how to extend podtato-head for your own purposes or to contribute to the shared repo.
podtato-head's services themselves are written in Go; entry points are in podtato-head-microservices/cmd. The entry point to the app is defined in cmd/entry and a base for each of the app's downstream services is defined in cmd/parts.
HTTP handlers and other shared functionality are defined in podtato-head-microservices/pkg.
To run local tests on the Go code, run make podtato-head-verify.
Build an image for each part - entry, hat, each arm and each leg - with make build-images.
NOTE: To apply capabilities like image scans and signatures, install required binaries first by running [sudo] make install-requirements.
To test the built images you'll need to push them to a registry so that Kubernetes can find them. make push-microservices-images can do this for GitHub's container registry if you are authorized to push to the target repo (as described next).
To push to your own fork of the podtato-head repo:
Fork podtato-head if you haven't already
Create a personal access token (PAT) with write:packages permissions and copy it
Set and export env vars GITHUB_USER to your GitHub username and GITHUB_TOKEN to the PAT, for example as follows:
export GITHUB_USER=joshgav
export GITHUB_TOKEN=goobledygook
NOTE: You can also put env vars in the .env file in the repo's root; be sure not to include those updates in commits.
To test the built images as running services in a cluster, run make test-services. This spins up a cluster using kind and deploys the services using the kubectl delivery scenario test.
These tests also rely on your GITHUB_USER and GITHUB_TOKEN env vars if you're using your own fork.
NOTE: The test-services task isn't bound to the push-images task so that they may be run separately. Make sure you run make push-images first.
All delivery scenarios are expected to run on any functional Kubernetes cluster with cluster-admin access. That is, if you can run kubectl get pods -n kube-system you should be able to run any of the tests.
If you don't have a local Kubernetes cluster for tests, kind is one to consider.
NOTE: If you use a cluster without support for LoadBalancer-type services, which is typical for test clusters like kind, you may need to replace attributes which default to LoadBalancer with NodePort or ClusterIP.
For example:
# update type property in `service` resources
find delivery -type f -name "*.yaml" -print0 | xargs -0 sed -i 's/type: LoadBalancer/type: NodePort/g'
# update custom serviceType property in Helm values file
find delivery -type f -name "*.yaml" -print0 | xargs -0 sed -i 's/serviceType: LoadBalancer/serviceType: NodePort/g'
See CONTRIBUTING.md.
Author: Podtato-head
Source Code: https://github.com/podtato-head/podtato-head
License: Apache-2.0 license
These scripts will install and run a Chainlink Node complete with a PostgreSQL DB and the Elrond Chainlink Connector. The Chainlink Node and Elrond Adapter will run as system services that can be turned on/off independently.
Before installing/running, be sure to add a valid owner.pem to the script PEM folder:
- Create a folder named PEM inside the scripts folder
- Add the owner.pem file inside the previously created folder
[FIRST RUN]
./script.sh install - installs everything needed to run the elrond-adapter
[START]
./script.sh start_chainlink - starts the Chainlink Node
./script.sh start_adapter - starts the Elrond Adapter
[STOP]
./script.sh stop_chainlink - stops the Chainlink Node
./script.sh stop_adapter - stops the Elrond Adapter
[CLEANUP]
./script.sh clean - removes installed packages
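As a sketch of the PEM setup described above (the source path of the key is just illustrative, and the commands assume you are inside the scripts folder):
mkdir -p PEM
cp ~/keys/owner.pem PEM/owner.pem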
POST /write endpoint: sends a transaction and writes the request data to the Elrond network.
Input:
{
"id": "bbfd3e3a8aed4d46abb0a89764951bf9",
"data": {
"value": "15051",
"data": {},
"sc_address": "erd1...",
"function": "submit_endpoint",
"round_id": "145"
}
}
Output:
{
"jobRunID": "bbfd3e3a8aed4d46abb0a89764951bf9",
"data": {
"result": "19feccf4b8590bcc9554ad632ff23f8344d0318fbac643bdba5fa7a605373bf4"
},
"result": "19feccf4b8590bcc9554ad632ff23f8344d0318fbac643bdba5fa7a605373bf4",
"statusCode": 200
}
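As a rough illustration of calling this endpoint with curl (the adapter's host and port are assumptions; adjust them to your deployment):
curl -X POST http://localhost:8080/write \
  -H 'Content-Type: application/json' \
  -d '{"id": "bbfd3e3a8aed4d46abb0a89764951bf9", "data": {"value": "15051", "sc_address": "erd1...", "function": "submit_endpoint", "round_id": "145"}}'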
POST /price-job endpoint: starts a price feed job which aggregates feeds from multiple sources and pushes data to the aggregator smart contract.
The data body can be left empty; it reads input values from config.toml.
Input:
{
"id": "bbfd3e3a8aed4d46abb0a89764951bf9",
"data": {}
}
Output:
{
"jobRunID": "bbfd3e3a8aed4d46abb0a89764951bf9",
"data": {
"result": {
"txHashes": [
"25d1731151692cd75aa605dcad376c6acf0cd22d6fe0a1ea50a8e2cd25c16f27",
"f95060ff47bc676f63a72cc5a51ead7ebbb1a21131d60e2273d5148a2fea3d95",
"3a3092ba6bf49ad54afbdb2b08efa91b6b024e25753797dee675091c9b8f1891",
"102ff3ef391cb4c53de2b9c672a98a4dca0c93da53be7255c827c60c8da029d3",
"9c0c4c1ab8372efc21c4bbcadfc79162564e9895c91f73d942cb96be53ddd27e"
]
}
},
"result": {
"txHashes": [
"25d1731151692cd75aa605dcad376c6acf0cd22d6fe0a1ea50a8e2cd25c16f27",
"f95060ff47bc676f63a72cc5a51ead7ebbb1a21131d60e2273d5148a2fea3d95",
"3a3092ba6bf49ad54afbdb2b08efa91b6b024e25753797dee675091c9b8f1891",
"102ff3ef391cb4c53de2b9c672a98a4dca0c93da53be7255c827c60c8da029d3",
"9c0c4c1ab8372efc21c4bbcadfc79162564e9895c91f73d942cb96be53ddd27e"
]
},
"statusCode": 200
}
POST /ethgas/denominate endpoint: fetches the latest ETH gas prices in gwei and denominates the value in a specified asset, e.g. GWEI/EGLD.
The data body can be left empty; it reads input values from config.toml.
Input:
{
"id": "bbfd3e3a8aed4d46abb0a89764951bf9",
"data": {}
}
Output:
{
"jobRunID": "bbfd3e3a8aed4d46abb0a89764951bf9",
"data": {
"result": "19feccf4b8590bcc9554ad632ff23f8344d0318fbac643bdba5fa7a605373bf4"
},
"result": "19feccf4b8590bcc9554ad632ff23f8344d0318fbac643bdba5fa7a605373bf4",
"statusCode": 200
}
Download Details:
Author: ElrondNetwork
Source Code: https://github.com/ElrondNetwork/elrond-chainlink-scripts
License:
mongosh
This repository is a monorepo for all the various components in the MongoDB Shell across all environments (REPL, Browser, Compass, etc).
MongoDB Shell works with MongoDB servers >= 4.0.
You can get the release tarball from our Downloads Page. We currently maintain MongoDB Shell on three different platforms - Windows (zip), MacOS (zip) and Linux (tgz, deb and rpm). Once downloaded, you will have to extract the binary and add it to your PATH variable. For detailed instructions for each of our supported platforms, please visit installation documentation.
$ mongosh [options] [db address] [file names (ending in .js or .mongodb)]
Options:
-h, --help Show this usage information
-f, --file [arg] Load the specified mongosh script
--host [arg] Server to connect to
--port [arg] Port to connect to
--version Show version information
--verbose Increase the verbosity of the output of the shell
--quiet Silence output from the shell during the connection process
--shell Run the shell after executing files
--nodb Don't connect to mongod on startup - no 'db address' [arg] expected
--norc Will not run the '.mongoshrc.js' file on start up
--eval [arg] Evaluate javascript
--retryWrites=[true|false] Automatically retry write operations upon transient network errors (Default: true)
Authentication Options:
-u, --username [arg] Username for authentication
-p, --password [arg] Password for authentication
--authenticationDatabase [arg] User source (defaults to dbname)
--authenticationMechanism [arg] Authentication mechanism
--awsIamSessionToken [arg] AWS IAM Temporary Session Token ID
TLS Options:
--tls Use TLS for all connections
--tlsCertificateKeyFile [arg] PEM certificate/key file for TLS
--tlsCertificateKeyFilePassword [arg] Password for key in PEM file for TLS
--tlsCAFile [arg] Certificate Authority file for TLS
--tlsAllowInvalidHostnames Allow connections to servers with non-matching hostnames
--tlsAllowInvalidCertificates Allow connections to servers with invalid certificates
--tlsCertificateSelector [arg] TLS Certificate in system store (Windows and macOS only)
--tlsCRLFile [arg] Specifies the .pem file that contains the Certificate Revocation List
--tlsDisabledProtocols [arg] Comma separated list of TLS protocols to disable [TLS1_0,TLS1_1,TLS1_2]
--tlsUseSystemCA Load the operating system trusted certificate list
API version options:
--apiVersion [arg] Specifies the API version to connect with
--apiStrict Use strict API version mode
--apiDeprecationErrors Fail deprecated commands for the specified API version
FLE Options:
--awsAccessKeyId [arg] AWS Access Key for FLE Amazon KMS
--awsSecretAccessKey [arg] AWS Secret Key for FLE Amazon KMS
--awsSessionToken [arg] Optional AWS Session Token ID
--keyVaultNamespace [arg] database.collection to store encrypted FLE parameters
--kmsURL [arg] Test parameter to override the URL of the KMS endpoint
DB Address Examples:
foo Foo database on local machine
192.168.0.5/foo Foo database on 192.168.0.5 machine
192.168.0.5:9999/foo Foo database on 192.168.0.5 machine on port 9999
mongodb://192.168.0.5:9999/foo Connection string URI can also be used
File Names:
A list of files to run. Files must end in .js and will exit after unless --shell is specified.
Examples:
Start mongosh using 'ships' database on specified connection string:
$ mongosh mongodb://192.168.0.5:9999/ships
For more information on usage: https://docs.mongodb.com/mongodb-shell.
For development, you may need to run pip3 install mtools[mlaunch] if the automatic installation causes any trouble. Then install the tooling and bootstrap the monorepo:
npm install -g lerna
npm install -g typescript
npm run bootstrap
Run all tests (this may take some time):
npm test
Run tests from a specific package:
lerna run test --scope @mongosh/cli-repl
Run tests with all output from packages:
lerna run test --stream
To test against a specific version, the MONGOSH_SERVER_TEST_VERSION environment variable can be set to a semver string specifying a server version.
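For example, to run the suite against a 5.0.0 server (the exact version string here is just an illustration):
MONGOSH_SERVER_TEST_VERSION=5.0.0 npm test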
Via npm:
npm run start
Alternatively you can also run start inside the cli-repl package, if you're sure everything else is compiled:
cd packages/cli-repl && npm run start
Compile all Typescript:
npm run compile-ts
Compile just the CLI:
npm run compile-cli
Compile the standalone executable (this may take some time):
npm run compile-exec
Compile a specific package, e.g. the .deb for Debian:
npm run compile-exec
npm run evergreen-release package -- --build-variant=debian-x64
Refer to the build package documentation.
For issues, please create a ticket in our JIRA Project.
For contributing, please refer to CONTRIBUTING.md.
Is there anything else you'd like to see in MongoDB Shell? Let us know by submitting suggestions in our feedback forum.
For our official documentation, please visit MongoDB Docs page.
Author: Mongodb-js
Source Code: https://github.com/mongodb-js/mongosh
License: Apache-2.0 license
gobrew
gobrew lets you easily switch between multiple versions of go. It is based on rbenv and pyenv.
You can install this via the command line with either curl or wget.
via curl
curl -L https://raw.github.com/grobins2/gobrew/master/tools/install.sh | sh
via wget
wget --no-check-certificate https://raw.github.com/grobins2/gobrew/master/tools/install.sh -O - | sh
Check out gobrew where you want it installed.
$ git clone git://github.com/cryptojuice/gobrew.git ~/.gobrew
Add the following to your shell config.
Note:
BASH: Add this to ~/.bashrc (~/.bash_profile for Ubuntu users).
ZSH: Add this to ~/.zshenv
export PATH="$HOME/.gobrew/bin:$PATH"
eval "$(gobrew init -)"
Source your shell config file (or reopen shell session).
gobrew install: Install a specified version of Go.
$ gobrew install 1.5
gobrew uninstall: Uninstall a specified version of Go.
$ gobrew uninstall 1.5
gobrew use: Sets which version of Go to use globally.
$ gobrew use 1.5
gobrew workspace: echoes the currently set workspace ($GOPATH). Use 'gobrew workspace set' to set your $GOPATH to the current working directory. Use 'gobrew workspace unset' to remove this setting.
$ cd /path/to/workspace
$ gobrew workspace set
$ gobrew workspace unset
Visit http://golang.org/doc/code.html#Workspaces for more on workspaces.
To upgrade, run the upgrade script from the .gobrew source with:
$ cd ~
$ ./.gobrew/tools/upgrade.sh
If you want to uninstall it, just run
$ cd ~
$ ./.gobrew/tools/uninstall.sh
from the command line and it'll remove itself.
Author: Cryptojuice
Source Code: https://github.com/cryptojuice/gobrew
License: MIT license
jest-shell-matchers
Test shell scripts while mocking specific commands
Run shell scripts and make assertions about the exit code, stdout, stderr, and termination signal that are generated. It uses the spawn-with-mocks library, so mocks can be written for specific shell commands.
The library exposes asynchronous matchers, so it requires Jest 23 or higher (to run synchronous tests, use spawn-with-mocks directly). Mocks are created by writing temporary files to disk, so they do not work if fs.writeFileSync is being mocked.
Initialization
const shellMatchers = require('jest-shell-matchers')
beforeAll(() => {
// calling this will add the matchers
// by calling expect.extend
shellMatchers()
})
Example Without Mocks
it('should test the output from a spawned process', async () => {
// this input will be executed by child_process.spawn
const input = ['sh', ['./hello-world.sh']]
const expectedOutput = {
code: 0,
signal: '',
stdout: 'Hello World\n',
stderr: '',
}
// the matcher is asynchronous, so it *must* be awaited
await expect(input).toHaveMatchingSpawnOutput(expectedOutput)
})
Example With Mocks
Mocks are created by spawn-with-mocks, which documents the mocking API. In this example, we mock the date and mkdir commands:
const fs = require('fs')
it('should mock the date and mkdir commands', async () => {
fs.writeFileSync(
'./mkdir.sh',
// this example script creates a directory
// that is named for the current date
`
#!/bin/sh
DIR_NAME=$(date +'%m-%d-%Y')
mkdir $DIR_NAME
`)
// Mocking the output
// for the date command
const date = () => {
return {
code: 0,
stdout: '01-06-2019',
stderr: ''
}
}
// Testing the input to mkdir,
// and mocking the output
const mkdir = jest.fn(dir => {
expect(dir).toBe('01-06-2019')
return {
code: 0,
stdout: '',
stderr: ''
}
})
const mocks = { date, mkdir }
const input = ['sh', ['./mkdir.sh'], { mocks }]
await expect(input).toHaveMatchingSpawnOutput(0)
expect(mocks.mkdir).toHaveBeenCalledTimes(1)
fs.unlinkSync('./mkdir.sh')
})
Mocks can also return a Number or String to shorten the code:
// The string is shorthand for stdout;
// stderr will be '' and the exit code will be 0
const date = () => '01-06-2019'
// The number is shorthand for the exit code
// stdout and stderr will be ''
const mkdir = dir => 0
Call expect with the input for spawn-with-mocks#spawn, which the matchers run internally. It can execute a script, create mocks, set environment variables, etc. When passing args or options, the input must be wrapped with an array:
await expect('ls')
.toHaveMatchingSpawnOutput(/*...*/)
await expect(['sh', ['./test.sh'], { mocks }])
.toHaveMatchingSpawnOutput(/*...*/)
The expected output can be a Number, String, RegExp, or Object.
const input = ['sh', ['./test.sh']]
await expect(input)
// Number: test the exit code
.toHaveMatchingSpawnOutput(0)
await expect(input)
// String: test the stdout for an exact match
.toHaveMatchingSpawnOutput('Hello World')
await expect(input)
// RegExp: test the stdout
.toHaveMatchingSpawnOutput(/^Hello/)
await expect(input)
// Object: the values can be Numbers, Strings, or RegExps
.toHaveMatchingSpawnOutput({
// The exit code
code: 0,
// The signal that terminated the process
// for example, 'SIGTERM' or 'SIGKILL'
signal: '',
// The stdout from the process
stdout: /^Hello/,
// The stderr from the process
stderr: ''
})
Author: Raingerber
Source Code: https://github.com/raingerber/jest-shell-matchers
License: MIT license
nostromo
nostromo is a CLI to rapidly build declarative aliases, making multi-dimensional tools on the fly.
Managing aliases can be tedious and difficult to set up. nostromo makes this process easy and reliable. The tool adds shortcuts to your .bashrc / .zshrc that call into the nostromo binary. It reads and manages all aliases within its manifest. This is used to find and execute the actual command as well as swap any substitutions to simplify calls.
nostromo can help you build complex tools in a declarative way. Tools commonly allow you to run multi-level commands like git rebase master branch or docker rmi b750fe78269d which are clear to use. Imagine if you could wrap your aliases / commands / workflow into custom commands that describe things you do often. Well, now you can with nostromo.
With nostromo you can take aliases like these:
alias ios-build='pushd $IOS_REPO_PATH;xcodebuild -workspace Foo.xcworkspace -scheme foo_scheme'
alias ios-test='pushd $IOS_REPO_PATH;xcodebuild -workspace Foo.xcworkspace -scheme foo_test_scheme'
alias android-build='pushd $ANDROID_REPO_PATH;./gradlew build'
alias android-test='pushd $ANDROID_REPO_PATH;./gradlew test'
and turn them into declarative commands like this:
build ios
build android
test ios
test android
The possibilities are endless and up to your imagination, with the ability to compose commands as you see fit.
Check out the examples folder for sample manifests with commands.
nostromo supports bash / zsh shells (other combinations untested but may work).
Using brew:
brew tap pokanop/pokanop
brew install nostromo
Using go get:
go get -u github.com/pokanop/nostromo
This command will initialize nostromo and create a manifest under ~/.nostromo:
nostromo init
To customize the directory (and change it from ~/.nostromo), set the NOSTROMO_HOME environment variable to a location of your choosing.
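For example, a minimal sketch of relocating the manifest directory before initializing (the path is just illustrative):
export NOSTROMO_HOME="$HOME/.config/nostromo"
nostromo init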
With every update, it's a good idea to run nostromo init to ensure any manifest changes are migrated and commands continue to work. nostromo will attempt to perform any migrations to files and folders at this time as well.
The quickest way to populate your commands database is using the dock feature:
nostromo dock <source>
where source can be any local or remote file source. See the Distributed Manifests section for more details.
To destroy the core manifest and start over you can always run:
nostromo destroy
Backups of manifests are automatically taken to prevent data loss in case of shenanigans gone wrong. These are located under ${NOSTROMO_HOME}/cargo. The maximum number of backups can be configured with the backupCount manifest setting.
nostromo set backupCount 10
Aliasing commands is one of the core features provided by nostromo. Instead of constantly updating shell profiles manually, nostromo will automatically keep them updated with the latest additions.
Given that nostromo is not a shell command, there are some things to note on how it makes its magic:
- Commands are generated by nostromo and executed using the eval method in a shell function.
- Commands and changes will be available immediately since nostromo reloads completions automatically.
If you want to create boring standard shell aliases you can do that with an additional flag or a config setting described below.
To add an alias (or command in nostromo parlance), simply run:
nostromo add cmd foo "echo bar"
And just like that you can now run foo like any other alias.
Descriptions for your commands can easily be added as well:
nostromo add cmd foo "echo bar" -d "My magical foo command that prints bar"
Your descriptions will show up in the shell when autocompleting!
You can also add commands and substitutions interactively by using just nostromo add without any arguments. This command will walk through prompts to guide adding new commands easily.
nostromo uses the concept of keypaths to simplify building commands and accessing the command tree. A keypath is simply a . (period) delimited string that represents the path to the command.
For example:
nostromo add cmd foo.bar.baz 'echo hello'
will build the command tree for foo -> bar -> baz such that any of these commands are now valid (of course the first two do nothing yet):
foo
foo bar
foo bar baz
where the last one will execute the echo command.
You can compose several commands together by adding commands at any node of the keypath. The default behavior is to concatenate the commands together as you walk the tree. Targeted use of ; or && can allow for running multiple commands together instead of concatenating. More easily, you can change the command mode for any of the commands to do this for you automatically. More info on this later.
nostromo allows users to manage shell aliases. By default, all commands are designed to execute the binary and resolve a command to be evaluated in the shell. This allows you to run those declarative commands easily like foo bar baz in the shell. It only creates an alias as a shell function for the root command foo and passes the remaining arguments to nostromo eval to evaluate the command tree. The result of that is executed with eval in the shell. Standard shell aliases do not get this behavior.
The use of standard shell aliases provides limited benefit if you only want single-tiered aliases. Additionally, commands persist in the shell since they are evaluated (i.e., changing directories via cd).
There are two methods for adding aliases to your shell profile that are considered standard aliases:
- Use the --alias-only or -a flag when using nostromo add cmd
- Set the aliasesOnly config setting to affect all command additions
nostromo add cmd foo.bar.baz "cd /tmp" --alias-only
nostromo set aliasesOnly true
nostromo add cmd foo.bar.baz "cd /tmp"
Adding a standard alias will produce this line that gets sourced:
alias foo.bar.baz='cd /tmp'
instead of a nostromo command, which adds a shell function:
foo() { eval $(nostromo eval foo "$*") }
Notice how the keypath has no effect in building a command tree when using the alias-only feature. Standard shell aliases can only be root-level commands.
Scope affects a tree of commands such that a parent scope is prepended first and then each command in the keypath to the root. If a command is run as follows:
foo bar baz
then the command associated with foo is concatenated first, then bar, and finally baz. So if these commands were configured like this:
nostromo add cmd foo 'echo oof'
nostromo add cmd foo.bar 'rab'
nostromo add cmd foo.bar.baz 'zab'
then the actual execution would result in:
echo oof rab zab
Standard behavior is to concatenate, but you can easily change this with the mode flag when using add, or globally. More information under Execution Modes.
nostromo also provides the ability to add substitutions at each of these scopes in the command tree. So if you want to shorten otherwise long common strings into substitutions, you can attach them to a parent scope and nostromo will replace them at execution time for all instances.
A substitution can be added with:
nostromo add sub foo.bar //some/long/string sls
Subsequent calls to foo bar would replace the subs before running. This command:
foo bar baz sls
would finally result in the following since the substitution is in scope:
echo oof rab zab //some/long/string
Given features like keypaths and scope you can build a complex set of commands and effectively your own tool that performs additive functionality with each command node.
You can get a quick snapshot of the command tree using:
nostromo show
With nostromo, you can also visualize the command tree (or manifest) in several other ways, including as json, yaml, and a tree itself.
Setting the verbose config setting prints more detailed information as well for all commands.
A command's mode indicates how it will be executed. By default, nostromo concatenates parent and child commands along the tree. There are 3 modes available to commands:
- concatenate: Concatenate this command with subcommands exactly as defined
- independent: Execute this command with subcommands using ';' to separate
- exclusive: Execute this and only this command, ignoring parent commands
The mode can be set when adding a command with the -m or --mode flag:
nostromo add cmd foo.bar.baz -m exclusive "echo baz"
A global setting can also be set to change the mode from the default concatenate
with:
nostromo set mode independent
All subsequent commands would inherit the above mode if set.
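To make the modes concrete, here is a rough sketch of what the earlier foo/bar/baz commands would resolve to under each mode (derived from the definitions above, not verified output):
# concatenate (default): parent and child commands are joined into one line
$ foo bar baz     # -> echo oof rab zab
# independent: each command in the keypath runs separately, joined with ';'
$ foo bar baz     # -> echo oof; rab; zab
# exclusive (set on foo.bar.baz): parent commands are ignored entirely
$ foo bar baz     # -> zab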
nostromo provides completion scripts to allow tab completion. This is added by default to your shell init file:
eval "$(nostromo completion)"
Even your commands added by nostromo get the full red carpet treatment with shell completion. Be sure to add a description and tab completion will show hints at each junction of your command. Cool right!
nostromo provides the ability to supply code snippets in the following languages for execution, in lieu of the standard shell command:
- ruby - runs the ruby interpreter
- python - runs the python interpreter
- js - runs node
- perl - runs the perl interpreter
nostromo add cmd foo --code 'console.log("hello js")' --language js
For more complex snippets you can edit ~/.nostromo/ships/manifest.yaml directly, but multiline YAML must be escaped correctly to work.
nostromo now supports keeping multiple manifest sources, allowing you to organize and distribute your commands as you please. This feature enables synchronization functionality to get remote manifests from multiple data sources.
Details on supported file formats and requirements can be found in the go-getter documentation, as nostromo uses that for downloading files.
Configs can be found in the ~/.nostromo/ships folder. The core manifest is named manifest.yaml.
You can add as many additional manifests in the same folder and nostromo will parse and aggregate all the commands, which is useful for organizations wanting to build their own command suite.
To add or dock manifests, use the following:
nostromo dock <source>...
And that's it! Your commands will now incorporate the new manifest.
To update docked manifests to the latest versions (omit sources to update all manifests), just run:
nostromo sync <name>...
nostromo syncs manifests using version information in the manifest. It will only update if the version identifier is different. To force update a manifest, run:
nostromo sync -f <name>...
If you're tired of someone else's manifest or it just isn't making you happy, then just undock it with:
nostromo undock <name>
Moving and copying command subtrees can be done easily using nostromo as well, to avoid manual copy pasta with yaml. If you want to move command nodes around just use:
nostromo move cmd <source> <destination>
where the source and destination are expected to be key paths like foo.bar.
You can rename a node with:
nostromo rename cmd <source> <name>
Next up, you might want to copy entire nodes around, which can also be done between manifests using copy. Again, use key paths for source and destination, and nostromo will attempt to replicate the branch to the new location.
nostromo copy cmd <source> <destination>
So you've created an awesome suite of commands and you like to share, am I right? Well nostromo makes it super easy to create manifests with any set of your commands from the tree using the detach command. It lets you slice and dice your manifests by extracting a command node into a new manifest.
nostromo detach <name> <key.path>...
By default, this removes the command nodes from the manifest, but they can be kept intact as well with the -k option. Additionally, detaching any command nodes from a docked manifest may have unwanted side effects when running nostromo sync again, since the commands will likely be added back from the original source.
Since nostromo updates manifests if the identifier is unique, there might be times you want to update the yaml files manually for whatever reason. In this case you can run the handy uuidgen command to update the identifier so you can push the manifest to others:
nostromo uuidgen <name>
nostromo now supports themes to make it look even more neat. There are 3 themes currently, which can be set with:
nostromo set theme <name>
where valid themes include:
- default: The basic theme and previous default
- grayscale: Gray colored things are sometimes nice
- emoji: The new default, obviously
Enjoy!
Contributions are what makes the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Create your feature branch (git checkout -b feature/AmazingFeature)
- Commit your changes (git commit -m 'Add some AmazingFeature')
- Push to the branch (git push origin feature/AmazingFeature)
Author: Pokanop
Source Code: https://github.com/pokanop/nostromo
License: MIT license
1652939700
cmd package
A simple package to execute shell commands on linux, darwin and windows.
$ go get -u github.com/commander-cli/cmd@v1.0.0
c := cmd.NewCommand("echo hello")
err := c.Execute()
if err != nil {
panic(err.Error())
}
fmt.Println(c.Stdout())
fmt.Println(c.Stderr())
To configure the command, an option function is passed which receives the command object by reference.
Default option functions:
cmd.WithStandardStreams
cmd.WithCustomStdout(...io.Writers)
cmd.WithCustomStderr(...io.Writers)
cmd.WithTimeout(time.Duration)
cmd.WithoutTimeout
cmd.WithWorkingDir(string)
cmd.WithEnvironmentVariables(cmd.EnvVars)
cmd.WithInheritedEnvironment(cmd.EnvVars)
c := cmd.NewCommand("echo hello", cmd.WithStandardStreams)
c.Execute()
setWorkingDir := func(c *cmd.Command) {
c.WorkingDir = "/tmp/test"
}
c := cmd.NewCommand("pwd", setWorkingDir)
c.Execute()
You can catch output streams to stdout and stderr with cmd.CaptureStandardOut.
// captured is the captured output from all executed source code
// fnResult contains the result of the executed function
captured, fnResult := cmd.CaptureStandardOut(func() interface{} {
c := cmd.NewCommand("echo hello", cmd.WithStandardStreams)
err := c.Execute()
return err
})
// prints "hello"
fmt.Println(captured)
make test
c.Stdout() and c.Stderr()
Author: Commander-cli
Source Code: https://github.com/commander-cli/cmd
License: MIT license
1652773414
direnv is an extension for your shell. It augments existing shells with a new feature that can load and unload environment variables depending on the current directory.
Before each prompt, direnv checks for the existence of a .envrc file (and optionally a .env file) in the current and parent directories. If the file exists (and is authorized), it is loaded into a bash sub-shell and all exported variables are then captured by direnv and made available to the current shell.
It supports hooks for all the common shells like bash, zsh, tcsh and fish. This allows project-specific environment variables without cluttering the ~/.profile file.
Because direnv is compiled into a single static executable, it is fast enough to be unnoticeable on each prompt. It is also language-agnostic and can be used to build solutions similar to rbenv, pyenv and phpenv.
Now restart your shell.
To follow along in your shell once direnv is installed:
# Create a new folder for demo purposes.
$ mkdir ~/my-project
$ cd ~/my-project
# Show that the FOO environment variable is not loaded.
$ echo ${FOO-nope}
nope
# Create a new .envrc. This file is bash code that is going to be loaded by
# direnv.
$ echo export FOO=foo > .envrc
.envrc is not allowed
# The security mechanism didn't allow the .envrc to be loaded. Since we trust it,
# let's allow its execution.
$ direnv allow .
direnv: reloading
direnv: loading .envrc
direnv export: +FOO
# Show that the FOO environment variable is loaded.
$ echo ${FOO-nope}
foo
# Exit the project
$ cd ..
direnv: unloading
# And now FOO is unset again
$ echo ${FOO-nope}
nope
Exporting variables by hand is a bit repetitive so direnv provides a set of utility functions that are made available in the context of the .envrc file.
As an example, the PATH_add function is used to expand and prepend a path to the $PATH environment variable. Instead of export PATH=$PWD/bin:$PATH you can write PATH_add bin. It's shorter and avoids a common mistake where $PATH=bin.
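Putting that together, a minimal .envrc might look like this (the DATABASE_URL value is a made-up example; PATH_add is the stdlib helper described above):
# .envrc
PATH_add bin
export DATABASE_URL=postgres://localhost/myapp_dev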
To find the documentation for all available functions check the direnv-stdlib(1) man page.
It's also possible to create your own extensions by creating a bash file at ~/.config/direnv/direnvrc or ~/.config/direnv/lib/*.sh. This file is loaded before your .envrc and thus allows you to make your own extensions to direnv.
Note that this functionality is not supported in .env files. If the coexistence of both is needed, one can use .envrc for leveraging stdlib and append dotenv at the end of it to instruct direnv to also read the .env file next.
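In other words, something like this should cover both files (a sketch based on the description above):
# .envrc
PATH_add bin
# tell direnv to also load the sibling .env file
dotenv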
Make sure to take a look at the wiki! It contains all sorts of useful information such as common recipes, editor integration, tips-and-tricks.
Based on GitHub issues interactions, here are the top things that have been confusing for users:
direnv has a standard library of functions, a collection of utilities that I found useful to have and accumulated over the years. You can find it here: https://github.com/direnv/direnv/blob/master/stdlib.sh
It's possible to override the stdlib with your own set of functions by adding a bash file to ~/.config/direnv/direnvrc. This file is loaded and its content made available to any .envrc file.
direnv is not loading the .envrc into the current shell. It's creating a new bash sub-process to load the stdlib, direnvrc and .envrc, and only exports the environment diff back to the original shell. This allows direnv to record the environment changes accurately and also work with all sorts of shells. It also means that aliases and functions are not exportable right now.
Bug reports, contributions and forks are welcome. All bugs or other forms of discussion happen on http://github.com/direnv/direnv/issues .
Or drop by on Matrix to have a chat. If you ask a question make sure to stay around as not everyone is active all day.
Here is a list of projects you might want to look into if you are using direnv.
- A use_nix implementation
Here is a list of other projects found in the same design space. Feel free to submit new ones.
Download Details:
Author: direnv
Source Code: https://github.com/direnv/direnv
License: MIT
#python #shell #bash
1652493060
This is the ULTIMATE Guide to Managing your Linux History! Let's learn about the Bash shell and Linux history, how to check your command history using Ubuntu or other Linux distros, and how to reuse Linux commands.
History files are saved per user, so each bash shell user has their own history file located in their home directory. This is a hidden file (.bash_history), which is why its name starts with a dot. Access the last set of commands used by using the history command and review your entire history by looking at the stored history file.
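For a quick feel of the topics covered, here are a few standard bash history commands (these are general bash features, not specific claims about the video's content):
$ history | tail -n 5     # show the last few commands from this session
$ !!                      # reissue the previous command
$ !42                     # reissue history entry number 42
$ history -d 42           # delete history entry number 42
$ history -c              # clear the in-memory history for this session
$ cat ~/.bash_history     # review the stored history file directly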
00:00 Linux History
01:54 Reissue History
02:45 Search History
03:43 Delete History Lines
04:17 History Shortcut
05:12 History File
07:01 History Config
08:26 Customizing Config
09:42 Clearing History
1652332260
sh
A shell parser, formatter, and interpreter. Supports POSIX Shell, Bash, and mksh. Requires Go 1.17 or later.
To parse shell scripts, inspect them, and print them out, see the syntax examples.
For high-level operations like performing shell expansions on strings, see the shell examples.
go install mvdan.cc/sh/v3/cmd/shfmt@latest
shfmt formats shell programs. See canonical.sh for a quick look at its default style. For example:
shfmt -l -w script.sh
For more information, see its manpage, which can be viewed directly as Markdown or rendered with scdoc.
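For a rough illustration of the default style (a hand-written sketch; actual output may differ slightly), piping an unformatted one-liner through shfmt rewrites it with tab indentation and one statement per line:
$ echo 'if [ -z "$1" ];then echo "usage";exit 1;fi' | shfmt
if [ -z "$1" ]; then
	echo "usage"
	exit 1
fi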
Packages are available on Alpine, Arch, Debian, Docker, FreeBSD, Homebrew, MacPorts, NixOS, Scoop, Snapcraft, Void and webi.
go install mvdan.cc/sh/v3/cmd/gosh@latest
Proof of concept shell that uses interp. Note that it's not meant to replace a POSIX shell at the moment, and its options are intentionally minimalistic.
We use Go's native fuzzing support, which requires Go 1.18 or later. For instance:
cd syntax
go test -run=- -fuzz=ParsePrint
$ echo '${array[spaced string]}' | shfmt
1:16: not a valid arithmetic operator: string
$ echo '${array[dash-string]}' | shfmt
${array[dash - string]}
$(( and (( ambiguity is not supported. Backtracking would complicate the parser and make streaming support via io.Reader impossible. The POSIX spec recommends to space the operands if $( ( is meant.
$ echo '$((foo); (bar))' | shfmt
1:1: reached ) without matching $(( with ))
export and let are parsed as keywords. This allows statically building their syntax tree, as opposed to keeping the arguments as a slice of words. It is also required to support declare foo=(bar). Note that this means expansions like declare {a,b}=c are not supported.
A subset of the Go packages are available as an npm package called mvdan-sh. See the _js directory for more information.
To build a Docker image, check out a specific version of the repository and run:
docker build -t my:tag -f cmd/shfmt/Dockerfile .
This creates an image that only includes shfmt. Alternatively, if you want an image that includes alpine, add --target alpine
. To use the Docker image, run:
docker run --rm -u "$(id -u):$(id -g)" -v "$PWD:/mnt" -w /mnt my:tag <shfmt arguments>
The following editor integrations wrap shfmt:
- shell script plugin
Other noteworthy integrations include:
Author: Mvdan
Source Code: https://github.com/mvdan/sh
License: BSD-3-Clause license