Oral Brekke

Kubectl Completion for Fish Shell

kubectl completion for fish shell

Install

$ mkdir -p ~/.config/fish/completions
$ cd ~/.config/fish
$ git clone https://github.com/evanlucas/fish-kubectl-completions
$ ln -s ../fish-kubectl-completions/completions/kubectl.fish completions/

Install using Fisher

fisher install evanlucas/fish-kubectl-completions

Building

This was tested using go 1.15.7 on macOS 11.1 "Big Sur".

$ make build

Environment Variables

FISH_KUBECTL_COMPLETION_TIMEOUT

This is used to pass the --request-timeout flag to the kubectl command. It defaults to 5s.

Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means requests never time out.

FISH_KUBECTL_COMPLETION_COMPLETE_CRDS

This can be used to prevent completion of CRDs, which can help users with limited access to cluster resources. It defaults to 1. To disable CRD completion, set it to anything other than 1.
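
For example, here is a minimal sketch of how you could set both variables from your fish config (~/.config/fish/config.fish); the values shown are only illustrations:

# give slow clusters more time before completion requests give up (example value)
set -gx FISH_KUBECTL_COMPLETION_TIMEOUT 10s
# disable CRD completion (any value other than 1 disables it)
set -gx FISH_KUBECTL_COMPLETION_COMPLETE_CRDS 0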

Author: Evanlucas
Source Code: https://github.com/evanlucas/fish-kubectl-completions 
License: MIT license

#node #kubernetes #shell 

Thomas Granger

The Command Line (CLI, Console, Terminal or Shell) Explained

What is the Command Line? (Linux Zero to Hero)

A beginner-friendly first look at the Linux command line. We'll cover some fundamentals about what it is, how it works, man pages, basic commands, directory navigation, file manipulation, and more.


Command Line for Beginners – How to Use the Terminal Like a Pro

In this article we'll take a good look at the command line (also known as the CLI, console, terminal or shell).

The command line is one of the most useful and efficient tools we have as developers and as computer users in general. But using it can feel a bit overwhelming and complex when you're starting out.

In this article I'll try my best to simply explain the parts that make up the command line interface, and the basics of how it works, so you can start using it for your daily tasks.

Table of Contents

  • Difference between console, terminal, command line (CLI) and Shell
    • Console
    • Terminal
    • Shell
    • Command line (CLI)
  • Why should I even care about using the terminal?
  • Different kinds of shells
    • A bit of history - Posix
    • How do I know what shell I'm running?
    • What shell is better?
      • A comment about customization
  • Most common and useful commands to use
    • Git commands
  • Our first script
  • Round up

Difference between console, command line (CLI), terminal and Shell

I think a good place to start is to know exactly what the command line is.

When referring to this, you may have heard the terms terminal, console, command line, CLI, and shell. People often use these words interchangeably, but the truth is they're actually different things.

Differentiating each one isn't necessarily crucial knowledge to have, but it will help clarify things. So let's briefly explain each one.

Console:

The console is the physical device that allows you to interact with the computer.

In plain English, it's your computer screen, keyboard, and mouse. As a user, you interact with your computer through your console.

Terminal:

A terminal is a text input and output environment. It is a program that acts as a wrapper and allows us to enter commands that the computer processes.

In plain English again, it's the "window" in which you enter the actual commands your computer will process.

Keep in mind the terminal is a program, just like any other. And like any program, you can install it and uninstall it as you please. It's also possible to have many terminals installed on your computer and run whichever you want, whenever you want.

All operating systems come with a default terminal installed, but there are many options out there to choose from, each with its own functionalities and features.

Shell:

A shell is a program that acts as a command-line interpreter: it interprets the commands entered by the user, processes them, and outputs the results.

Same as the terminal, the shell is a program that comes by default in all operating systems, but can also be installed and uninstalled by the user.

Different shells come with different syntax and characteristics as well. It's also possible to have many shells installed on your computer and run each one whenever you want.

In most Linux distributions the default shell is Bash (recent versions of macOS default to Zsh), while on Windows it's PowerShell. Some other common examples of shells are Zsh and Fish.

Shells also work as programming languages, in the sense that we can use them to build scripts that make our computer execute a certain task. Scripts are nothing more than a series of instructions (commands) that we can save in a file and later execute whenever we want.

We'll take a look at scripts later on in this article. For now just keep in mind that the shell is the program your computer uses to "understand" and execute your commands, and that you can also use it to program tasks.

Also keep in mind that the terminal is the program in which the shell will run, but the two programs are independent. That means I can run any shell in any terminal; there's no dependence between them in that sense.

Command line or CLI (command line interface):

The CLI is the interface in which we enter commands for the computer to process. In plain English once again, it's the space in which you enter the commands the computer will process.

This is practically the same as the terminal and in my opinion these terms can be used interchangeably.

One interesting thing to mention here is that most operating systems have two different types of interfaces:

  • The CLI, which takes commands as inputs in order for the computer to execute tasks.
  • The other is the GUI (graphical user interface), in which the user can see things on the screen and click on them and the computer will respond to those events by executing the corresponding task.

Why should I even care about using the terminal?

We just mentioned that most operating systems come with a GUI. So if we can see things on the screen and click around to do whatever we want, you might wonder why you should learn this complicated terminal/cli/shell thing?

The first reason is that for many tasks, it's just more efficient. We'll see some examples in a second, but there are many tasks where a GUI would require many clicks around different windows. But on the CLI these tasks can be executed with a single command.

In this sense, being comfortable with the command line will help you save time and be able to execute your tasks quicker.

The second reason is that by using commands you can easily automate tasks. As previously mentioned, we can build scripts with our shell and later on execute those scripts whenever we want. This is incredibly useful when dealing with repetitive tasks that we don't want to do over and over again.

Just to give some examples, we could build a script that creates a new online repo for us, or that creates a certain infrastructure on a cloud provider for us, or that executes a simpler task like changing our screen wallpaper every hour.

Scripting is a great way to save time on repetitive tasks.

The third reason is that sometimes the CLI will be the only way in which we'll be able to interact with a computer. Take, for example, the case when you would need to interact with a cloud platform server. In most of these cases, you won't have a GUI available, just a CLI to run commands in.

So being comfortable with the CLI will allow you to interact with computers in all situations.

The last reason is it looks cool and it's fun. You don't see movie hackers clicking around their computers, right? ;)

Different kinds of shells

Before diving into the actual commands you can run in your terminal, I think it's important to recognize the different types of shells out there and how to identify which shell you're currently running.

Different shells come with different syntax and different features, so to know exactly what command to enter, you first need to know what shell you're running.

A bit of history – Posix

For shells, there's a common standard called Posix.

Posix works for shells in a very similar way that ECMAScript works for JavaScript. It's a standard that dictates certain characteristics and features that all shells should comply with.

This standard was established in the 1980s and most current shells were developed according to that standard. That's why most shells share similar syntax and similar features.

How do I know what shell I'm running?

To know what shell you're currently running, just open your terminal and enter echo $0. This will print the current running program name, which in this case is the actual shell.
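
A couple of related checks can help confirm this; this is just a small sketch and the exact output will vary from system to system:

echo $0       // usually prints the running shell, e.g. bash
echo $SHELL   // prints your default login shell, e.g. /bin/bash
ps -p $$      // shows the process behind your current session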

What shell is better?

There's not A LOT of difference between most shells. Since most of them comply with the same standard, you'll find that most of them work similarly.

There are some slight differences you might want to know, though:

  • As mentioned, Bash is the most widely used and comes installed by default on Mac and Linux.
  • Zsh is very similar to Bash, but it was created after it and comes with some nice improvements over it. If you'd like to have more detail about its differences, here's a cool article about it.
  • Fish is another commonly used shell that comes with some nice built-in features and configurations such as autocompletion and syntax highlighting. The thing about Fish is that it's not Posix compliant, while Bash and Zsh are. This means that some of the commands you'll be able to run on Bash and Zsh won't run on Fish and vice versa. This makes Fish scripting less compatible with most computers compared to Bash and Zsh.
  • There are also other shells like Ash or Dash (the naming just makes everything more confusing, I know...) that are stripped-down versions of Posix shells. This means they only offer the features required by Posix and nothing else, while Bash and Zsh add more features than Posix requires.

The fact that shells add more features makes them easier and friendlier to interact with, but slower to execute scripts and commands.

So a common practice is to use these "enhanced" shells like Bash or Zsh for general interaction, and a "stripped-down" shell like Ash or Dash to execute scripts.

When we get to scripting later on, we'll see how we can define what shell will execute a given script.

If you're interested in a more detailed comparison between these shells, here's a video that explains it really well:

If I had to recommend a shell, I would recommend Bash, as it's the most standard and commonly used one. This means you'll be able to translate your knowledge into most environments.

But again, truth is there's not A LOT of difference between most shells. So in any case you can try a few and see which one you like best. ;)

A comment about customization

I just mentioned that Fish comes with built-in configuration such as autocompletion and syntax highlighting. These come built in with Fish, but in Bash or Zsh you can configure these features too.

The point is that shells are customizable. You can edit how the program works, what commands you have available, what information your prompt shows, and more.

We won't see customization options in detail here, but know that when you install a shell in your computer, certain files will be created on your system. Later on you can edit those files to customize your program.

Also, there are many plugins available online that allow you to customize your shell in an easier way. You just install them and get the features that plugin offers. Some examples are OhMyZsh and Starship.

These customization options are also true for Terminals.

So not only do you have many shell and terminal options to choose from – you also have many configuration options for each shell and terminal.

If you're starting out, all this information can feel a bit overwhelming. But just know that there are many options available, and each option can be customized too. That's it.

Most common and useful commands to use

Now that we have a foundation of how the CLI works, let's dive into the most useful commands you can start to use for your daily tasks.

Keep in mind that these examples will be based on my current configuration (Bash on a Linux OS). But most commands should apply to most configurations anyway.

  • Echo prints in the terminal whatever parameter we pass it.
echo Hello freeCodeCamp! // Output: Hello freeCodeCamp!
  • pwd stands for print working directory and it prints the "place" or directory we are currently at in the computer.
pwd // Output: /home/German
  • ls presents you with the contents of the directory you're currently in, showing both the files and the other directories it contains.

For example, here I'm on a React project directory I've been working on lately:

ls // Output:
node_modules  package.json  package-lock.json  public  README.md  src

If you pass this command the flag or parameter -a, it will also show you hidden files and directories, like .git or .gitignore.

ls -a // Output:
.   .env  .gitignore    package.json       public     src
..  .git  node_modules  package-lock.json  README.md
  • cd is short for Change directory and it will take you from your current directory to another.

While on my home directory, I can enter cd Desktop and it will take me to the Desktop Directory.

If I want to go up one directory, meaning go to the directory that contains the current directory, I can enter cd ..

If you enter cd alone, it will take you straight to your home directory.

  • mkdir stands for make directory and it will create a new directory for you. You have to pass the command the directory name parameter.

If I wanted to create a new directory called "Test" I would enter mkdir test.

  • rmdir stands for remove directory and it does just that. It needs the directory name parameter just as mkdir does: rmdir test.

  • touch allows you to create an empty file in your current directory. As a parameter it takes the file name, like touch test.txt.

  • rm allows you to delete files, in the same way rmdir allows you to remove directories.
rm test.txt

  • cp allows you to copy files or directories. This command takes two parameters: the first one is the file or directory you want to copy, and the second one is the destination of your copy (where you want to copy your file or directory to).

If I want to make a copy of my txt file in the same directory, I can enter the following:

cp test.txt testCopy.txt

Notice that the directory doesn't change: as the "destination" I just enter the new name of the file.

If I wanted to copy the file into a different directory but keep the same file name, I can enter this:

cp test.txt ./testFolder/

And if I wanted to copy it to a different folder and change the file name, I can enter this:

cp test.txt ./testFolder/testCopy.txt
  • mv is short for move, and lets us move a file or directory from one place to another. That is, create it in a new directory and delete it in the previous one (same as you could do by cutting and pasting).

Again, this command takes two parameters: the file or directory we want to move and the destination.

mv test.txt ./testFolder/

We can change the name of the file too in the same command if we want to:

mv test.txt ./testFolder/testCopy.txt
  • head allows you to view the beginning of a file or piped data directly from the terminal.
head test.txt // Output:
this is the beginning of my test file
  • tail works the same but it will show you the end of the file.
tail test.txt // Output:

this is the end of my test file
  • The --help flag can be used on most commands and it will return info on how to use that given command.
cd --help // output:
cd: cd [-L|[-P [-e]] [-@]] [dir]
Change the shell working directory.

Change the current directory to DIR. The default DIR is the value of the HOME shell variable.

The variable CDPATH defines the search path for the directory containing DIR. Alternative directory names in CDPATH are separated by a colon :.

A null directory name is the same as the current directory if DIR begins with ....

  • In a similar way, the man command will return info about any particular command.
    man cp // output:

    CP(1)                            User Commands                           CP(1)

    NAME
           cp - copy files and directories

    SYNOPSIS
           cp [OPTION]... [-T] SOURCE DEST
           cp [OPTION]... SOURCE... DIRECTORY
           cp [OPTION]... -t DIRECTORY SOURCE...

    DESCRIPTION
           Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY.

           Mandatory  arguments  to  long  options are mandatory for short options
           too.

           -a, --archive
                  same as -dR --preserve=all

           --attributes-only
                  don't copy the file data, just the attributes
    ...

You can even enter man bash and that will return a huge manual about everything there is to know about this shell. ;)

  • code will open your default code editor. If you enter the command alone, it just opens the editor with the latest file/directory you opened.

You can also open a given file by passing it as parameter: code test.txt.

Or open a new file by passing the new file name: code thisIsAJsFile.js.

  • edit will open text files on your default command line text editor (which if you're on Mac or Linux will likely be either Nano or Vim).

If you open your file and then can't exit your editor, first look at this meme:

![vimExit](https://www.freecodecamp.org/news/content/images/2022/03/vimExit.png)

And then type :q! and hit enter.

The meme is funny because everyone struggles with CLI text editors at first, as most actions (like exiting the editor) are done with keyboard shortcuts. Using these editors is a whole other topic, so go look for tutorials if you're interested in learning more. ;)

  • ctrl+c allows you to terminate the current process the terminal is running. For example, if you're creating a React app with npx create-react-app and want to cancel the build at some point, just hit ctrl+c and it will stop.

  • Copying text from the terminal can be done with ctrl+shift+c and pasting can be done with ctrl+shift+v.

  • clear will clear your terminal of all previous content.

  • exit will close your terminal, and (this is not a command but it's cool too) ctrl+alt+t will open a new terminal for you.

  • By pressing the up and down keys you can navigate through the previous commands you entered.

  • By hitting tab you will get autocompletion based on the text you've written so far, and by hitting tab twice you'll get suggestions based on that text.

For example, if I write edit test and hit tab twice, I get testFolder/ test.txt. If I write edit test. and hit tab, my text autocompletes to edit test.txt.

Git commands

Besides working with the file system and installing/uninstalling things, interacting with Git and online repos is probably the most common thing you're going to use the terminal for as a developer.

It's a whole lot more efficient to do it from the terminal than by clicking around, so let's take a look at the most useful git commands out there.

  • git init will create a new local repository for you.
git init // output:
Initialized empty Git repository in /home/German/Desktop/testFolder/.git/

  • git add adds one or more files to staging. You can either specify a particular file to add to staging or add all changed files by typing git add .

  • git commit commits your changes to the repository. Commits must always be accompanied by the -m flag and a commit message.

git commit -m 'This is a test commit' // output:
[master (root-commit) 6101dfe] This is a test commit
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 test.js
  • git status tells you what branch you're currently on and whether you have changes to commit or not.
git status  // output:
On branch master
nothing to commit, working tree clean
  • git clone allows you to clone (copy) a repository into the directory you're currently in. Keep in mind you can clone both remote repositories (in GitHub, GitLab, and so on) and local repositories (those that are stored in your computer).
git clone https://github.com/coccagerman/MazeGenerator.git // output:
Cloning into 'MazeGenerator'...
remote: Enumerating objects: 15, done.
remote: Counting objects: 100% (15/15), done.
remote: Compressing objects: 100% (15/15), done.
remote: Total 15 (delta 1), reused 11 (delta 0), pack-reused 0
Unpacking objects: 100% (15/15), done.
  • git remote add origin is used to specify the URL of the remote repository you're going to use for your project. In case you'd like to change it at some point, you can do so with the command git remote set-url origin.
git remote add origin https://github.com/coccagerman/testRepo.git

Keep in mind you need to create your remote repo first in order to get its URL. We'll see how you can do this from the command line with a little script later on. ;)

  • git remote -v lets you list the current remote repository you're using.
git remote -v // output:
origin	https://github.com/coccagerman/testRepo.git (fetch)
origin	https://github.com/coccagerman/testRepo.git (push)
  • git push uploads your commited changes to your remote repo.
git push // output:
Counting objects: 2, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 266 bytes | 266.00 KiB/s, done.
Total 2 (delta 0), reused 0 (delta 0)
  • git branch lists all the available branches on your repo and tells you what branch you're currently on. If you want to create a new branch, you just have to add the new branch name as parameter like git branch <branch name>.
git branch // output:
* main
  • git checkout moves you from one branch to another. It takes your destination branch as a parameter.
git checkout newBranch // output:
Switched to branch 'newBranch'
  • git pull pulls (downloads) the code from your remote repository and combines it with your local repo. This is particularly useful when working in teams, when many developers are working on the same code base. In this case each developer periodically pulls from the remote repo in order to work in a code base that includes the changes done by all the other devs.

If there's new code in your remote repo, the command will return the actual files that were modified in the pull. If not, we get Already up to date.

git pull // output:
Already up to date.
  • git diff allows you to view the differences between the branch you're currently in and another.
git diff newBranch // output:
diff --git a/newFileInNewBranch.js b/newFileInNewBranch.js
deleted file mode 100644
index e69de29..0000000

As a side comment, when comparing differences between branches or repos, visual tools like Meld are usually used. It's not that you can't visualize the differences directly in the terminal, but these tools are great for a clearer view.

  • git merge merges (combines) the branch you're currently in with another. Keep in mind the changes will be incorporated only to the branch you're currently in, not to the other one.
git merge newBranch // output:
Updating f15cf51..3a3d62f
Fast-forward
 newFileInNewBranch.js | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 newFileInNewBranch.js
  • git log lists all previous commits you've done in the repo.
git log // output:
commit 3a3d62fe7cea7c09403c048e971a5172459d0948 (HEAD -> main, tag: TestTag, origin/main, newBranch)
Author: German Cocca <german.cocca@avature.net>
Date:   Fri Apr 1 18:48:20 2022 -0300

    Added new file

commit f15cf515dd3ec398210108dce092debf26ff9e12
Author: German Cocca <german.cocca@avature.net>
    ...
  • The --help flag will show you information about a given command, exactly the same way it works with bash.
git diff --help // output:
GIT-DIFF(1)                       Git Manual                       GIT-DIFF(1)

NAME
       git-diff - Show changes between commits, commit and working tree, etc

SYNOPSIS
       git diff [options] [<commit>] [--] [<path>...]
       git diff [options] --cached [<commit>] [--] [<path>...]
       ...

Our first script

Now we're ready to get to the truly fun and awesome part of the command line, scripting!

As I mentioned previously, a script is nothing more than a series of commands or instructions that we can execute at any given time. To explain how we can code one, we'll use a simple example that will allow us to create a GitHub repo by running a single command. ;)

The first thing to do is create a .sh file. You can put it wherever you want. I called mine newGhRepo.sh.

Then open it on your text/code editor of choice.

On our first line, we'll write the following: #! /bin/sh

This is called a shebang, and its function is to declare what shell is going to run this script.

Remember previously when we mentioned that we can use a given shell for general interaction and another given shell for executing a script? Well, the shebang is the instruction that dictates what shell runs the script.

As mentioned too, we're using a "stripped down" shell (also known as an sh shell) to run the scripts, as they're more efficient (though the difference might be unnoticeable, to be honest; it's just a personal preference). On my computer I have dash as my sh shell.

If we wanted this script to run with bash the shebang would be #! /bin/bash

  • Our next line will be repoName=$1

Here we're declaring a variable called repoName, and assigning it to the value of the first parameter the script receives.

A parameter is a set of characters that is entered after the script/command. For example, with the cd command, we need to specify a directory parameter in order to change directory (i.e. cd testFolder).

A way we can identify parameters within a script is by using a dollar sign and the order in which that parameter is expected.

If I'm expecting more than one parameter I could write:

paramOne=$1
paramTwo=$2
paramThree=$3
...
  • So we're expecting the repository name as parameter of our script. But what happens if the user forgets to enter it? We need to plan for that so next we're going to code a conditional that keeps asking the user to enter the repo name until that parameter is received.

We can do that like this:

while [ -z "$repoName" ]
do
   echo 'Provide a repository name'
   read -r -p $'Repository name:' repoName
done

What we're doing here is:

  1. While the repoName variable is not assigned (while [ -z "$repoName" ])
  2. Write to the console this message (echo 'Provide a repository name')
  3. Then read whatever input the user provides and assign the input to the repoName variable (read -r -p $'Repository name:' repoName)
  • Now that we have our repo name in place, we can create our local Git repo like this:
echo "# $repoName" >> README.md
git init
git add .
git commit -m "First commit"

This creates a readme file and writes a single line with the repo name (echo "# $repoName" >> README.md), and then initializes the git repo and makes a first commit.

  • Then it's time to upload our repo to github. To do that we're going to take advantage of the github API in the following command:

curl -u coccagerman https://api.github.com/user/repos -d '{"name": "'"$repoName"'", "private":false}'

curl is a command to transfer data from or to a server, using one of the many supported protocols.

Next we're using the -u flag to declare the user we're creating the repo for (-u coccagerman).

Next comes the endpoint provided by the GitHub API (https://api.github.com/user/repos)

And lastly we're using the -d flag to pass parameters to this command. In this case we're indicating the repository name (for which we're using our repoName variable) and setting the private option to false, since we want our repo to be public.

Lots of other config options are available in the API, so check the docs for more info.

  • After running this command, GitHub will prompt us to enter our private token for authentication.

If you don't have a private token yet, you can generate it in GitHub in Settings > Developer settings > Personal access tokens

  • Cool, we're almost done! What we need now is the remote URL of our newly created GitHub repo.

To get that we're going to use curl and the GitHub API again, like this:

GIT_URL=$(curl -H "Accept: application/vnd.github.v3+json" https://api.github.com/repos/coccagerman/"$repoName" | jq -r '.clone_url')

Here we're declaring a variable called GIT_URL and assigning it to whatever the following command returns.

The -H flag sets the header of our request.

Then we pass the GitHub API endpoint, which should contain our user name and repo name (https://api.github.com/repos/coccagerman/"$repoName").

Then we're piping the return value of our request. Piping just means passing the return value of a process as the input value of another process. We can do it with the | symbol like <process1> | <process2>.

And finally we run the jq command, which is a tool for processing JSON inputs. Here we tell it to get the value of .clone_url which is where our remote git URL will be according to the data format provided by the GitHub API.
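
Just to illustrate the idea in isolation, here's a tiny sketch of piping a JSON string into jq (the URL is made up):

echo '{"clone_url": "https://github.com/someUser/someRepo.git"}' | jq -r '.clone_url'
// Output: https://github.com/someUser/someRepo.git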

  • And as last step, we rename our master branch to main, add the remote origin we just obtained, and push our code to GitHub! =D
git branch -M main
git remote add origin $GIT_URL
git push -u origin main

Our full script should look something like this:

#! /bin/sh
repoName=$1

while [ -z "$repoName" ]
do
    echo 'Provide a repository name'
    read -r -p $'Repository name:' repoName
done

echo "# $repoName" >> README.md
git init
git add .
git commit -m "First commit"

curl -u <yourUserName> https://api.github.com/user/repos -d '{"name": "'"$repoName"'", "private":false}'

GIT_URL=$(curl -H "Accept: application/vnd.github.v3+json" https://api.github.com/repos/<yourUserName>/"$repoName" | jq -r '.clone_url')

git branch -M main
git remote add origin $GIT_URL
git push -u origin main
  • Now it's time to test our script! To execute it, there are two things we can do.

One option is to enter the shell name and pass the file as parameter, like: dash ../ger/code/projects/scripts/newGhRepo.sh.

And the other is to make the file executable by running chmod u+x ../ger/code/projects/scripts/newGhRepo.sh.

Then you can just execute the file directly by running ../ger/code/projects/scripts/newGhRepo.sh.

And that's it! We have our script up and running. Every time we need a new repo, we can just execute this script from whatever directory we're in.

But there's something a bit annoying about this: we need to remember the exact path of the script. Wouldn't it be cool to execute the script with a single command that is always the same, no matter what directory we're in?

In come bash aliases to solve our problem.

Aliases are a way bash provides to give names to the exact commands we want to run.

To create a new alias, we need to edit the bash configuration files in our system. These files are normally located in the home directory. Aliases can be defined in different files (mainly .bashrc or .bash_aliases).

I have a .bash_aliases file on my system, so let's edit that.

In our CLI we enter cd to go to our home directory.

Then we can enter ls -a to list all files (including hidden ones) and check whether we have either a .bashrc or .bash_aliases file in our system.

We open the file with our text/code editor of choice.

And we write our new alias like this:
alias newghrepo="dash /home/German/Desktop/ger/code/projects/scripts/newGhRepo.sh"

Here I'm declaring the alias name, which is the actual command I'm going to enter to run the script (newghrepo).

And between the quotes, I define what that alias is going to do ("dash /home/German/Desktop/ger/code/projects/scripts/newGhRepo.sh").

See that I'm passing the absolute path of the script, so that this command works the same no matter what my current directory is.

If you don't know what the absolute path of your script is, go to the script directory on your terminal and enter readlink -f newGhRepo.sh. That should return the full path for you. ;)

  • After we're done editing, we save our file, restart our terminal, and voilà! Now we can run our script by just entering newghrepo, no matter what directory we're currently in. Much quicker than opening the browser and clicking around to create our repo! =D

I hope this gives you a little taste of the kind of optimizations that are possible with scripting. It certainly requires a bit more work the first time you write, test, and set up the script. But after that, you'll never have to perform that task manually again. ;)

Round up

The terminal can feel like an intimidating and intricate place when you're starting out. But it's certainly worth it to put time and effort into learning the ins and outs of it. The efficiency benefits are too good to pass up!

Original article source at https://www.freecodecamp.org

#linux #commandline #cli #console #terminal #shell

How to Monitor A Node for Casper Network

Casper-Monitoring

Example utilizing Prometheus and Grafana to monitor the casper-node.
NOTE: This is just an example and is not recommended for production usage.

Dependencies

Requires:

Setup

This will add the supplied IP address and port as a target to Prometheus,
i.e. ./setup_casper_monitoring.sh <node_ip> <node_port>
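
For example (the address and port below are placeholders; use your node's IP and the port its metrics are exposed on):

./setup_casper_monitoring.sh 127.0.0.1 8888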

Run

Start Containers

docker-compose up -d

Teardown Containers

docker-compose down

View

Note: Assumes you are browsing from the box running the containers. Otherwise, update localhost to an IP address.

  • Grafana: http://localhost:3000
    • Default Username and Password: admin, admin
  • Prometheus: http://localhost:9090

Docs

Notes

There is an example of utilizing node-exporter for scraping host-level metrics. This requires node-exporter to be set up on the target host. By default it is set up on port 9100. The Grafana dashboard will report No data for some panels if it is not used. See: https://github.com/prometheus/node_exporter for more details.

To view all metrics associated with a job, browse to http://localhost:9090/graph and search 
for the job in question.

  • i.e: {job="node"}
  • i.e: {job="node-exporter"}

Dashboard can be updated by modifying ./monitoring/grafana/dashboards/casper-dashboard.json 
and restarting the containers.
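
A minimal sketch of that workflow:

# edit the dashboard definition, then recreate the containers
vi ./monitoring/grafana/dashboards/casper-dashboard.json
docker-compose down
docker-compose up -d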

Download Details:
Author: casper-network
Source Code: https://github.com/casper-network/casper-monitoring
License:

#casper #blockchain #smartcontract #shell 

A Protocol Release for A Casper Network

casper-protocol-release

A protocol release for a casper network is a combination of a casper-node binary release and network configuration. The current scripts in casper-node-launcher packages expect these to be packaged as .tar.gz files.

In the file system a protocol version is represented with underscores _ instead of periods ..

For example: protocol version 1.0.0 will use 1_0_0 for file paths.
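
If you need to derive the directory name from a version string in a script, a one-liner like this (just a sketch) does the conversion:

echo "1.0.0" | tr '.' '_'
# prints 1_0_0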

Network branches

Branches on the repo are associated with the networks the protocol releases target. This has the disadvantage of requiring CI changes on main to be merged into all network branches.

Public networks: casper and casper-test

Tagging for release

Tags for a given branch should be prefixed by the branch name.

For example:

  • Tags on the casper branch should look like casper-1.0.0 for protocol 1.0.0.
  • Tags on the casper-test branch should look like casper-test-1.1.2 for protocol 1.1.2.

Expected hosting format

When a node is set up, the casper-node-launcher package is installed. This contains configuration files for the casper and casper-test networks. They are installed in /etc/casper/network_configs.

Information about these files can be found in the casper-node-launcher repo.

The two values of interest are SOURCE_URL and NETWORK_NAME. These are the URL of the web server hosting the files and the path to that network's specific files. For example, MainNet is genesis.casperlabs.io/casper.

Within this location, there will be a protocol_versions file. This is plain text with one protocol version per line with underscore format. Such as:

1_0_0
1_1_2
1_2_1
1_3_0

An entry should exist for every protocol version that is needed to currently sync to the network. For each entry, a directory should exist with the same underscore protocol name. This will hold bin.tar.gz and config.tar.gz. As more host systems are supported, we may expand to include bin_rpm.tar.gz and others.

While not present on casper or casper-test hosting (because those network configs are included with casper-node-launcher), a different network should offer the conf file for that network in the same location as protocol_versions.

The integration-test network has an integration-test.conf hosted in the root of its staging directory. This would be pulled down directly into the network_configs directory of the node, so it could be used with commands.

cd /etc/casper/network_configs
sudo -u casper curl -JLO [url]/[network_name]/[network_name].conf

Loading all protocols for a given network is simply sudo -u casper /etc/casper/node_util.py stage_protocol [conf filename].

To finish off our example, we will list the full directory tree of our example-test network.

example-test/
  example-test.conf
  protocol_versions
  1_0_0/
     bin.tar.gz
     config.tar.gz
  1_1_2/
     bin.tar.gz
     config.tar.gz
  1_2_1/
     bin.tar.gz
     config.tar.gz
  1_3_0/
     bin.tar.gz
     config.tar.gz

config.tar.gz

This group of files should be installed on the server in /etc/casper/[protocol_version]/. This is done as part of /etc/casper/node_util.py stage_protocols distributed with casper-node-launcher package.

These are the system-agnostic configuration files for a protocol release. The starting protocol version of 1.0.0 requires accounts.toml to initialize accounts at genesis. This file should not exist for any version past the genesis 1.0.0 protocol.

chainspec.toml is the configuration for the network. This must be the same for all nodes on the network to continue with consensus. The activation point for genesis is a timestamp; otherwise it is an Era ID. The protocol version must match the version used for the staging directory.

config-example.toml is the default configuration with a location to drop in the node's IP address to create a config.toml file on the server. This is done automatically with the node_util.py script distributed with casper-node-launcher packages. The big change needed for a new network is the known_address list, which should contain some or all of the genesis node IPs.

Other files may be included with config.tar.gz as needed for upgrade or additional functionality of the system. For example: global_state.toml can be used at an upgrade to modify something in global state.

Manually creating config.tar.gz

While config.tar.gz is built via tagging on this repo in CI, it can also be built manually for another network. It should be archived without a directory structure.

mkdir config
cp [path of]/config.example.toml ./config
cp [path of]/chainspec.toml ./config
cd config
tar -czvf ../config.tar.gz .

This file would be hosted in the [url]/[network_name]/[underscore protocol version] directory of the staging location, where the protocol version matches that defined in the chainspec.toml file.
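
To double-check that the archive really was built without a directory prefix, you can list its contents before uploading; a quick sketch:

# entries should appear at the top level (e.g. ./chainspec.toml), not under a config/ directory
tar -tzf config.tar.gz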

bin.tar.gz

The bin.tar.gz package holds a casper-node binary compiled for Ubuntu 18.04.

In addition to the appropriate casper-node binary, a README.md file is included which identifies both the platform targeted and the GitHub source for compilation.

Note: Despite the similarities between binary versions and protocol versions on the MainNet casper network, the protocol version has no correlation to the casper-node binary version. If a new network were created, its 1.0.0 protocol should most likely use the latest viable binary version.

This file is created as part of a casper-node release and generally can be pulled directly from there for hosting.

For example: https://github.com/casper-network/casper-node/releases/tag/v1.4.5 holds bin.tar.gz as a release artifact. This would be pulled down and hosted for a network protocol.

To manually package this you could minimally:

mkdir bin
cp [path of]/casper-node ./bin
cd bin
tar -czvf ../bin.tar.gz .

Where casper-node is the compiled casper-node binary targeting Ubuntu 18.04.

Download Details:
Author: casper-network
Source Code: https://github.com/casper-network/casper-protocol-release
License: Apache-2.0 license

#casper #blockchain #smartcontract #shell

Oral Brekke

Vanpool-manager: Manage Registered Riders for Your Vanpool!

Vanpool Manager

Vanpool Manager is a web application for managing registered vanpool riders for each day and direction. The frontend is a Vue.js app in the web folder. The backend is a Go API. Data is persisted in a PostgreSQL database. Session state is maintained in a cookie but will be moved to Redis soon. Authentication is handled by Azure AD.

Scripts for build and deploy are in the scripts folder. See .env.tpl to see and set needed environment variables. In Azure App Service you'll need to set these as App Settings.

The scripts/deploy.sh script creates ACR, Postgres, and Web App resources in Azure and pushes the built container from this repo to the Web App.

See github.com/joshgav/azure-dapp for guidance on how to deploy this to Kubernetes or AKS.

Author: joshgav
Source Code: https://github.com/joshgav/vanpool-manager 
License: View license

#node #go #shell 

Oral Brekke

Podtato-head: Demo App for TAG App Delivery

Project pod tato Head

Podtato-head is a prototypical cloud-native application built to colorfully demonstrate delivery scenarios using many different tools and services. It is intended to help application delivery support teams test and decide which of these to use.

Podtato Man

The app comprises a set of microservices in podtato-head-microservices and a set of examples demonstrating how to deliver them in delivery. The services are defined with as little additional logic as possible to enable you to focus on the delivery mechanisms themselves.

Use it

Find the following set of delivery scenarios in the delivery directory. Each example scenario delivers the same end result: an API service which communicates with other API services and returns HTML composed of all their responses.

Each delivery scenario includes a walkthrough (README.md) describing how to a) install required supporting infrastructure; b) deliver podtato-head using the infrastructure; and c) test that podtato-head is operating as expected.

Each delivery scenario also includes a test (test.sh) which automates the steps described in the walkthrough.

Delivery scenarios

"Single" deployment means the action effects the state of the resources only once at the time of invocation. "GitOps" deployments mean the action checks the desired state periodically and reconciles it as needed.

The following scenarios have not yet been updated for the multi-service app:

Extend it

Here's how to extend podtato-head for your own purposes or to contribute to the shared repo.

Services

podtato-head's services themselves are written in Go; entry points are in podtato-head-microservices/cmd. The entry point to the app is defined in cmd/entry and a base for each of the app's downstream services is defined in cmd/parts.

HTTP handlers and other shared functionality is defined in podtato-head-microservices/pkg.

To run local tests on the Go code, run make podtato-head-verify.

Build

Build an image for each part - entry, hat, each arm and each leg - with make build-images.

NOTE: To apply capabilities like image scans and signatures install required binaries first by running [sudo] make install-requirements.

Publish

To test the built images you'll need to push them to a registry so that Kubernetes can find them. make push-microservices-images can do this for GitHub's container registry if you are authorized to push to the target repo (as described next).

To push to your own fork of the podtato-head repo:

Fork podtato-head if you haven't already

Create a personal access token (PAT) with write:packages permissions and copy it

Set and export env vars GITHUB_USER to your GitHub username and GITHUB_TOKEN to the PAT, for example as follows:

export GITHUB_USER=joshgav
export GITHUB_TOKEN=goobledygook

NOTE: You can also put env vars in the .env file in the repo's root; be sure not to include those updates in commits.

Test

To test the built images as running services in a cluster, run make test-services. This spins up a cluster using kind and deploys the services using the kubectl delivery scenario test.

These tests also rely on your GITHUB_USER and GITHUB_TOKEN env vars if you're using your own fork.

NOTE: The test-services task isn't bound to the push-images task so that they may be run separately. Make sure you run make push-images first.

More info

All delivery scenarios are expected to run on any functional Kubernetes cluster with cluster-admin access. That is, if you can run kubectl get pods -n kube-system you should be able to run any of the tests.

If you don't have a local Kubernetes cluster for tests, kind is one to consider.

NOTE: If you use a cluster without support for LoadBalancer-type services, which is typical for test clusters like kind, you may need to replace attributes which default to LoadBalancer with NodePort or ClusterIP.

For example:

# update type property in `service` resources
find delivery -type f -name "*.yaml" -print0 | xargs -0 sed -i 's/type: LoadBalancer/type: NodePort/g'
# update custom serviceType property in Helm values file
find delivery -type f -name "*.yaml" -print0 | xargs -0 sed -i 's/serviceType: LoadBalancer/serviceType: NodePort/g'

Contributing

See CONTRIBUTING.md.

Author: Podtato-head
Source Code: https://github.com/podtato-head/podtato-head 
License: Apache-2.0 license

#node #nodejs #shell 

Best of Crypto

The Chainlink Node and Elrond Adapter

INTRODUCTION

The scripts will install and run a Chainlink Node, complete with a PostgreSQL DB, and the Elrond Chainlink Connector. The Chainlink Node and Elrond Adapter will run as system services that can be turned on/off independently.

REQUIREMENTS

  • Running Ubuntu 20.04 & up
  • Running the script requires a user (not root) with sudo privileges (without password).

SCRIPT SETTINGS - MUST BE MODIFIED BEFORE FIRST RUN

  • config/variables.cfg - used to define username, home path, database options and chainlink node options.

KEY MANAGEMENT

Before installing/running, be sure to add a valid owner.pem to the script's PEM folder (a short sketch follows the list below).

  • create a folder named PEM inside the scripts folder
  • add the owner.pem file inside the previously created folder
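
A minimal sketch of those two steps, run from the scripts folder (the source path for owner.pem is an assumption):

mkdir -p PEM
cp /path/to/owner.pem PEM/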

RUNNING THE SCRIPT

[FIRST RUN]

./script.sh install - installs everything needed to run the elrond-adapter;

[START]

./script.sh start_chainlink - starts the Chainlink Node;
./script.sh start_adapter - starts the Elrond Adapter;

[STOP]

./script.sh stop_chainlink - stops the Chainlink Node;
./script.sh stop_adapter - stops the Elrond Adapter;

[CLEANUP]

./script.sh clean - removes installed packages;

API Examples (elrond-adapter)

HTTP POST /write endpoint

Sends a transaction and writes the request data to the Elrond network

Input:

{
  "id": "bbfd3e3a8aed4d46abb0a89764951bf9",
  "data": {
    "value": "15051",
    "data": {},
    "sc_address": "erd1...",
    "function": "submit_endpoint",
    "round_id": "145"
  }
}

Output:

{
  "jobRunID": "bbfd3e3a8aed4d46abb0a89764951bf9",
  "data": {
    "result": "19feccf4b8590bcc9554ad632ff23f8344d0318fbac643bdba5fa7a605373bf4"
  },
  "result": "19feccf4b8590bcc9554ad632ff23f8344d0318fbac643bdba5fa7a605373bf4",
  "statusCode": 200
}
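
As an illustration, a request like the one above could be sent with curl; the adapter's host and port here are assumptions, so use whatever your config.toml exposes:

curl -X POST http://localhost:8080/write \
  -H 'Content-Type: application/json' \
  -d '{"id": "bbfd3e3a8aed4d46abb0a89764951bf9", "data": {"value": "15051", "data": {}, "sc_address": "erd1...", "function": "submit_endpoint", "round_id": "145"}}'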

HTTP POST /price-job endpoint

Starts a price feed job which aggregates feeds from multiple sources and pushes the data to the aggregator smart contract

The data body can be left empty; it reads input values from config.toml

Input:

{
  "id": "bbfd3e3a8aed4d46abb0a89764951bf9",
  "data": {}
}

Output:

{
  "jobRunID": "bbfd3e3a8aed4d46abb0a89764951bf9",
  "data": {
    "result": {
      "txHashes": [
        "25d1731151692cd75aa605dcad376c6acf0cd22d6fe0a1ea50a8e2cd25c16f27",
        "f95060ff47bc676f63a72cc5a51ead7ebbb1a21131d60e2273d5148a2fea3d95",
        "3a3092ba6bf49ad54afbdb2b08efa91b6b024e25753797dee675091c9b8f1891",
        "102ff3ef391cb4c53de2b9c672a98a4dca0c93da53be7255c827c60c8da029d3",
        "9c0c4c1ab8372efc21c4bbcadfc79162564e9895c91f73d942cb96be53ddd27e"
      ]
    }
  },
  "result": {
    "txHashes": [
      "25d1731151692cd75aa605dcad376c6acf0cd22d6fe0a1ea50a8e2cd25c16f27",
      "f95060ff47bc676f63a72cc5a51ead7ebbb1a21131d60e2273d5148a2fea3d95",
      "3a3092ba6bf49ad54afbdb2b08efa91b6b024e25753797dee675091c9b8f1891",
      "102ff3ef391cb4c53de2b9c672a98a4dca0c93da53be7255c827c60c8da029d3",
      "9c0c4c1ab8372efc21c4bbcadfc79162564e9895c91f73d942cb96be53ddd27e"
    ]
  },
  "statusCode": 200
}

HTTP POST /ethgas/denominate endpoint

Fetches the latest ETH gas prices in gwei and denominates the value in a specified asset, e.g. GWEI/EGLD

The data body can be left empty; it reads input values from config.toml

Input:

{
  "id": "bbfd3e3a8aed4d46abb0a89764951bf9",
  "data": {}
}

Output:

{
  "jobRunID": "bbfd3e3a8aed4d46abb0a89764951bf9",
  "data": {
    "result": "19feccf4b8590bcc9554ad632ff23f8344d0318fbac643bdba5fa7a605373bf4"
  },
  "result": "19feccf4b8590bcc9554ad632ff23f8344d0318fbac643bdba5fa7a605373bf4",
  "statusCode": 200
}

Download Details:
Author: ElrondNetwork
Source Code: https://github.com/ElrondNetwork/elrond-chainlink-scripts
License:

#elrond #chainlink #shell  #blockchain  #smartcontract 

Sheldon Grant

Mongosh: The MongoDB Shell

mongosh

The MongoDB Shell

This repository is a monorepo for all the various components in the MongoDB Shell across all environments (REPL, Browser, Compass, etc).

MongoDB Shell works with MongoDB servers >= 4.0.

MongoDB Shell Example

Installation

You can get the release tarball from our Downloads Page. We currently maintain MongoDB Shell on three different platforms - Windows (zip), macOS (zip) and Linux (tgz, deb and rpm). Once downloaded, you will have to extract the binary and add it to your PATH variable. For detailed instructions for each of our supported platforms, please visit the installation documentation.
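
On Linux, for example, the manual steps look roughly like this (the file name is a placeholder for whatever version you downloaded):

# extract the downloaded tarball (placeholder file name)
tar -xvzf mongosh-<version>-linux-x64.tgz
# add the extracted bin directory to your PATH for the current session
export PATH="$PWD/mongosh-<version>-linux-x64/bin:$PATH"
mongosh --version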

CLI Usage

  $ mongosh [options] [db address] [file names (ending in .js or .mongodb)]

  Options:

    -h, --help                                 Show this usage information
    -f, --file [arg]                           Load the specified mongosh script
        --host [arg]                           Server to connect to
        --port [arg]                           Port to connect to
        --version                              Show version information
        --verbose                              Increase the verbosity of the output of the shell
        --quiet                                Silence output from the shell during the connection process
        --shell                                Run the shell after executing files
        --nodb                                 Don't connect to mongod on startup - no 'db address' [arg] expected
        --norc                                 Will not run the '.mongoshrc.js' file on start up
        --eval [arg]                           Evaluate javascript
        --retryWrites=[true|false]             Automatically retry write operations upon transient network errors (Default: true)

  Authentication Options:

    -u, --username [arg]                       Username for authentication
    -p, --password [arg]                       Password for authentication
        --authenticationDatabase [arg]         User source (defaults to dbname)
        --authenticationMechanism [arg]        Authentication mechanism
        --awsIamSessionToken [arg]             AWS IAM Temporary Session Token ID

  TLS Options:

        --tls                                  Use TLS for all connections
        --tlsCertificateKeyFile [arg]          PEM certificate/key file for TLS
        --tlsCertificateKeyFilePassword [arg]  Password for key in PEM file for TLS
        --tlsCAFile [arg]                      Certificate Authority file for TLS
        --tlsAllowInvalidHostnames             Allow connections to servers with non-matching hostnames
        --tlsAllowInvalidCertificates          Allow connections to servers with invalid certificates
        --tlsCertificateSelector [arg]         TLS Certificate in system store (Windows and macOS only)
        --tlsCRLFile [arg]                     Specifies the .pem file that contains the Certificate Revocation List
        --tlsDisabledProtocols [arg]           Comma separated list of TLS protocols to disable [TLS1_0,TLS1_1,TLS1_2]
        --tlsUseSystemCA                       Load the operating system trusted certificate list

  API version options:

        --apiVersion [arg]                     Specifies the API version to connect with
        --apiStrict                            Use strict API version mode
        --apiDeprecationErrors                 Fail deprecated commands for the specified API version

  FLE Options:

        --awsAccessKeyId [arg]                 AWS Access Key for FLE Amazon KMS
        --awsSecretAccessKey [arg]             AWS Secret Key for FLE Amazon KMS
        --awsSessionToken [arg]                Optional AWS Session Token ID
        --keyVaultNamespace [arg]              database.collection to store encrypted FLE parameters
        --kmsURL [arg]                         Test parameter to override the URL of the KMS endpoint

  DB Address Examples:

        foo                                    Foo database on local machine
        192.168.0.5/foo                        Foo database on 192.168.0.5 machine
        192.168.0.5:9999/foo                   Foo database on 192.168.0.5 machine on port 9999
        mongodb://192.168.0.5:9999/foo         Connection string URI can also be used

  File Names:

        A list of files to run. Files must end in .js and will exit after unless --shell is specified.

  Examples:

        Start mongosh using 'ships' database on specified connection string:
        $ mongosh mongodb://192.168.0.5:9999/ships

  For more information on usage: https://docs.mongodb.com/mongodb-shell.

Local Development

Requirements

  • Node.js v14.x
  • Python 3.x
    • The test suite uses mlaunch for managing running mongod, you can install that manually as well via pip3 install mtools[mlaunch] if the automatic installation causes any trouble.

Install

npm install -g lerna
npm install -g typescript
npm run bootstrap

Running Tests

Run all tests (this may take some time):

npm test

Run tests from a specific package:

lerna run test --scope @mongosh/cli-repl

Run tests with all output from packages:

lerna run test --stream

To test against a specific version, the MONGOSH_SERVER_TEST_VERSION environment variable can be set to a semver string specifying a server version.
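
For instance (the version string here is only an example):

MONGOSH_SERVER_TEST_VERSION='4.4.x' npm test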

Starting the CLI

Via npm:

npm run start

Alternatively you can also run start inside the cli-repl package, if you're sure everything else is compiled:

cd packages/cli-repl && npm run start

Compiling

Compile all Typescript:

npm run compile-ts

Compile just the CLI:

npm run compile-cli

Compile the standalone executable (this may take some time):

npm run compile-exec

Compile a specific package, e.g. the .deb for Debian:

npm run compile-exec
npm run evergreen-release package -- --build-variant=debian-x64

Releasing

Refer to the build package documentation.

Contributing

For issues, please create a ticket in our JIRA Project.

For contributing, please refer to CONTRIBUTING.md.

Is there anything else you’d like to see in MongoDB Shell? Let us know by submitting suggestions in our feedback forum.

Evergreen Waterfall CI

For our official documentation, please visit MongoDB Docs page.

Author: Mongodb-js
Source Code: https://github.com/mongodb-js/mongosh 
License: Apache-2.0 license

#node #mongodb #react #shell 

Waylon Bruen

Gobrew: Shell Script to Download & Set GO Environmental Paths

gobrew

gobrew lets you easily switch between multiple versions of go. It is based on rbenv and pyenv.

Installation


The automatic installer


You can install this via the command line with either curl or wget.

via curl

curl -L https://raw.github.com/grobins2/gobrew/master/tools/install.sh | sh

via wget

wget --no-check-certificate https://raw.github.com/grobins2/gobrew/master/tools/install.sh -O - | sh

The manual way


Check out gobrew where you want it installed.

 $ git clone git://github.com/cryptojuice/gobrew.git ~/.gobrew

Add the following to your shell config.

Note:

BASH: Add this to ~/.bashrc (~/.bash_profile for Ubuntu users).

ZSH: Add this to ~/.zshenv

 export PATH="$HOME/.gobrew/bin:$PATH"
 eval "$(gobrew init -)"

Source your shell config file (or reopen shell session).

Commands


: gobrew install

Install a specified version of Go.

    $ gobrew install 1.5

: gobrew uninstall

Uninstall a specified version of Go.

    $ gobrew uninstall 1.5

: gobrew use

Sets which version of Go to use globally.

    $ gobrew use 1.5

: gobrew workspace

Note: 'gobrew workspace' echoes the currently set workspace ($GOPATH). Use 'gobrew workspace set' to set your $GOPATH to the current working directory. Use 'gobrew workspace unset' to remove this setting.

    $ cd /path/to/workspace
    $ gobrew workspace set
    $ gobrew workspace unset

Visit http://golang.org/doc/code.html#Workspaces for more on workspaces.

Useful

Updates

To upgrade, run the update script from the .gobrew source:

    $ cd ~
    $ ./.gobrew/tools/upgrade.sh

Uninstalling

If you want to uninstall it, just run

    $ cd ~
    $ ./.gobrew/tools/uninstall.sh

from the command line and it’ll remove itself.

Author: Cryptojuice
Source Code: https://github.com/cryptojuice/gobrew 
License: MIT license

#go #golang #shell 

Gobrew: Shell Script to Download & Set GO Environmental Paths

Test Shell Scripts While Mocking Specific Commands

jest-shell-matchers

Test shell scripts while mocking specific commands

Run shell scripts and make assertions about the exit code, stdout, stderr, and termination signal that are generated. It uses the spawn-with-mocks library, so mocks can be written for specific shell commands. 

Usage

The library exposes asynchronous matchers, so it requires Jest 23 or higher (to run synchronous tests, use spawn-with-mocks directly). Mocks are created by writing temporary files to disk, so they do not work if fs.writeFileSync is being mocked.

Initialization

const shellMatchers = require('jest-shell-matchers')

beforeAll(() => {
  // calling this will add the matchers
  // by calling expect.extend
  shellMatchers()
})

Example Without Mocks

it('should test the output from a spawned process', async () => {
  // this input will be executed by child_process.spawn
  const input = ['sh', ['./hello-world.sh']]
  const expectedOutput = {
    code: 0,
    signal: '',
    stdout: 'Hello World\n',
    stderr: '',
  }
  // the matcher is asynchronous, so it *must* be awaited
  await expect(input).toHaveMatchingSpawnOutput(expectedOutput)
})
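
The hello-world.sh script referenced above is not included here; a minimal version that would produce the expected output could look like this:

#!/bin/sh
# hypothetical hello-world.sh assumed by the example above
echo "Hello World"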

Example With Mocks

Mocks are created by spawn-with-mocks, which documents the mocking API. In this example, we mock the date and mkdir commands:

const fs = require('fs')

it('should mock the date and mkdir commands', async () => {
  fs.writeFileSync(
    './mkdir.sh',
// this example script creates a directory
// that is named for the current date
`
#!/bin/sh
DIR_NAME=$(date +'%m-%d-%Y')
mkdir $DIR_NAME
`)

  // Mocking the output
  // for the date command
  const date = () => {
    return {
      code: 0,
      stdout: '01-06-2019',
      stderr: ''
    }
  }

  // Testing the input to mkdir,
  // and mocking the output
  const mkdir = jest.fn(dir => {
    expect(dir).toBe('01-06-2019')
    return {
      code: 0,
      stdout: '',
      stderr: ''
    }
  })

  const mocks = { date, mkdir }
  const input = ['sh', ['./mkdir.sh'], { mocks }]
  await expect(input).toHaveMatchingSpawnOutput(0)
  expect(mocks.mkdir).toHaveBeenCalledTimes(1)
  fs.unlinkSync('./mkdir.sh')
})

Mocks can also return a Number or String to shorten the code:

// The string is shorthand for stdout;
// stderr will be '' and the exit code will be 0
const date = () => '01-06-2019'

// The number is shorthand for the exit code
// stdout and stderr will be ''
const mkdir = dir => 0

API

expect([command[, args][, options]])

  • To use the matchers, call expect with the input for spawn-with-mocks#spawn, which the matchers run internally. It can execute a script, create mocks, set environment variables, etc. When passing args or options, the input must be wrapped with an array:
await expect('ls')
  .toHaveMatchingSpawnOutput(/*...*/)

await expect(['sh', ['./test.sh'], { mocks }])
  .toHaveMatchingSpawnOutput(/*...*/)

.toHaveMatchingSpawnOutput (expected)

  • The expected value can be a Number, String, RegExp, or Object.
const input = ['sh', ['./test.sh']]

await expect(input)
  // Number: test the exit code
  .toHaveMatchingSpawnOutput(0)

await expect(input)
  // String: test the stdout for an exact match
  .toHaveMatchingSpawnOutput('Hello World')

await expect(input)
  // RegExp: test the stdout
  .toHaveMatchingSpawnOutput(/^Hello/)

await expect(input)
  // Object: the values can be Numbers, Strings, or RegExps
  .toHaveMatchingSpawnOutput({
    // The exit code
    code: 0,
    // The signal that terminated the process,
    // for example, 'SIGTERM' or 'SIGKILL'
    signal: '',
    // The stdout from the process
    stdout: /^Hello/,
    // The stderr from the process
    stderr: ''
  })

Author: Raingerber
Source Code: https://github.com/raingerber/jest-shell-matchers 
License: MIT license

#javascript #jest #shell #testing 

Test Shell Scripts While Mocking Specific Commands

Nostromo: CLI for Building Powerful Aliases

nostromo is a CLI to rapidly build declarative aliases making multi-dimensional tools on the fly.

intro

Managing aliases can be tedious and difficult to set up. nostromo makes this process easy and reliable. The tool adds shortcuts to your .bashrc / .zshrc that call into the nostromo binary, which reads and manages all aliases within its manifest. The manifest is used to find and execute the actual command, as well as swap in any substitutions, to simplify calls.

nostromo can help you build complex tools in a declarative way. Tools commonly allow you to run multi-level commands like git rebase master branch or docker rmi b750fe78269d which are clear to use. Imagine if you could wrap your aliases / commands / workflow into custom commands that describe things you do often. Well, now you can with nostromo. 🤓

With nostromo you can take aliases like these:

alias ios-build='pushd $IOS_REPO_PATH;xcodebuild -workspace Foo.xcworkspace -scheme foo_scheme'
alias ios-test='pushd $IOS_REPO_PATH;xcodebuild -workspace Foo.xcworkspace -scheme foo_test_scheme'
alias android-build='pushd $ANDROID_REPO_PATH;./gradlew build'
alias android-test='pushd $ANDROID_REPO_PATH;./gradlew test'

and turn them into declarative commands like this:

build ios
build android
test ios
test android

The possibilities are endless 🚀 and up to your imagination with the ability to compose commands as you see fit.
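
As a rough sketch (using the nostromo add cmd syntax and keypaths covered below), those four aliases could be declared as commands like this:

nostromo add cmd build.ios 'pushd $IOS_REPO_PATH;xcodebuild -workspace Foo.xcworkspace -scheme foo_scheme'
nostromo add cmd build.android 'pushd $ANDROID_REPO_PATH;./gradlew build'
nostromo add cmd test.ios 'pushd $IOS_REPO_PATH;xcodebuild -workspace Foo.xcworkspace -scheme foo_test_scheme'
nostromo add cmd test.android 'pushd $ANDROID_REPO_PATH;./gradlew test'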

Check out the examples folder for sample manifests with commands

Getting Started

Prerequisites

  • Works for MacOS and bash / zsh shells (other combinations untested but may work)

Installation

Using brew:

brew tap pokanop/pokanop
brew install nostromo

Using go get:

go get -u github.com/pokanop/nostromo

Initialization

This command will initialize nostromo and create a manifest under ~/.nostromo:

nostromo init

To customize the directory (and change it from ~/.nostromo), set the NOSTROMO_HOME environment variable to a location of your choosing.

With every update, it's a good idea to run nostromo init to ensure any manifest changes are migrated and commands continue to work. nostromo will also attempt to migrate files and folders at this time, so 🤞

The quickest way to populate your commands database is using the dock feature:

nostromo dock <source>

where source can be any local or remote file sources. See the Distributed Manifests section for more details.
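
For example (both the path and the URL here are hypothetical):

nostromo dock ~/manifests/team-tools.yaml
nostromo dock https://example.com/shared/manifest.yaml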

To destroy the core manifest and start over you can always run:

nostromo destroy

Backups of manifests are automatically taken to prevent data loss in case of shenanigans gone wrong. These are located under ${NOSTROMO_HOME}/cargo. The maximum number of backups can be configured with the backupCount manifest setting.

nostromo set backupCount 10

Key Features

Managing Aliases

Aliasing commands is one of the core features provided by nostromo. Instead of you constantly updating your shell profile manually, nostromo automatically keeps it updated with the latest additions.

Given that nostromo is not a shell command, there are some things to note on how it works its magic:

  • Commands are generated by nostromo and executed using the eval method in a shell function.
  • Commands and changes will be available immediately since nostromo reloads completions automatically

If you want to create boring standard shell aliases, you can do that with an additional flag or a config setting, both described below.

To add an alias (or command in nostromo parlance), simply run:

nostromo add cmd foo "echo bar"

And just like that you can now run foo like any other alias.

Descriptions for your commands can easily be added as well:

nostromo add cmd foo "echo bar" -d "My magical foo command that prints bar"

Your descriptions will show up in the shell when autocompleting!

Interactive Mode

You can also add commands and substitutions interactively by using just nostromo add without any arguments. This command will walk through prompts to guide adding new commands easily.


Keypaths

nostromo uses the concept of keypaths to simplify building commands and accessing the command tree. A keypath is simply a . delimited string that represents the path to the command.

For example:

nostromo add cmd foo.bar.baz 'echo hello'

will build the command tree for foo 👉 bar 👉 baz such that any of these commands are now valid (of course the first two do nothing yet 😉):

foo
foo bar
foo bar baz

where the last one will execute the echo command.

You can compose several commands together by adding commands at any node of the keypath. The default behavior is to concatenate the commands together as you walk the tree. Targeted use of ; or && can allow for running multiple commands together instead of concatenating. More easily, you can change the command mode for any of the commands to do this for you automatically. More info on this later.

Shell Aliases

nostromo allows users to manage shell aliases. By default, all commands are designed to execute the binary and resolve a command to be evaluated in the shell. This allows you to run those declarative commands easily like foo bar baz in the shell. It only creates an alias as a shell function for the root command foo and passes the remaining arguments to nostromo eval to evaluate the command tree. The result of that is executed with eval in the shell. Standard shell aliases do not get this behavior.

The use of standard shell aliases provides limited benefit if you only want single tiered aliases. Additionally, commands persist in the shell since they are evaluated (i.e., changing directories via cd).

There are two methods for adding aliases to your shell profile that are considered standard aliases:

  • Use the --alias-only or -a flag when using nostromo add cmd
  • Set the aliasesOnly config setting to affect all command additions

For example, you can see both methods here:

nostromo add cmd foo.bar.baz "cd /tmp" --alias-only

nostromo set aliasesOnly true
nostromo add cmd foo.bar.baz "cd /tmp"

Adding a standard alias will produce this line that gets sourced:

alias foo.bar.baz='cd /tmp'

instead of a nostromo command which adds a shell function:

foo() { eval $(nostromo eval foo "$*") }

Notice how the keypath has no effect in building a command tree when using the alias-only feature. Standard shell aliases can only be root level commands.

Scoped Commands And Substitutions

Scope affects a tree of commands such that a parent scope is prepended first and then each command in the keypath to the root. If a command is run as follows:

foo bar baz

then the command associated with foo is concatenated first, then bar, and finally baz. So if these commands were configured like this:

nostromo add cmd foo 'echo oof'
nostromo add cmd foo.bar 'rab'
nostromo add cmd foo.bar.baz 'zab'

then the actual execution would result in:

echo oof rab zab

Standard behavior is to concatenate but you can easily change this with the mode flag when using add or globally. More information under Execution Modes.

Substitutions

nostromo also provides the ability to add substitutions at each one of these scopes in the command tree. So if you want to shorten common strings that are otherwise long into substitutions, you can attach them to a parent scope and nostromo will replace them at execution time for all instances.

A substitution can be added with:

nostromo add sub foo.bar //some/long/string sls

Subsequent calls to foo bar would replace the subs before running. This command:

foo bar baz sls

would finally result in the following since the substitution is in scope:

oof rab zab //some/long/string

Complex Command Tree

Given features like keypaths and scope you can build a complex set of commands and effectively your own tool 🤯 that performs additive functionality with each command node.

You can get a quick snapshot of the command tree using:

nostromo show

With nostromo, you can also visualize the command tree (or manifest) in several other ways including as json, yaml and a tree itself.


Setting the verbose config setting prints more detailed information as well for all commands.


Execution Modes

A command's mode indicates how it will be executed. By default, nostromo concatenates parent and child commands along the tree. There are 3 modes available to commands:

  • concatenate - Concatenate this command with subcommands exactly as defined
  • independent - Execute this command with subcommands using ';' to separate
  • exclusive - Execute this and only this command, ignoring parent commands

The mode can be set when adding a command with the -m or --mode flag:

nostromo add cmd foo.bar.baz -m exclusive "echo baz"

A global setting can also be set to change the mode from the default concatenate with:

nostromo set mode independent

All subsequent commands would inherit the above mode if set.
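
As a rough sketch, reusing the scoped-command example from earlier and assuming the mode is set before the commands are added:

nostromo set mode independent
nostromo add cmd foo 'echo oof'
nostromo add cmd foo.bar 'rab'
nostromo add cmd foo.bar.baz 'zab'

foo bar baz   # now runs roughly as: echo oof; rab; zab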

Shell Completion

nostromo provides completion scripts to allow tab completion. This is added by default to your shell init file:

eval "$(nostromo completion)"

Even your commands added by nostromo get the full red carpet treatment with shell completion. Be sure to add a description and tab completion will show hints at each junction of your command. Cool right! 😎

Execute Code Snippets

nostromo provides the ability to supply code snippets in the following languages for execution, in lieu of the standard shell command:

  • ruby - runs ruby interpreter
  • python - runs python interpreter
  • js - runs node
  • perl - runs perl interpreter

For example, to attach a JavaScript snippet to a command:

nostromo add cmd foo --code 'console.log("hello js")' --language js

For more complex snippets you can edit ~/.nostromo/ships/manifest.yaml directly but multiline YAML must be escaped correctly to work.

Distributed Manifests

nostromo now supports keeping multiple manifest sources 💪 allowing you to organize and distribute your commands as you please. This feature enables synchronization functionality to get remote manifests from multiple data sources including:

  • Local Files
  • Git
  • Mercurial
  • HTTP
  • Amazon S3
  • Google GCS

Details on supported file formats and requirements can be found in the go-getter documentation, as nostromo uses that for downloading files.

Configs can be found in the ~/.nostromo/ships folder. The core manifest is named manifest.yaml.

You can add as many additional manifests in the same folder and nostromo will parse and aggregate all the commands, useful for organizations wanting to build their own command suite.

To add or dock manifests, use the following:

nostromo dock <source>...

And that's it! Your commands will now incorporate the new manifest.

To update docked manifests to the latest versions (omit sources to update all manifests), just run:

nostromo sync <name>...

nostromo syncs manifests using version information in the manifest. It will only update if the version identifier is different. To force update a manifest, run:

nostromo sync -f <name>...

If you're tired of someone else's manifest or it just isn't making you happy ☹️ then just undock it with:

nostromo undock <name>

Command Tree Management

Moving and copying command subtrees can be done easily using nostromo as well to avoid manual copy pasta with yaml. If you want to move command nodes around just use:

nostromo move cmd <source> <destination>

where the source and destinations are expected to be key paths like foo.bar.

You can rename a node with:

nostromo rename cmd <source> <name>

Next up, you might want to copy entire nodes around, which can also be done between manifests using copy. Again use key paths for source and destination and nostromo will attempt to replicate the branch to the new location.

nostromo copy cmd <source> <destination>

So you've created an awesome suite of commands and you like to share, am I right? Well nostromo makes it super easy to create manifests with any set of your commands from the tree using the detach command. It lets you slice and dice your manifests by extracting out a command node into a new manifest.

nostromo detach <name> <key.path>...

By default, this removes the command nodes from the manifest but can be kept intact as well with the -k option. Additionally, detaching any command nodes from a docked manifest may have unwanted side effects when running nostromo sync again since the commands will likely be added back from the original source.

Since nostromo updates manifests if the identifier is unique, there might be times you want to update the yaml files manually for whatever reason. In this case you can run the handy uuidgen command to update the identifier so you can push the manifest to others:

nostromo uuidgen <name>

Themes

nostromo now supports themes to make it look even more neat. There are currently 3 themes, which can be set with:

nostromo set theme <name>

where valid themes include:

  • default: The basic theme and previous default
  • grayscale: Gray colored things are sometimes nice
  • emoji: The new default obviously

Enjoy!

🐳📑🍥🌞🍓🕖🕐💘🎵🌑🐻🐜📙💥👡🍈👝🎭🐄🌓🎏👔📁🍝🔼🕔💩🌒📥

Credits

Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Author: Pokanop
Source Code: https://github.com/pokanop/nostromo 
License: MIT license

#go #golang #shell #bash 

Nostromo: CLI for Building Powerful Aliases

A Simple Package to Execute Shell Commands on Linux, Windows & Osx

cmd package

A simple package to execute shell commands on linux, darwin and windows.

Installation

$ go get -u github.com/commander-cli/cmd@v1.0.0

Usage

c := cmd.NewCommand("echo hello")

err := c.Execute()
if err != nil {
    panic(err.Error())    
}

fmt.Println(c.Stdout())
fmt.Println(c.Stderr())

Configure the command

To configure the command, an option function can be passed, which receives the command object as an argument (by reference).

Default option functions:

  • cmd.WithStandardStreams
  • cmd.WithCustomStdout(...io.Writers)
  • cmd.WithCustomStderr(...io.Writers)
  • cmd.WithTimeout(time.Duration)
  • cmd.WithoutTimeout
  • cmd.WithWorkingDir(string)
  • cmd.WithEnvironmentVariables(cmd.EnvVars)
  • cmd.WithInheritedEnvironment(cmd.EnvVars)

Example

c := cmd.NewCommand("echo hello", cmd.WithStandardStreams)
c.Execute()

Set custom options

setWorkingDir := func (c *Command) {
    c.WorkingDir = "/tmp/test"
}

c := cmd.NewCommand("pwd", setWorkingDir)
c.Execute()

Testing

You can capture output written to stdout and stderr with cmd.CaptureStandardOut.

// captured is the captured output from all executed source code
// fnResult contains the result of the executed function
captured, fnResult := cmd.CaptureStandardOut(func() interface{} {
    c := cmd.NewCommand("echo hello", cmd.WithStandardStreams)
    err := c.Execute()
    return err
})

// prints "hello"
fmt.Println(captured)

Development

Running tests

make test

ToDo

  • os.Stdout and os.Stderr output access after execution via c.Stdout() and c.Stderr()

Author: Commander-cli
Source Code: https://github.com/commander-cli/cmd 
License: MIT license

#go #golang #cli #windows #linux #shell 

A Simple Package to Execute Shell Commands on Linux, Windows & Osx
Archie  Clayton

Archie Clayton

1652773414

direnv: Shell Extension That Dynamically Loads .Env Per Directory

direnv -- unclutter your .profile

direnv is an extension for your shell. It augments existing shells with a new feature that can load and unload environment variables depending on the current directory.

Use cases

  • Load 12factor apps environment variables
  • Create per-project isolated development environments
  • Load secrets for deployment

How it works

Before each prompt, direnv checks for the existence of a .envrc file (and optionally a .env file) in the current and parent directories. If the file exists (and is authorized), it is loaded into a bash sub-shell and all exported variables are then captured by direnv and then made available to the current shell.

It supports hooks for all the common shells like bash, zsh, tcsh and fish. This allows project-specific environment variables without cluttering the ~/.profile file.

Because direnv is compiled into a single static executable, it is fast enough to be unnoticeable on each prompt. It is also language-agnostic and can be used to build solutions similar to rbenv, pyenv and phpenv.

Getting Started

Prerequisites

  • Unix-like operating system (macOS, Linux, ...)
  • A supported shell (bash, zsh, tcsh, fish, elvish)

Basic Installation

  1. direnv is packaged in most distributions already. See the installation documentation for details.
  2. hook direnv into your shell (a typical bash hook is sketched below).
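
For example, hooking into bash usually means adding this line to the end of your ~/.bashrc (other shells have equivalent hook commands, e.g. direnv hook zsh):

eval "$(direnv hook bash)"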

Now restart your shell.

Quick demo

To follow along in your shell once direnv is installed.

# Create a new folder for demo purposes.
$ mkdir ~/my-project
$ cd ~/my-project

# Show that the FOO environment variable is not loaded.
$ echo ${FOO-nope}
nope

# Create a new .envrc. This file is bash code that is going to be loaded by
# direnv.
$ echo export FOO=foo > .envrc
.envrc is not allowed

# The security mechanism didn't allow the .envrc to be loaded. Since we trust it,
# let's allow its execution.
$ direnv allow .
direnv: reloading
direnv: loading .envrc
direnv export: +FOO

# Show that the FOO environment variable is loaded.
$ echo ${FOO-nope}
foo

# Exit the project
$ cd ..
direnv: unloading

# And now FOO is unset again
$ echo ${FOO-nope}
nope

The stdlib

Exporting variables by hand is a bit repetitive so direnv provides a set of utility functions that are made available in the context of the .envrc file.

As an example, the PATH_add function is used to expand and prepend a path to the $PATH environment variable. Instead of export PATH=$PWD/bin:$PATH you can write PATH_add bin. It's shorter and avoids a common mistake where $PATH=bin.

To find the documentation for all available functions check the direnv-stdlib(1) man page.

It's also possible to create your own extensions by creating a bash file at ~/.config/direnv/direnvrc or ~/.config/direnv/lib/*.sh. This file is loaded before your .envrc and thus allows you to make your own extensions to direnv.

Note that this functionality is not supported in .env files. If the coexistence of both is needed, one can use .envrc for leveraging stdlib and append dotenv at the end of it to instruct direnv to also read the .env file next.
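
Putting those pieces together, a small .envrc might look like this (the exported variable is only an example):

# use the stdlib helper to prepend ./bin to PATH
PATH_add bin

# plain exports work as usual
export DATABASE_URL=postgres://localhost/myapp_dev

# also load the adjacent .env file, if you keep one
dotenv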

Docs

Make sure to take a look at the wiki! It contains all sorts of useful information such as common recipes, editor integration, tips-and-tricks.

Man pages

FAQ

Based on GitHub issues interactions, here are the top things that have been confusing for users:

direnv has a standard library of functions, a collection of utilities that I found useful to have and accumulated over the years. You can find it here: https://github.com/direnv/direnv/blob/master/stdlib.sh

It's possible to override the stdlib with your own set of functions by adding a bash file to ~/.config/direnv/direnvrc. This file is loaded and its content made available to any .envrc file.

direnv is not loading the .envrc into the current shell. It's creating a new bash sub-process to load the stdlib, direnvrc and .envrc, and only exports the environment diff back to the original shell. This allows direnv to record the environment changes accurately and also work with all sorts of shells. It also means that aliases and functions are not exportable right now.

Contributing

Bug reports, contributions and forks are welcome. All bug reports and other forms of discussion happen on http://github.com/direnv/direnv/issues.

Or drop by on Matrix to have a chat. If you ask a question make sure to stay around as not everyone is active all day.

Complementary projects

Here is a list of projects you might want to look into if you are using direnv.

Related projects

Here is a list of other projects found in the same design space. Feel free to submit new ones.

  • Environment Modules - one of the oldest (in a good way) environment-loading systems
  • autoenv - lightweight; doesn't support unloads
  • zsh-autoenv - a feature-rich mixture of autoenv and smartcd: enter/leave events, nesting, stashing (Zsh-only).
  • asdf - a pure bash solution that has a plugin system. The asdf-direnv plugin allows using asdf managed tools with direnv.
  • ondir - OnDir is a small program to automate tasks specific to certain directories
  • shadowenv - uses an s-expression format to define environment changes that should be executed

Download Details: 
Author: direnv
Source Code: https://github.com/direnv/direnv 
License: MIT
#python #shell #bash

direnv: Shell Extension That Dynamically Loads .Env Per Directory
Tyrique  Tromp

Tyrique Tromp

1652493060

How to Manage your Linux History! (Bash Shell using Ubuntu)

This is the ULTIMATE Guide to Managing your Linux History! Let's learn about the Bash Shell and Linux history, how to check your command history using Ubuntu or other Linux distros, and how to reuse Linux commands.

History files are saved per user, so each bash shell user has their own history file located in their home directory. It is a hidden file (.bash_history), which is why its name starts with a dot. Access the most recent commands with the history command, and review your entire history by looking at the stored history file.
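
For reference, the most common history operations covered in the video look roughly like this in bash:

history            # list recent commands with their line numbers
!42                # reissue command number 42 from the history
!!                 # reissue the last command
history -d 42      # delete entry 42 from the current session's history
history -c         # clear the current session's history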

00:00 Linux History
01:54 Reissue History
02:45 Search History
03:43 Delete History Lines
04:17 History Shortcut
05:12 History File
07:01 History Config
08:26 Customizing Config
09:42 Clearing History

#linux  #ubuntu #bash #shell 

How to Manage your Linux History! (Bash Shell using Ubuntu)
Elian  Harber

Elian Harber

1652332260

Sh: A Shell Parser, formatter, and interpreter with Bash Support

sh

A shell parser, formatter, and interpreter. Supports POSIX Shell, Bash, and mksh. Requires Go 1.17 or later.

Quick start

To parse shell scripts, inspect them, and print them out, see the syntax examples.

For high-level operations like performing shell expansions on strings, see the shell examples.

shfmt

go install mvdan.cc/sh/v3/cmd/shfmt@latest

shfmt formats shell programs. See canonical.sh for a quick look at its default style. For example:

shfmt -l -w script.sh
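
shfmt also reads from standard input when no path is given, which makes quick checks easy (output shown with a 2-space indent):

$ echo 'if true;then echo hi;fi' | shfmt -i 2
if true; then
  echo hi
fi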

For more information, see its manpage, which can be viewed directly as Markdown or rendered with scdoc.

Packages are available on Alpine, Arch, Debian, Docker, FreeBSD, Homebrew, MacPorts, NixOS, Scoop, Snapcraft, Void and webi.

gosh

go install mvdan.cc/sh/v3/cmd/gosh@latest

Proof of concept shell that uses interp. Note that it's not meant to replace a POSIX shell at the moment, and its options are intentionally minimalistic.

Fuzzing

We use Go's native fuzzing support, which requires Go 1.18 or later. For instance:

cd syntax
go test -run=- -fuzz=ParsePrint

Caveats

  • When indexing Bash associative arrays, always use quotes. The static parser will otherwise have to assume that the index is an arithmetic expression.
$ echo '${array[spaced string]}' | shfmt
1:16: not a valid arithmetic operator: string
$ echo '${array[dash-string]}' | shfmt
${array[dash - string]}
  • $(( and (( ambiguity is not supported. Backtracking would complicate the parser and make streaming support via io.Reader impossible. The POSIX spec recommends spacing the operands if $( ( is meant.
$ echo '$((foo); (bar))' | shfmt
1:1: reached ) without matching $(( with ))
  • Some builtins like export and let are parsed as keywords. This allows statically building their syntax tree, as opposed to keeping the arguments as a slice of words. It is also required to support declare foo=(bar). Note that this means expansions like declare {a,b}=c are not supported.

JavaScript

A subset of the Go packages are available as an npm package called mvdan-sh. See the _js directory for more information.

Docker

To build a Docker image, checkout a specific version of the repository and run:

docker build -t my:tag -f cmd/shfmt/Dockerfile .

This creates an image that only includes shfmt. Alternatively, if you want an image that includes alpine, add --target alpine. To use the Docker image, run:

docker run --rm -u "$(id -u):$(id -g)" -v "$PWD:/mnt" -w /mnt my:tag <shfmt arguments>

Related projects

The following editor integrations wrap shfmt:

Other noteworthy integrations include:

Author: Mvdan
Source Code: https://github.com/mvdan/sh 
License: BSD-3-Clause license

#go #golang #shell #bash 

Sh: A Shell Parser, formatter, and interpreter with Bash Support