Go-sysconf: Sysconf for Go, without using Cgo

Go-sysconf

sysconf for Go, without using cgo or external binaries (e.g. getconf).

Supported operating systems: Linux, macOS, DragonflyBSD, FreeBSD, NetBSD, OpenBSD, Solaris/Illumos.

All POSIX.1 and POSIX.2 variables are supported, see References for a complete list.

Additionally, the following non-standard variables are supported on some operating systems:

Variable              Supported on
SC_PHYS_PAGES         Linux, macOS, FreeBSD, NetBSD, OpenBSD, Solaris/Illumos
SC_AVPHYS_PAGES       Linux, OpenBSD, Solaris/Illumos
SC_NPROCESSORS_CONF   Linux, macOS, FreeBSD, NetBSD, OpenBSD, Solaris/Illumos
SC_NPROCESSORS_ONLN   Linux, macOS, FreeBSD, NetBSD, OpenBSD, Solaris/Illumos
SC_UIO_MAXIOV         Linux

Usage

package main

import (
    "fmt"

    "github.com/tklauser/go-sysconf"
)

func main() {
    // Get the number of clock ticks per second; this returns the same value as C.sysconf(C._SC_CLK_TCK).
    clktck, err := sysconf.Sysconf(sysconf.SC_CLK_TCK)
    if err == nil {
        fmt.Printf("SC_CLK_TCK: %v\n", clktck)
    }
}
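
To fetch the package with the standard Go module tooling (a routine step, shown here for completeness):

    go get github.com/tklauser/go-sysconf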

References


Download Details:

Author: tklauser
Source Code: https://github.com/tklauser/go-sysconf 
License: BSD-3-Clause license

#go #golang #linux #unix #cgo 


Ventoy: A New Bootable USB Solution

Ventoy

Ventoy is an open source tool to create bootable USB drives for ISO/WIM/IMG/VHD(x)/EFI files.
With Ventoy, you don't need to format the disk over and over; you just copy the image files to the USB drive and boot them. You can copy many image files at a time, and Ventoy will give you a boot menu to select from.
You can also browse ISO/WIM/IMG/VHD(x)/EFI files on a local disk and boot them.
x86 Legacy BIOS, IA32 UEFI, x86_64 UEFI, ARM64 UEFI and MIPS64EL UEFI are supported in the same way.
Both MBR and GPT partition styles are supported in the same way.
Most types of OS are supported (Windows/WinPE/Linux/Unix/ChromeOS/VMware/Xen...).
1000+ ISO files have been tested (List). 90%+ of the distros on distrowatch.com are supported (Details).
 

Tested OS

Windows
Windows 7, Windows 8, Windows 8.1, Windows 10, Windows 11, Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, Windows Server 2019, Windows Server 2022, WinPE

Linux
Debian, Ubuntu, CentOS(6/7/8/9), RHEL(6/7/8/9), Deepin, Fedora, Rocky Linux, AlmaLinux, EuroLinux(6/7/8/9), openEuler, OpenAnolis, SLES, openSUSE, MX Linux, Manjaro, Linux Mint, Endless OS, Elementary OS, Solus, Linx, Zorin, antiX, PClinuxOS, Arch, ArcoLinux, ArchLabs, BlackArch, Obarun, Artix Linux, Puppy Linux, Tails, Slax, Kali, Mageia, Slackware, Q4OS, Archman, Gentoo, Pentoo, NixOS, Kylin, openKylin, Ubuntu Kylin, KylinSec, Lubuntu, Xubuntu, Kubuntu, Ubuntu MATE, Ubuntu Budgie, Ubuntu Studio, Bluestar, OpenMandriva, ExTiX, Netrunner, ALT Linux, Nitrux, Peppermint, KDE neon, Linux Lite, Parrot OS, Qubes, Pop OS, ROSA, Void Linux, Star Linux, EndeavourOS, MakuluLinux, Voyager, Feren, ArchBang, LXLE, Knoppix, Calculate Linux, Clear Linux, Pure OS, Oracle Linux, Trident, Septor, Porteus, Devuan, GoboLinux, 4MLinux, Simplicity Linux, Zeroshell, Android-x86, netboot.xyz, Slitaz, SuperGrub2Disk, Proxmox VE, Kaspersky Rescue, SystemRescueCD, MemTest86, MemTest86+, MiniTool Partition Wizard, Parted Magic, veket, Sabayon, Scientific, alpine, ClearOS, CloneZilla, Berry Linux, Trisquel, Ataraxia Linux, Minimal Linux Live, BackBox Linux, Emmabuntüs, ESET SysRescue Live,Nova Linux, AV Linux, RoboLinux, NuTyX, IPFire, SELKS, ZStack, Enso Linux, Security Onion, Network Security Toolkit, Absolute Linux, TinyCore, Springdale Linux, Frost Linux, Shark Linux, LinuxFX, Snail Linux, Astra Linux, Namib Linux, Resilient Linux, Virage Linux, Blackweb Security OS, R-DriveImage, O-O.DiskImage, Macrium, ToOpPy LINUX, GNU Guix, YunoHost, foxclone, siduction, Adelie Linux, Elive, Pardus, CDlinux, AcademiX, Austrumi, Zenwalk, Anarchy, DuZeru, BigLinux, OpenMediaVault, Ubuntu DP, Exe GNU/Linux, 3CX Phone System, KANOTIX, Grml, Karoshi, PrimTux, ArchStrike, CAELinux, Cucumber, Fatdog, ForLEx, Hanthana, Kwort, MiniNo, Redcore, Runtu, Asianux, Clu Linux Live, Uruk, OB2D, BlueOnyx, Finnix, HamoniKR, Parabola, LinHES, LinuxConsole, BEE free, Untangle, Pearl, Thinstation, TurnKey, tuxtrans, Neptune, HefftorLinux, GeckoLinux, Mabox Linux, Zentyal, Maui, Reborn OS, SereneLinux , SkyWave Linux, Kaisen Linux, Regata OS, TROM-Jaro, DRBL Linux, Chalet OS, Chapeau, Desa OS, BlankOn, OpenMamba, Frugalware, Kibojoe Linux, Revenge OS, Tsurugi Linux, Drauger OS, Hash Linux, gNewSense, Ikki Boot, SteamOS, Hyperbola, VyOS, EasyNAS, SuperGamer, Live Raizo, Swift Linux, RebeccaBlackOS, Daphile, CRUX, Univention, Ufficio Zero, Rescuezilla, Phoenix OS, Garuda Linux, Mll, NethServer, OSGeoLive, Easy OS, Volumio, FreedomBox, paldo, UBOS, Recalbox, batocera, Lakka, LibreELEC, Pardus Topluluk, Pinguy, KolibriOS, Elastix, Arya, Omoikane, Omarine, Endian Firewall, Hamara, Rocks Cluster, MorpheusArch, Redo, Slackel, SME Server, APODIO, Smoothwall, Dragora, Linspire, Secure-K OS, Peach OSI, Photon, Plamo, SuperX, Bicom, Ploplinux, HP SPP, LliureX, Freespire, DietPi, BOSS, Webconverger, Lunar, TENS, Source Mage, RancherOS, T2, Vine, Pisi, blackPanther, mAid, Acronis, Active.Boot, AOMEI, Boot.Repair, CAINE, DaRT, EasyUEFI, R-Drive, PrimeOS, Avira Rescue System, bitdefender, Checkra1n Linux, Lenovo Diagnostics, Clover, Bliss-OS, Lenovo BIOS Update, Arcabit Rescue Disk, MiyoLinux, TeLOS, Kerio Control, RED OS, OpenWrt, MocaccinoOS, EasyStartup, Pyabr, Refracta, Eset SysRescue, Linpack Xtreme, Archcraft, NHVBOOT, pearOS, SeaTools, Easy Recovery Essentional, iKuai, StorageCraft SCRE, ZFSBootMenu, TROMjaro, BunsenLabs, Todo en Uno, ChallengerOS, Nobara, Holo, CachyOS, Peux OS, ......

Unix
DragonFly FreeBSD pfSense GhostBSD FreeNAS TrueNAS XigmaNAS FuryBSD OPNsense HardenedBSD MidnightBSD ClonOS EmergencyBootKit

ChromeOS
FydeOS, CloudReady, ChromeOS Flex

Other
VMware ESXi, Citrix XenServer, Xen XCP-ng

Subscription Service

Ventoy is open source software under the GPLv3 license. But the Ventoy project needs to pay for server hosting, a domain name, bandwidth, many USB sticks for testing, large-capacity HDDs (for downloading ISO files) and so on.
For the better and sustainable development of Ventoy, I provide a subscription service.

Tested Image Report

How to report a successfully tested image file

Ventoy Browser

With Ventoy, you can also browse ISO/WIM/IMG/VHD(x)/EFI files on a local disk and boot them (see the notes in the documentation).

VentoyPlugson

VentoyPlugson: a GUI Ventoy plugin configurator.

Features

  • 100% open source
  • Simple to use
  • Fast (limited only by the speed of copying the ISO file)
  • Can be installed in USB/Local Disk/SSD/NVMe/SD Card
  • Directly boot from ISO/WIM/IMG/VHD(x)/EFI files, no extraction needed
  • Support to browse and boot ISO/WIM/IMG/VHD(x)/EFI files in local disk
  • ISO/WIM/IMG/VHD(x)/EFI files need not be contiguous on disk
  • MBR and GPT partition style supported (1.0.15+)
  • x86 Legacy BIOS, IA32 UEFI, x86_64 UEFI, ARM64 UEFI, MIPS64EL UEFI supported
  • IA32/x86_64 UEFI Secure Boot supported (1.0.07+)
  • Linux Persistence supported (1.0.11+)
  • Windows auto installation supported (1.0.09+)
  • Linux auto installation supported (1.0.09+)
  • Variables Expansion supported for Windows/Linux auto installation script
  • FAT32/exFAT/NTFS/UDF/XFS/Ext2(3)(4) supported for main partition
  • ISO files larger than 4GB supported
  • Menu alias, Menu tip message supported
  • Password protect supported
  • Native boot menu style for Legacy & UEFI
  • Most types of OS supported, 1000+ iso files tested
  • Linux vDisk boot supported
  • Supports not only booting but also the complete installation process
  • Menu dynamically switchable between List/TreeView mode
  • "Ventoy Compatible" concept
  • Plugin Framework and GUI plugin configurator
  • Injection of files into the runtime environment
  • Dynamic replacement of boot configuration files
  • Highly customizable theme and menu
  • USB drive write-protect support
  • Normal use of the USB drive is unaffected
  • Data is preserved during version upgrades
  • No need to update Ventoy when a new distro is released

Installation Instructions

See https://www.ventoy.net/en/doc_start.html for detailed instructions.

Compile Instructions

Please refer to BuildVentoyFromSource.txt

Document

Title                             Link
Install & Update                  https://www.ventoy.net/en/doc_start.html
Browse/Boot Files In Local Disk   https://www.ventoy.net/en/doc_browser.html
Secure Boot                       https://www.ventoy.net/en/doc_secure.html
Customize Theme                   https://www.ventoy.net/en/plugin_theme.html
Global Control                    https://www.ventoy.net/en/plugin_control.html
Image List                        https://www.ventoy.net/en/plugin_imagelist.html
Auto Installation                 https://www.ventoy.net/en/plugin_autoinstall.html
Injection Plugin                  https://www.ventoy.net/en/plugin_injection.html
Persistence Support               https://www.ventoy.net/en/plugin_persistence.html
Boot WIM file                     https://www.ventoy.net/en/plugin_wimboot.html
Windows VHD Boot                  https://www.ventoy.net/en/plugin_vhdboot.html
Linux vDisk Boot                  https://www.ventoy.net/en/plugin_vtoyboot.html
DUD Plugin                        https://www.ventoy.net/en/plugin_dud.html
Password Plugin                   https://www.ventoy.net/en/plugin_password.html
Conf Replace Plugin               https://www.ventoy.net/en/plugin_bootconf_replace.html
Menu Class                        https://www.ventoy.net/en/plugin_menuclass.html
Menu Alias                        https://www.ventoy.net/en/plugin_menualias.html
Menu Extension                    https://www.ventoy.net/en/plugin_grubmenu.html
Memdisk Mode                      https://www.ventoy.net/en/doc_memdisk.html
TreeView Mode                     https://www.ventoy.net/en/doc_treeview.html
Disk Layout MBR                   https://www.ventoy.net/en/doc_disk_layout.html
Disk Layout GPT                   https://www.ventoy.net/en/doc_disk_layout_gpt.html
Search Configuration              https://www.ventoy.net/en/doc_search_path.html

FAQ

See https://www.ventoy.net/en/faq.html for details.

Forum

https://forums.ventoy.net

Official Website: https://www.ventoy.net

Download Details:

Author: Ventoy
Source Code: https://github.com/ventoy/Ventoy 
License: GPL-3.0 license

#linux #windows #unix #usb 


Master The Command Line, in one Page

The Art of Command Line

Note: I'm planning to revise this and looking for a new co-author to help with expanding this into a more comprehensive guide. While it's very popular, it could be broader and a bit deeper. If you like to write and are close to being an expert on this material and willing to consider helping, please drop me a note at josh (0x40) holloway.com. –jlevy, Holloway. Thank you!

curl -s 'https://raw.githubusercontent.com/jlevy/the-art-of-command-line/master/README.md' | egrep -o '\w+' | tr -d '`' | cowsay -W50

Fluency on the command line is a skill often neglected or considered arcane, but it improves your flexibility and productivity as an engineer in both obvious and subtle ways. This is a selection of notes and tips on using the command-line that we've found useful when working on Linux. Some tips are elementary, and some are fairly specific, sophisticated, or obscure. This page is not long, but if you can use and recall all the items here, you know a lot.

This work is the result of many authors and translators. Some of this originally appeared on Quora, but it has since moved to GitHub, where people more talented than the original author have made numerous improvements. Please submit a question if you have a question related to the command line. Please contribute if you see an error or something that could be better!

Meta

Scope:

  • This guide is for both beginners and experienced users. The goals are breadth (everything important), specificity (give concrete examples of the most common case), and brevity (avoid things that aren't essential or digressions you can easily look up elsewhere). Every tip is essential in some situation or significantly saves time over alternatives.
  • This is written for Linux, with the exception of the "macOS only" and "Windows only" sections. Many of the other items apply or can be installed on other Unices or macOS (or even Cygwin).
  • The focus is on interactive Bash, though many tips apply to other shells and to general Bash scripting.
  • It includes both "standard" Unix commands as well as ones that require special package installs -- so long as they are important enough to merit inclusion.

Notes:

  • To keep this to one page, content is implicitly included by reference. You're smart enough to look up more detail elsewhere once you know the idea or command to Google. Use apt, yum, dnf, pacman, pip or brew (as appropriate) to install new programs.
  • Use Explainshell to get a helpful breakdown of what commands, options, pipes etc. do.

Basics

Learn basic Bash. Actually, type man bash and at least skim the whole thing; it's pretty easy to follow and not that long. Alternate shells can be nice, but Bash is powerful and always available (learning only zsh, fish, etc., while tempting on your own laptop, restricts you in many situations, such as using existing servers).

Learn at least one text-based editor well. The nano editor is one of the simplest for basic editing (opening, editing, saving, searching). However, for the power user in a text terminal, there is no substitute for Vim (vi), the hard-to-learn but venerable, fast, and full-featured editor. Many people also use the classic Emacs, particularly for larger editing tasks. (Of course, any modern software developer working on an extensive project is unlikely to use only a pure text-based editor and should also be familiar with modern graphical IDEs and tools.)

Finding documentation:

  • Know how to read official documentation with man (for the inquisitive, man man lists the section numbers, e.g. 1 is "regular" commands, 5 is files/conventions, and 8 is administration). Find man pages with apropos.
  • Know that some commands are not executables, but Bash builtins, and that you can get help on them with help and help -d. You can find out whether a command is an executable, shell builtin or an alias by using type command.
  • curl cheat.sh/command will give a brief "cheat sheet" with common examples of how to use a shell command.

Learn about redirection of output and input using > and < and pipes using |. Know > overwrites the output file and >> appends. Learn about stdout and stderr.
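
A few illustrative combinations (file names here are placeholders):

      some-command > out.txt         # stdout overwrites out.txt
      some-command >> out.txt        # stdout appends to out.txt
      some-command 2> err.txt        # stderr goes to err.txt
      some-command < in.txt          # stdin read from in.txt
      some-command 2>&1 | less       # fold stderr into stdout, then page it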

Learn about file glob expansion with * (and perhaps ? and [...]) and quoting and the difference between double " and single ' quotes. (See more on variable expansion below.)

Be familiar with Bash job management: &, ctrl-z, ctrl-c, jobs, fg, bg, kill, etc.

Know ssh, and the basics of passwordless authentication, via ssh-agent, ssh-add, etc.

Basic file management: ls and ls -l (in particular, learn what every column in ls -l means), less, head, tail and tail -f (or even better, less +F), ln and ln -s (learn the differences and advantages of hard versus soft links), chown, chmod, du (for a quick summary of disk usage: du -hs *). For filesystem management, df, mount, fdisk, mkfs, lsblk. Learn what an inode is (ls -i or df -i).

Basic network management: ip or ifconfig, dig, traceroute, route.

Learn and use a version control management system, such as git.

Know regular expressions well, and the various flags to grep/egrep. The -i, -o, -v, -A, -B, and -C options are worth knowing.

Learn to use apt-get, yum, dnf or pacman (depending on distro) to find and install packages. And make sure you have pip to install Python-based command-line tools (a few below are easiest to install via pip).

Everyday use

In Bash, use Tab to complete arguments or list all available commands and ctrl-r to search through command history (after pressing, type to search, press ctrl-r repeatedly to cycle through more matches, press Enter to execute the found command, or hit the right arrow to put the result in the current line to allow editing).

In Bash, use ctrl-w to delete the last word, and ctrl-u to delete the content from current cursor back to the start of the line. Use alt-b and alt-f to move by word, ctrl-a to move cursor to beginning of line, ctrl-e to move cursor to end of line, ctrl-k to kill to the end of the line, ctrl-l to clear the screen. See man readline for all the default keybindings in Bash. There are a lot. For example alt-. cycles through previous arguments, and alt-* expands a glob.

Alternatively, if you love vi-style key-bindings, use set -o vi (and set -o emacs to put it back).

For editing long commands, after setting your editor (for example export EDITOR=vim), ctrl-x ctrl-e will open the current command in an editor for multi-line editing. Or in vi style, escape-v.

To see recent commands, use history. Follow with !n (where n is the command number) to execute again. There are also many abbreviations you can use, the most useful probably being !$ for last argument and !! for last command (see "HISTORY EXPANSION" in the man page). However, these are often easily replaced with ctrl-r and alt-..
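
For example (the command number 42 is a placeholder; use whatever numbers history prints on your machine):

      history | tail -5     # the last five commands, with their numbers
      !42                   # re-run command number 42
      mkdir /tmp/new-dir
      cd !$                 # !$ expands to the last argument of the previous command (/tmp/new-dir)
      sudo !!               # re-run the previous command with sudo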

Go to your home directory with cd. Access files relative to your home directory with the ~ prefix (e.g. ~/.bashrc). In sh scripts refer to the home directory as $HOME.

To go back to the previous working directory: cd -.

If you are halfway through typing a command but change your mind, hit alt-# to add a # at the beginning and enter it as a comment (or use ctrl-a, #, enter). You can then return to it later via command history.

Use xargs (or parallel). It's very powerful. Note you can control how many items execute per line (-L) as well as parallelism (-P). If you're not sure if it'll do the right thing, use xargs echo first. Also, -I{} is handy. Examples:

      find . -name '*.py' | xargs grep some_function
      cat hosts | xargs -I{} ssh root@{} hostname

pstree -p is a helpful display of the process tree.

Use pgrep and pkill to find or signal processes by name (-f is helpful).

Know the various signals you can send processes. For example, to suspend a process, use kill -STOP [pid]. For the full list, see man 7 signal.
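
For example (1234 is a placeholder pid):

      kill -STOP 1234    # suspend the process (what ctrl-z sends to the foreground job)
      kill -CONT 1234    # resume it
      kill -TERM 1234    # polite termination request (the default signal)
      kill -KILL 1234    # forcible kill; the process cannot catch this or clean up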

Use nohup or disown if you want a background process to keep running forever.
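
For example (command and log names are placeholders):

      nohup some-long-job > job.log 2>&1 &   # survives the shell exiting; output goes to job.log
      some-long-job & disown                 # detach the just-started background job from the shell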

Check what processes are listening via netstat -lntp or ss -plat (for TCP; add -u for UDP) or lsof -iTCP -sTCP:LISTEN -P -n (which also works on macOS).

See also lsof and fuser for open sockets and files.

See uptime or w to know how long the system has been running.

Use alias to create shortcuts for commonly used commands. For example, alias ll='ls -latr' creates a new alias ll.

Save aliases, shell settings, and functions you commonly use in ~/.bashrc, and arrange for login shells to source it. This will make your setup available in all your shell sessions.

Put the settings of environment variables as well as commands that should be executed when you login in ~/.bash_profile. Separate configuration will be needed for shells you launch from graphical environment logins and cron jobs.
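
A common arrangement (one convention, not the only one) is for ~/.bash_profile to source ~/.bashrc:

      # in ~/.bash_profile
      if [ -f ~/.bashrc ]; then
          . ~/.bashrc
      fi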

Synchronize your configuration files (e.g. .bashrc and .bash_profile) among various computers with Git.

Understand that care is needed when variables and filenames include whitespace. Surround your Bash variables with quotes, e.g. "$FOO". Prefer the -0 or -print0 options to enable null characters to delimit filenames, e.g. locate -0 pattern | xargs -0 ls -al or find / -type d -print0 | xargs -0 ls -al. To iterate on filenames containing whitespace in a for loop, set your IFS to be a newline only using IFS=$'\n'.

In Bash scripts, use set -x (or the variant set -v, which logs raw input, including unexpanded variables and comments) for debugging output. Use strict modes unless you have a good reason not to: Use set -e to abort on errors (nonzero exit code). Use set -u to detect unset variable usages. Consider set -o pipefail too, to abort on errors within pipes (though read up on it more if you do, as this topic is a bit subtle). For more involved scripts, also use trap on EXIT or ERR. A useful habit is to start a script like this, which will make it detect and abort on common errors and print a message:

      set -euo pipefail
      trap "echo 'error: Script failed: see failed command above'" ERR
  • In Bash scripts, subshells (written with parentheses) are convenient ways to group commands. A common example is to temporarily move to a different working directory, e.g.
      # do something in current dir
      (cd /some/other/dir && other-command)
      # continue in original dir

In Bash, note there are lots of kinds of variable expansion. Checking a variable exists: ${name:?error message}. For example, if a Bash script requires a single argument, just write input_file=${1:?usage: $0 input_file}. Using a default value if a variable is empty: ${name:-default}. If you want to have an additional (optional) parameter added to the previous example, you can use something like output_file=${2:-logfile}. If $2 is omitted and thus empty, output_file will be set to logfile. Arithmetic expansion: i=$(( (i + 1) % 5 )). Sequences: {1..10}. Trimming of strings: ${var%suffix} and ${var#prefix}. For example if var=foo.pdf, then echo ${var%.pdf}.txt prints foo.txt.
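
A compact illustration of these, as they might appear at the top of a script (names are placeholders):

      input_file=${1:?usage: $0 input_file}   # abort with a usage message if $1 is missing
      output_file=${2:-logfile}               # fall back to "logfile" when $2 is empty
      i=3
      i=$(( (i + 1) % 5 ))                    # arithmetic expansion; i is now 4
      var=foo.pdf
      echo "${var%.pdf}.txt"                  # prints foo.txt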

Brace expansion using {...} can reduce having to re-type similar text and automate combinations of items. This is helpful in examples like mv foo.{txt,pdf} some-dir (which moves both files), cp somefile{,.bak} (which expands to cp somefile somefile.bak) or mkdir -p test-{a,b,c}/subtest-{1,2,3} (which expands all possible combinations and creates a directory tree). Brace expansion is performed before any other expansion.

The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and filename expansion. (For example, a range like {1..20} cannot be expressed with variables using {$a..$b}. Use seq or a for loop instead, e.g., seq $a $b or for((i=a; i<=b; i++)); do ... ; done.)

The output of a command can be treated like a file via <(some command) (known as process substitution). For example, compare local /etc/hosts with a remote one:

      diff /etc/hosts <(ssh somehost cat /etc/hosts)
  • When writing scripts you may want to put all of your code in curly braces. If the closing brace is missing, your script will be prevented from executing due to a syntax error. This makes sense when your script is going to be downloaded from the web, since it prevents partially downloaded scripts from executing:
{
      # Your code here
}
  • Know about "here documents" in Bash, as in:
cat <<EOF
input
on multiple lines
EOF

In Bash, redirect both standard output and standard error via: some-command >logfile 2>&1 or some-command &>logfile. Often, to ensure a command does not leave an open file handle to standard input, tying it to the terminal you are in, it is also good practice to add </dev/null.

Use man ascii for a good ASCII table, with hex and decimal values. For general encoding info, man unicode, man utf-8, and man latin1 are helpful.

Use screen or tmux to multiplex the screen, especially useful on remote ssh sessions and to detach and re-attach to a session. byobu can enhance screen or tmux by providing more information and easier management. A more minimal alternative for session persistence only is dtach.

In ssh, knowing how to port tunnel with -L or -D (and occasionally -R) is useful, e.g. to access web sites from a remote server.

It can be useful to make a few optimizations to your ssh configuration; for example, this ~/.ssh/config contains settings to avoid dropped connections in certain network environments, uses compression (which is helpful with scp over low-bandwidth connections), and multiplex channels to the same server with a local control file:

      TCPKeepAlive=yes
      ServerAliveInterval=15
      ServerAliveCountMax=6
      Compression=yes
      ControlMaster auto
      ControlPath /tmp/%r@%h:%p
      ControlPersist yes

A few other options relevant to ssh are security sensitive and should be enabled with care, e.g. per subnet or host or in trusted networks: StrictHostKeyChecking=no, ForwardAgent=yes

Consider mosh as an alternative to ssh that uses UDP, avoiding dropped connections and adding convenience on the road (requires server-side setup).

To get the permissions on a file in octal form, which is useful for system configuration but not available in ls and easy to bungle, use something like

      stat -c '%A %a %n' /etc/timezone

For interactive selection of values from the output of another command, use percol or fzf.

For interaction with files based on the output of another command (like git), use fpp (PathPicker).

For a simple web server for all files in the current directory (and subdirs), available to anyone on your network, use: python -m SimpleHTTPServer 7777 (for port 7777 and Python 2) and python -m http.server 7777 (for port 7777 and Python 3).

For running a command as another user, use sudo. Defaults to running as root; use -u to specify another user. Use -i to login as that user (you will be asked for your password).

For switching the shell to another user, use su username or su - username. The latter with "-" gets an environment as if another user just logged in. Omitting the username defaults to root. You will be asked for the password of the user you are switching to.

Know about the 128K limit on command lines. This "Argument list too long" error is common when wildcard matching large numbers of files. (When this happens alternatives like find and xargs may help.)
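
You can query the real limit on your system with getconf (on modern Linux it depends on the stack limit and is usually larger than 128K):

      getconf ARG_MAX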

For a basic calculator (and of course access to Python in general), use the python interpreter. For example,

>>> 2+3
5

Processing files and data

To locate a file by name in the current directory, find . -iname '*something*' (or similar). To find a file anywhere by name, use locate something (but bear in mind updatedb may not have indexed recently created files).

For general searching through source or data files, there are several options more advanced or faster than grep -r, including (in rough order from older to newer) ack, ag ("the silver searcher"), and rg (ripgrep).

To convert HTML to text: lynx -dump -stdin

For Markdown, HTML, and all kinds of document conversion, try pandoc. For example, to convert a Markdown document to Word format: pandoc README.md --from markdown --to docx -o temp.docx

If you must handle XML, xmlstarlet is old but good.

For JSON, use jq. For interactive use, also see jid and jiq.
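
A few illustrative jq invocations (file and field names are placeholders):

      jq . data.json                  # pretty-print the whole document
      jq -r '.name' data.json         # extract one field as raw text
      jq '.items[] | .id' data.json   # pull a field out of each element of an array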

For YAML, use shyaml.

For Excel or CSV files, csvkit provides in2csv, csvcut, csvjoin, csvgrep, etc.

For Amazon S3, s3cmd is convenient and s4cmd is faster. Amazon's aws and the improved saws are essential for other AWS-related tasks.

Know about sort and uniq, including uniq's -u and -d options -- see one-liners below. See also comm.

Know about cut, paste, and join to manipulate text files. Many people use cut but forget about join.

Know about wc to count newlines (-l), characters (-m), words (-w) and bytes (-c).

Know about tee to copy from stdin to a file and also to stdout, as in ls -al | tee file.txt.

For more complex calculations, including grouping, reversing fields, and statistical calculations, consider datamash.

Know that locale affects a lot of command line tools in subtle ways, including sorting order (collation) and performance. Most Linux installations will set LANG or other locale variables to a local setting like US English. But be aware sorting will change if you change locale. And know i18n routines can make sort or other commands run many times slower. In some situations (such as the set operations or uniqueness operations below) you can safely ignore slow i18n routines entirely and use traditional byte-based sort order, using export LC_ALL=C.

You can set a specific command's environment by prefixing its invocation with the environment variable settings, as in TZ=Pacific/Fiji date.

Know basic awk and sed for simple data munging. See One-liners for examples.

To replace all occurrences of a string in place, in one or more files:

      perl -pi.bak -e 's/old-string/new-string/g' my-files-*.txt
  • To rename multiple files and/or search and replace within files, try repren. (In some cases the rename command also allows multiple renames, but be careful as its functionality is not the same on all Linux distributions.)
      # Full rename of filenames, directories, and contents foo -> bar:
      repren --full --preserve-case --from foo --to bar .
      # Recover backup files whatever.bak -> whatever:
      repren --renames --from '(.*)\.bak' --to '\1' *.bak
      # Same as above, using rename, if available:
      rename 's/\.bak$//' *.bak
  • As the man page says, rsync really is a fast and extraordinarily versatile file copying tool. It's known for synchronizing between machines but is equally useful locally. When security restrictions allow, using rsync instead of scp allows recovery of a transfer without restarting from scratch. It also is among the fastest ways to delete large numbers of files:

mkdir empty && rsync -r --delete empty/ some-dir && rmdir some-dir

For monitoring progress when processing files, use pv, pycp, pmonitor, progress, rsync --progress, or, for block-level copying, dd status=progress.

Use shuf to shuffle or select random lines from a file.
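
For example:

      shuf lines.txt        # all lines in random order
      shuf -n 3 lines.txt   # pick 3 random lines
      shuf -i 1-100 -n 1    # one random integer between 1 and 100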

Know sort's options. For numbers, use -n, or -h for handling human-readable numbers (e.g. from du -h). Know how keys work (-t and -k). In particular, watch out that you need to write -k1,1 to sort by only the first field; -k1 means sort according to the whole line. Stable sort (sort -s) can be useful. For example, to sort first by field 2, then secondarily by field 1, you can use sort -k1,1 | sort -s -k2,2.

If you ever need to write a tab literal in a command line in Bash (e.g. for the -t argument to sort), press ctrl-v [Tab] or write $'\t' (the latter is better as you can copy/paste it).

The standard tools for patching source code are diff and patch. See also diffstat for summary statistics of a diff and sdiff for a side-by-side diff. Note diff -r works for entire directories. Use diff -r tree1 tree2 | diffstat for a summary of changes. Use vimdiff to compare and edit files.

For binary files, use hd, hexdump or xxd for simple hex dumps and bvi, hexedit or biew for binary editing.

Also for binary files, strings (plus grep, etc.) lets you find bits of text.

For binary diffs (delta compression), use xdelta3.

To convert text encodings, try iconv. Or uconv for more advanced use; it supports some advanced Unicode things. For example:

      # Displays hex codes or actual names of characters (useful for debugging):
      uconv -f utf-8 -t utf-8 -x '::Any-Hex;' < input.txt
      uconv -f utf-8 -t utf-8 -x '::Any-Name;' < input.txt
      # Lowercase and removes all accents (by expanding and dropping them):
      uconv -f utf-8 -t utf-8 -x '::Any-Lower; ::Any-NFD; [:Nonspacing Mark:] >; ::Any-NFC;' < input.txt > output.txt

To split files into pieces, see split (to split by size) and csplit (to split by a pattern).
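
For example (sizes, names, and the marker pattern are placeholders; '{*}' is a GNU csplit extension):

      split -b 100M big.img part-                # split into 100MB pieces: part-aa, part-ab, ...
      cat part-* > big.img                       # reassemble
      csplit server.log '/^-- MARK --$/' '{*}'   # split at every line matching the pattern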

Date and time: To get the current date and time in the helpful ISO 8601 format, use date -u +"%Y-%m-%dT%H:%M:%SZ" (other options are problematic). To manipulate date and time expressions, use dateadd, datediff, strptime etc. from dateutils.

Use zless, zmore, zcat, and zgrep to operate on compressed files.

File attributes are settable via chattr and offer a lower-level alternative to file permissions. For example, to protect against accidental file deletion, use the immutable flag: sudo chattr +i /critical/directory/or/file

Use getfacl and setfacl to save and restore file permissions. For example:

   getfacl -R /some/path > permissions.txt
   setfacl --restore=permissions.txt
  • To create empty files quickly, use truncate (creates sparse file), fallocate (ext4, xfs, btrfs and ocfs2 filesystems), xfs_mkfile (almost any filesystems, comes in xfsprogs package), mkfile (for Unix-like systems like Solaris, Mac OS).

System debugging

For web debugging, curl and curl -I are handy, or their wget equivalents, or the more modern httpie.

To know current cpu/disk status, the classic tools are top (or the better htop), iostat, and iotop. Use iostat -mxz 15 for basic CPU and detailed per-partition disk stats and performance insight.

For network connection details, use netstat and ss.

For a quick overview of what's happening on a system, dstat is especially useful. For broadest overview with details, use glances.

To know memory status, run and understand the output of free and vmstat. In particular, be aware the "cached" value is memory held by the Linux kernel as file cache, so effectively counts toward the "free" value.

Java system debugging is a different kettle of fish, but a simple trick on Oracle's and some other JVMs is that you can run kill -3 <pid> and a full stack trace and heap summary (including generational garbage collection details, which can be highly informative) will be dumped to stderr/logs. The JDK's jps, jstat, jstack, jmap are useful. SJK tools are more advanced.

Use mtr as a better traceroute, to identify network issues.

For looking at why a disk is full, ncdu saves time over the usual commands like du -sh *.

To find which socket or process is using bandwidth, try iftop or nethogs.

The ab tool (comes with Apache) is helpful for quick-and-dirty checking of web server performance. For more complex load testing, try siege.

For more serious network debugging, wireshark, tshark, or ngrep.

Know about strace and ltrace. These can be helpful if a program is failing, hanging, or crashing, and you don't know why, or if you want to get a general idea of performance. Note the profiling option (-c), and the ability to attach to a running process (-p). Use trace child option (-f) to avoid missing important calls.

Know about ldd to check shared libraries etc., but never run it on untrusted files.

Know how to connect to a running process with gdb and get its stack traces.

Use /proc. It's amazingly helpful sometimes when debugging live problems. Examples: /proc/cpuinfo, /proc/meminfo, /proc/cmdline, /proc/xxx/cwd, /proc/xxx/exe, /proc/xxx/fd/, /proc/xxx/smaps (where xxx is the process id or pid).

When debugging why something went wrong in the past, sar can be very helpful. It shows historic statistics on CPU, memory, network, etc.
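
For example (illustrative invocations; the location of the recorded saDD files varies by distro):

      sar -u 1 3                     # live CPU utilization, 3 samples at 1-second intervals
      sar -r -f /var/log/sa/sa15     # memory stats recorded on the 15th (path is distro-dependent)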

For deeper systems and performance analyses, look at stap (SystemTap), perf, and sysdig.

Check what OS you're on with uname or uname -a (general Unix/kernel info) or lsb_release -a (Linux distro info).

Use dmesg whenever something's acting really funny (it could be hardware or driver issues).

If you delete a file and it doesn't free up expected disk space as reported by du, check whether the file is in use by a process: lsof | grep deleted | grep "filename-of-my-big-file"

One-liners

A few examples of piecing together commands:

  • It is remarkably helpful sometimes that you can do set intersection, union, and difference of text files via sort/uniq. Suppose a and b are text files that are already uniqued. This is fast, and works on files of arbitrary size, up to many gigabytes. (Sort is not limited by memory, though you may need to use the -T option if /tmp is on a small root partition.) See also the note about LC_ALL above and sort's -u option (left out for clarity below).
      sort a b | uniq > c   # c is a union b
      sort a b | uniq -d > c   # c is a intersect b
      sort a b b | uniq -u > c   # c is set difference a - b
  • Pretty-print two JSON files, normalizing their syntax, then coloring and paginating the result:
      diff <(jq --sort-keys . < file1.json) <(jq --sort-keys . < file2.json) | colordiff | less -R

Use grep . * to quickly examine the contents of all files in a directory (so each line is paired with the filename), or head -100 * (so each file has a heading). This can be useful for directories filled with config settings like those in /sys, /proc, /etc.

Summing all numbers in the third column of a text file (this is probably 3X faster and 3X less code than equivalent Python):

     awk '{ x += $3 } END { print x }' myfile
  • To see sizes/dates on a tree of files, this is like a recursive ls -l but is easier to read than ls -lR:
     find . -type f -ls
  • Say you have a text file, like a web server log, and a certain value that appears on some lines, such as an acct_id parameter that is present in the URL. If you want a tally of how many requests for each acct_id:
      egrep -o 'acct_id=[0-9]+' access.log | cut -d= -f2 | sort | uniq -c | sort -rn

To continuously monitor changes, use watch, e.g. check changes to files in a directory with watch -d -n 2 'ls -rtlh | tail' or to network settings while troubleshooting your wifi settings with watch -d -n 2 ifconfig.

Run this function to get a random tip from this document (parses Markdown and extracts an item):

      function taocl() {
        curl -s https://raw.githubusercontent.com/jlevy/the-art-of-command-line/master/README.md |
          sed '/cowsay[.]png/d' |
          pandoc -f markdown -t html |
          xmlstarlet fo --html --dropdtd |
          xmlstarlet sel -t -v "(html/body/ul/li[count(p)>0])[$RANDOM mod last()+1]" |
          xmlstarlet unesc | fmt -80 | iconv -t US
      }

Obscure but useful

expr: perform arithmetic or boolean operations or evaluate regular expressions

m4: simple macro processor

yes: print a string a lot

cal: nice calendar

env: run a command (useful in scripts)

printenv: print out environment variables (useful in debugging and scripts)

look: find English words (or lines in a file) beginning with a string

cut, paste and join: data manipulation

fmt: format text paragraphs

pr: format text into pages/columns

fold: wrap lines of text

column: format text fields into aligned, fixed-width columns or tables

expand and unexpand: convert between tabs and spaces

nl: add line numbers

seq: print numbers

bc: calculator

factor: factor integers

gpg: encrypt and sign files

toe: table of terminfo entries

nc: network debugging and data transfer

socat: socket relay and tcp port forwarder (similar to netcat)

slurm: network traffic visualization

dd: moving data between files or devices

file: identify type of a file

tree: display directories and subdirectories as a nesting tree; like ls but recursive

stat: file info

time: execute and time a command

timeout: execute a command with a time limit, stopping the process when the limit expires

lockfile: create semaphore file that can only be removed by rm -f

logrotate: rotate, compress and mail logs.

watch: run a command repeatedly, showing results and/or highlighting changes

when-changed: runs any command you specify whenever it sees a file change. See inotifywait and entr as well.

tac: print files in reverse

comm: compare sorted files line by line

strings: extract text from binary files

tr: character translation or manipulation

iconv or uconv: conversion for text encodings

split and csplit: splitting files

sponge: read all input before writing it, useful for reading from then writing to the same file, e.g., grep -v something some-file | sponge some-file

units: unit conversions and calculations; converts furlongs per fortnight to twips per blink (see also /usr/share/units/definitions.units)

apg: generates random passwords

xz: high-ratio file compression

ldd: dynamic library info

nm: symbols from object files

ab or wrk: benchmarking web servers

strace: system call debugging

mtr: better traceroute for network debugging

cssh: visual concurrent shell

rsync: sync files and folders over SSH or in local file system

wireshark and tshark: packet capture and network debugging

ngrep: grep for the network layer

host and dig: DNS lookups

lsof: process file descriptor and socket info

dstat: useful system stats

glances: high level, multi-subsystem overview

iostat: disk usage stats

mpstat: CPU usage stats

vmstat: memory usage stats

htop: improved version of top

last: login history

w: who's logged on

id: user/group identity info

sar: historic system stats

iftop or nethogs: network utilization by socket or process

ss: socket statistics

dmesg: boot and system error messages

sysctl: view and configure Linux kernel parameters at run time

hdparm: SATA/ATA disk manipulation/performance

lsblk: list block devices: a tree view of your disks and disk partitions

lshw, lscpu, lspci, lsusb, dmidecode: hardware information, including CPU, BIOS, RAID, graphics, devices, etc.

lsmod and modinfo: List and show details of kernel modules.

fortune, ddate, and sl: um, well, it depends on whether you consider steam locomotives and Zippy quotations "useful"

macOS only

These are items relevant only on macOS.

Package management with brew (Homebrew) and/or port (MacPorts). These can be used to install on macOS many of the above commands.

Copy output of any command to a desktop app with pbcopy and paste input from one with pbpaste.

To enable the Option key in macOS Terminal as an alt key (such as used in the commands above like alt-b, alt-f, etc.), open Preferences -> Profiles -> Keyboard and select "Use Option as Meta key".

To open a file with a desktop app, use open or open -a /Applications/Whatever.app.

Spotlight: Search files with mdfind and list metadata (such as photo EXIF info) with mdls.

Be aware macOS is based on BSD Unix, and many commands (for example ps, ls, tail, awk, sed) have many subtle variations from Linux, which is largely influenced by System V-style Unix and GNU tools. You can often tell the difference by noting a man page has the heading "BSD General Commands Manual." In some cases GNU versions can be installed, too (such as gawk and gsed for GNU awk and sed). If writing cross-platform Bash scripts, avoid such commands (for example, consider Python or perl) or test carefully.

To get macOS release information, use sw_vers.

Windows only

These items are relevant only on Windows.

Ways to obtain Unix tools under Windows

Access the power of the Unix shell under Microsoft Windows by installing Cygwin. Most of the things described in this document will work out of the box.

On Windows 10, you can use Windows Subsystem for Linux (WSL), which provides a familiar Bash environment with Unix command line utilities.

If you mainly want to use GNU developer tools (such as GCC) on Windows, consider MinGW and its MSYS package, which provides utilities such as bash, gawk, make and grep. MSYS doesn't have all of Cygwin's features. MinGW is particularly useful for creating native Windows ports of Unix tools.

Another option to get Unix look and feel under Windows is Cash. Note that only very few Unix commands and command-line options are available in this environment.

Useful Windows command-line tools

You can perform and script most Windows system administration tasks from the command line by learning and using wmic.

Native command-line Windows networking tools you may find useful include ping, ipconfig, tracert, and netstat.

You can perform many useful Windows tasks by invoking the Rundll32 command.

Cygwin tips and tricks

Install additional Unix programs with Cygwin's package manager.

Use mintty as your command-line window.

Access the Windows clipboard through /dev/clipboard.

Run cygstart to open an arbitrary file through its registered application.

Access the Windows registry with regtool.

Note that a C:\ Windows drive path becomes /cygdrive/c under Cygwin, and that Cygwin's / appears under C:\cygwin on Windows. Convert between Cygwin and Windows-style file paths with cygpath. This is most useful in scripts that invoke Windows programs.

More resources

Disclaimer

With the exception of very small tasks, code is written so others can read it. With power comes responsibility. The fact you can do something in Bash doesn't necessarily mean you should! ;)


Download Details:

Author: jlevy
Source Code: https://github.com/jlevy/the-art-of-command-line 
License: Attribution-ShareAlike 4.0 International

#windows #macos #linux #bash #documentation #unix 

Socket: Non-blocking socket and TLS functionality for PHP based on Amp

Socket

amphp/socket is a socket library for establishing and encrypting non-blocking sockets in PHP, based on Amp.

Installation

This package can be installed as a Composer dependency.

composer require amphp/socket

Documentation

Documentation can be found on amphp.org as well as in the ./docs directory.

Examples

You can find more examples in the ./examples directory.

Client Example

<?php // basic (and dumb) HTTP client

require __DIR__ . '/../vendor/autoload.php';

// This is a very simple HTTP client that just prints the response without parsing.
// league/uri-schemes required for this example.

use Amp\ByteStream;
use Amp\Loop;
use Amp\Socket\ClientTlsContext;
use Amp\Socket\ConnectContext;
use Amp\Socket\EncryptableSocket;
use League\Uri;
use function Amp\Socket\connect;

Loop::run(static function () use ($argv) {
    $stdout = ByteStream\getStdout();

    if (\count($argv) !== 2) {
        yield $stdout->write('Usage: examples/simple-http-client.php <url>' . PHP_EOL);
        exit(1);
    }

    $uri = Uri\Http::createFromString($argv[1]);
    $host = $uri->getHost();
    $port = $uri->getPort() ?? ($uri->getScheme() === 'https' ? 443 : 80);
    $path = $uri->getPath() ?: '/';

    $connectContext = (new ConnectContext)
        ->withTlsContext(new ClientTlsContext($host));

    /** @var EncryptableSocket $socket */
    $socket = yield connect($host . ':' . $port, $connectContext);

    if ($uri->getScheme() === 'https') {
        yield $socket->setupTls();
    }

    yield $socket->write("GET {$path} HTTP/1.1\r\nHost: $host\r\nConnection: close\r\n\r\n");

    while (null !== $chunk = yield $socket->read()) {
        yield $stdout->write($chunk);
    }

    // If the promise returned from `read()` resolves to `null`, the socket closed and we're done.
    // In this case you can also use `yield Amp\ByteStream\pipe($socket, $stdout)` instead,
    // but we want to demonstrate the `read()` method here.
});

Server Example

<?php // basic (and dumb) HTTP server

require __DIR__ . '/../vendor/autoload.php';

// This is a very simple HTTP server that just prints a message to each client that connects.
// It doesn't check whether the client sent an HTTP request.

// You might notice that your browser opens several connections instead of just one,
// even when only making one request.

use Amp\Loop;
use Amp\Socket\ResourceSocket;
use Amp\Socket\Server;
use function Amp\asyncCoroutine;

Loop::run(static function () {
    $clientHandler = asyncCoroutine(static function (ResourceSocket $socket) {
        $address = $socket->getRemoteAddress();
        $ip = $address->getHost();
        $port = $address->getPort();

        echo "Accepted connection from {$address}." . PHP_EOL;

        $body = "Hey, your IP is {$ip} and your local port used is {$port}.";
        $bodyLength = \strlen($body);

        $req = "HTTP/1.1 200 OK\r\nConnection: close\r\nContent-Length: {$bodyLength}\r\n\r\n{$body}";
        yield $socket->end($req);
    });

    $server = Server::listen('127.0.0.1:0');

    echo 'Listening for new connections on ' . $server->getAddress() . ' ...' . PHP_EOL;
    echo 'Open your browser and visit http://' . $server->getAddress() . '/' . PHP_EOL;

    while ($socket = yield $server->accept()) {
        $clientHandler($socket);
    }
});

Security

If you discover any security related issues, please email me@kelunik.com instead of using the issue tracker.

Download Details:

Author: Amphp
Source Code: https://github.com/amphp/socket 
License: MIT license

#php #client #socket #unix 


GWA_tutorial: A Comprehensive Tutorial About GWAS and PRS

GWA tutorial

This GitHub repository provides several tutorials about techniques used to analyze genetic data. Underneath this README we have provided a step-by-step guide to help researchers without experience in Unix complete these tutorials successfully. For researchers familiar with Unix, this README will probably be sufficient.

We have made scripts available for:

  • All essential GWAS QC steps along with scripts for data visualization.
  • Dealing with population stratification, using 1000 genomes as a reference.
  • Association analyses of GWAS data.
  • Polygenic risk score (PRS) analyses.

The scripts downloadable from this GitHub page can be seen purely as tutorials and used for educational purposes, but they can also be used as a template for analyzing your own data. All scripts/tutorials from this GitHub page use freely downloadable data; the commands to download the necessary data can be found in the scripts.

Content:

  • 1_QC_GWAS.zip
  • 2_Population_stratification.zip
  • 3_Association_GWAS
  • 4_ PRS.doc

How to use the tutorials on this page: The tutorials are designed to run on a UNIX/Linux computer/server. The first 3 tutorials contain both *.text and *.R scripts. The main scripts for performing these tutorials are the *.text scripts (respectively, for the first 3 tutorials: 1_Main_script_QC_GWAS.txt, 2_Main_script_MDS.txt, and 3_Main_script_association_GWAS.txt). These scripts will execute the *.R scripts when those are placed in the same directory. Note: without placing all files belonging to a specific tutorial in the same directory, the tutorials cannot be completed. Furthermore, the first 3 tutorials are not independent; they should be followed in the order given above, according to their number. For example, the files generated at the end of tutorial 1 are essential for performing tutorial 2. Therefore, those files should be moved/copied to the directory in which tutorial 2 is executed. In addition, the files from tutorial 2 are essential for tutorial 3. The fourth tutorial (4_ PRS.doc) is an MS Word document and runs independently of the previous 3 tutorials.

All scripts are developed for UNIX/Linux computer resources, and all commands should be typed/pasted at the shell prompt.

Note: the *.zip files contain multiple files; to successfully complete the tutorials it is essential to download all files from the *.zip files and upload them to your working directory. To pull all tutorials to your computer, simply use the following command: git clone https://github.com/MareesAT/GWA_tutorial.git . Alternatively, you can manually download the *.zip folders and the PRS.doc file by clicking on the folder/file, followed by clicking on "View Raw".

Contact:

Please email Andries Marees (a.t.marees@vu.nl) for questions

Additional material

Once you have completed the current tutorial, we recommend visiting https://github.com/AngelaMinaVargas/eMAGMA-tutorial . That GitHub repository guides you through the steps to use eMAGMA.
eMAGMA is a post-GWAS analysis that conducts eQTL-informed gene-based tests by assigning SNPs to tissue-specific eGenes.


Step-by-step-guide for this tutorial

Step-by-step guide for researchers new to Unix and/or genetic analyses.

Introduction

The tutorial consists of four separate parts. The first three are dependent on each other and can only be performed in consecutive order, starting with the first (1_QC_GWAS.zip), then the second (2_Population_stratification.zip), followed by the third (3_Association_GWAS). The fourth part (4_ PRS.doc) can be performed independently.

The Unix commands provided in this guide should be typed/copy-and-pasted after the prompt ($ or >) on your Unix machine. Note: the ">" in front of the commands should not be copy-and-pasted, only what comes after it.

We assume that you have read the accompanying article "A tutorial on conducting Genome-Wide-Association Studies: Quality control and statistical analysis" (https://www.ncbi.nlm.nih.gov/pubmed/29484742), which should provide you with a basic theoretical understanding of the type of analyses covered in this tutorial.

This step-by-step guide serves researchers who have no or very little experience with Unix, helping them through the Unix commands in preparation for the tutorial.

Preparation

Step 1) The current set of tutorials on this GitHub page are based on a GNU/Linux-based computer, therefore:

  • Make sure you have access to a GNU/Linux-based computer resource.
  • Create a directory where you plan to conduct the analysis.

Execute the command below (copy-and-paste it without the prompt > and without the {}).

mkdir {name_for_your_directory}


 

Step 2) Download the files from the GitHub page

Change the directory of your Unix machine to the created directory from step 1.

Execute the command below

cd /home/{user}/{path/name_for_your_directory}
git clone https://github.com/MareesAT/GWA_tutorial.git

Unzip the folder of the first tutorial and move into the newly created directory.

Execute the commands below

unzip 1_QC_GWAS.zip
cd 1_QC_GWAS


 

Step 3) This tutorial requires the open-source programming language R and the open-source whole-genome association analysis toolset PLINK version 1.07 (all commands also work with PLINK2). If these programs are not already installed on your computer, they can be downloaded respectively from: https://www.r-project.org/ http://zzz.bwh.harvard.edu/plink/ https://www.cog-genomics.org/plink2

We recommend using the newest versions. These websites will guide you through the installation process.

Congratulations, everything is set up to start the tutorial!

Execution of tutorial 1

Step 4) Once you've created a directory in which you have downloaded and unzipped the folder 1_QC_GWAS.zip, you are ready to start the first part of the actual tutorial. All steps of this tutorial will be executed using the commands from the main script, 1_Main_script_QC_GWAS.txt; the only thing necessary to complete the tutorial is to copy-and-paste the commands from the main script at the prompt of your Unix device. Note: make sure you are in the directory containing all files, which is the directory after the last command of step 2. There is no need to open the other files manually.

There are two ways to use the main script:

Option 1

  • If you are a novice user, we recommend opening 1_Main_script_QC_GWAS.txt in WordPad or Notepad on your Windows computer.

Option 2

Alternatively, 1_Main_script_QC_GWAS.txt can be opened using a Unix text editor, for example vi.

Open the main script with vi :

vi 1_Main_script_QC_GWAS.txt

This enables you to read the script within the Unix environment and copy the command lines from it.

To exit vi and return to your directory use:

:q

From there, using either option 1 or 2, you can read the information given at every step of the script "1_Main_script_QC_GWAS.txt" and copy-paste the commands after the prompt on your Unix machine.

Note: if R or PLINK are installed in a directory other than your working directory, please specify the path to the executables in the given script. Alternatively, you can copy the executables of the programs to your working directory, for example by using cp {path/program name} {path/directory}. However, when using a cluster computer, commands such as "module load plink" and "module load R" will suffice, regardless of directory.

For more information on using R and PLINK in a Unix/Linux environment we refer to: http://zzz.bwh.harvard.edu/plink/download.shtml#nixs

Execution of tutorials 2 & 3

Unzip the tutorial folder of choice as described in step 2.

Use the output file from the last tutorial as input for the tutorial you want to start.

The command below can be used to copy a file to another directory:

cp {path/directory/file} {path/directory}

Use 2_Main_script_MDS.txt for the second tutorial and 3_Main_script_association_GWAS.txt for the third tutorial.

Execution of tutorial 4

4_ PRS.doc works independently of the other tutorials. After downloading 4_ PRS.doc, you can run the script in a directory of your choice; no unzipping is needed.

Download Details:

Author: MareesAT
Source Code: https://github.com/MareesAT/GWA_tutorial 

#r #unix 


Unix Tutorial for Beginners

This tutorial will teach you about various Unix / Linux commands, processes, and scripts, along with the Unix architecture. It covers the Unix knowledge required for Software Testers to manually work with and automate various UNIX / LINUX processes.

With this knowledge you can achieve end-to-end test automation (if your application involves any Unix processes).

The course also covers automation of various Unix processes, such as executing shell scripts and sending or receiving files to/from a Unix or Linux server, so that you can incorporate this into your test automation framework and achieve end-to-end test automation.

What you’ll learn

  •        All Manual Unix Concepts required for Software Testers
  •        How to Automate the various Unix Processes, so as to achieve end to end test automation.
  •        Learn about PuTTY and WinSCP
  •        Learn about the Java SSH library (JSch) - to automate Unix processes

Are there any course requirements or prerequisites?

  •        Good to have Basic Java knowledge

Who this course is for:

  •        All Manual Testers
  •        Automation testers who wish to learn how to automate Unix processes
  •        Anyone who wishes to start their career as a software tester

#unix #linux

Unix Tutorial for Beginners
Felix Kling

Linux (Unix) Tutorial for Software QA Testers

A beginners' course on UNIX / Linux for SOFTWARE QA TESTERS, developers, and programmers: Linux commands, the vi editor, ftp commands, shell scripts, and stopping and shutting down web and app servers, all geared toward software QA testing training.

This course is designed to teach Software QA Testers how to execute common commands like ps, grep, and find, how to start and shut down web servers and app servers, and how to use the vi editor and ftp commands. It also gives a brief idea of shell scripting: how to write an if condition and a for loop, how to execute a shell script, and more.

As a Software QA Tester, if you find a defect, don't go to the developer without detailed information about the issue, and don't create a defect / bug report without that detail. Software QA Testers need to do root-cause analysis to find the error messages on the UNIX / Linux server where the application is running.

Please check the log files and do some root-cause analysis before you create a defect or before you talk to the developer.

As a QA Tester you need to learn how to check the log files, restart app and web servers, and so on.
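
As a minimal sketch of that kind of root-cause analysis (the log path and search term are illustrative, not part of the course):

# Follow the application log live while reproducing the issue:
tail -f /var/log/myapp/application.log
# Search the log for recent error messages:
grep -i "exception" /var/log/myapp/application.log | tail -20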

You should at least understand the basics of shell scripting: if you need to modify an existing script, you need a basic idea of how it works. A Software QA Tester also needs to know the vi editor to create or modify files.

This Linux / Unix course will help the student to learn all these.

UNIX / Linux Training for Software QA Tester is Most Practical, simple and Inexpensive Course.

It is included most of the information to handle UNIX / Linux as QA / Quality assurance Software Tester.

This software testing QA training (Linux / UNIX training for Software QA Testers) is designed by working professionals to take the student from the basics of Linux / Unix through checking log files, starting and shutting down servers, finding files, searching within files using the grep command, the vi editor, file permissions, and executing shell scripts.

Who is the target audience for this course?

  •    The UNIX / Linux operating system is the most widely used platform for running web servers, app servers, and databases. So if you are a Quality Assurance (QA) Software Tester / test analyst, junior developer, test consultant, designer, test lead, test manager, QA lead or manager, business analyst, QA engineer, fresh graduate, or a student interested in learning about the Linux / UNIX operating system, you can enroll in this course.
  •    If you are an experienced Software QA Tester but don't know how to handle UNIX / Linux, then enroll in this course.

What you’ll learn

  •    At the end of the course, students are able to connect to Linux, execute common Linux commands, start and stop servers in Linux, use the vi editor, and more.

Are there any course requirements or prerequisites?

  •    Students should already have some knowledge of QA testing.
  •    General knowledge about operating systems

Who this course is for:

  •    QA Testers
  •    Anyone interested in learning the basics of UNIX / Linux commands

#unix #linux #testing

Linux (Unix) Tutorial for Software QA Testers
Louis Jones

Unix Commands for Beginners

Learn about Unix Commands from basic to advanced with examples in this tutorial for beginners

Have you always wanted to learn to use Unix commands but didn't know where to begin?

With this course, even a non-technical person having no prior knowledge about the Unix Command line and Unix Commands can learn to use them efficiently in no time.

We have not taken any shortcuts: every step is explained in detail so that you can learn efficiently and apply the commands in practice.

In this course, We have taught basic as well as advanced Unix commands in a well-structured manner so that you can learn to use them easily and efficiently.

If you are looking to start using the Unix command line, or have already started, then this course is for you.

This course will teach you how you can make use of Unix commands to get the job done in no time.

This course on Introduction to Unix Commands walks through every required step in a well-structured manner, with the detailed instructions you need to get started with the different Unix commands. Looking forward to seeing you in the course.

Start Upskilling yourself by Learning to use different Unix Commands Today!

What you’ll learn

  •        This is an introduction course to the Unix Command Line.
  •        This course will provide a solid foundation for working with Unix Commands.
  •        Different operations on files as well as directories using Unix Commands
  •        A wide range of commonly used and advanced Unix Commands

Are there any course requirements or prerequisites?

  •        No Command Line / Unix Commands experience needed. You will learn everything you need to know.

Who this course is for:

  •        Anyone who wants to learn to use Unix Commands.

#unix #commandline #linux

Unix Commands for Beginners
Waylon Bruen

Lwc: A Live-updating Version Of The UNIX Wc Command

lwc

A live-updating version of the UNIX wc command.

demo.gif

Installation

You can get a prebuilt binary for every major platform from the Releases page. Just extract it somewhere under your PATH and you're good to go.

Alternatively, use go get to build from source:

go get -u github.com/timdp/lwc/cmd/lwc

On Debian-compatible Linux distributions such as Ubuntu, you can also use the experimental APT repository:

echo 'deb [allow-insecure=yes] https://tmdpw.eu/lwc-releases/debian/ any main' |
  sudo tee /etc/apt/sources.list.d/lwc.list
sudo apt update
sudo apt install lwc

Usage

lwc [OPTION]... [FILE]...
lwc [OPTION]... --files0-from=F

Without any options, lwc will count the number of lines, words, and bytes in standard input, and write them to standard output. Unlike wc, it also updates standard output while it is still counting.

All the standard wc options are supported:

  • --lines or -l
  • --words or -w
  • --chars or -m
  • --bytes or -c
  • --max-line-length or -L
  • --files0-from=F
  • --help
  • --version

In addition, the output update interval can be configured by passing either --interval=TIME or -i TIME, where TIME is a duration in milliseconds. The default update interval is 100 ms.
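
For instance, to refresh the counts every half second while following a growing log (a minimal sketch; the file name is illustrative):

tail -f access.log | lwc --lines --words --interval=500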

Examples

Count the number of lines in a big file:

lwc --lines big-file

Run a slow command and count the number of lines and words logged:

slow-command | lwc --lines --words

Benchmark lwc's throughput by counting random bytes (press Ctrl+C to exit):

lwc --bytes < /dev/urandom

Caveats

You can mostly use lwc as a drop-in replacement for wc. However, you should be aware of the following:

The behavior of the --words and --chars options is slightly different from wc's implementation. You might get different values with certain binary data.

While lwc is pretty fast, you won't get the same raw throughput as with wc. The reason for that is (probably) twofold: the code isn't optimized for performance, and a Go implementation is no match for a C one.

JavaScript Version

This utility briefly existed as a Node.js package. I'm keeping the code around for educational purposes, but I will no longer be maintaining it.

Author: timdp
Source Code: https://github.com/timdp/lwc 
License: MIT license

#go #golang #unix 

Lwc: A Live-updating Version Of The UNIX Wc Command

Gunicorn: A Python WSGI HTTP Server for UNIX

Gunicorn

Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. It's a pre-fork worker model ported from Ruby's Unicorn project. The Gunicorn server is broadly compatible with various web frameworks, simply implemented, light on server resource usage, and fairly speedy.

Feel free to join us in #gunicorn on Freenode.

Documentation

The documentation is hosted at https://docs.gunicorn.org.

Installation

Gunicorn requires Python >= 3.5.

Install from PyPI:

$ pip install gunicorn

Usage

Basic usage:

$ gunicorn [OPTIONS] APP_MODULE

Where APP_MODULE is of the pattern $(MODULE_NAME):$(VARIABLE_NAME). The module name can be a full dotted path. The variable name refers to a WSGI callable that should be found in the specified module.

Example with test app:

$ cd examples
$ gunicorn --workers=2 test:app
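
To listen on a specific address and port, you can add the --bind option (the address below is illustrative):

$ gunicorn --bind 0.0.0.0:8000 --workers=2 test:app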

Contributing

See our complete contributor's guide for more details.

License

Gunicorn is released under the MIT License. See the LICENSE file for more details.

Author: benoitc
Source Code: https://github.com/benoitc/gunicorn
License: View license

#python #gunicorn #unix 

Gunicorn: A Python WSGI HTTP Server for UNIX

Immortal: A *nix Cross-platform (OS Agnostic) Supervisor

⭕ immortal

A *nix cross-platform (OS agnostic) supervisor

run on behalf of another system user

If services need to run on behalf of another system user (www, nobody, www-data), not root, immortal should be compiled from source for the desired target/architecture; otherwise, this error may be returned:

Error looking up user: "www". user: Lookup requires cgo

See more: https://golang.org/cmd/cgo/

If using FreeBSD or macOS you can install using pkg/ports or Homebrew; for other platforms work is in progress, and any help making the port/package for other systems would be appreciated.

Compile from source

Setup go environment https://golang.org/doc/install

go >= 1.12 is required

For example, using $HOME/go as your workspace:

$ export GOPATH=$HOME/go

Create the directory:

$ mkdir -p $HOME/go/src/github.com/immortal

Clone project into that directory:

$ git clone git@github.com:immortal/immortal.git $HOME/go/src/github.com/immortal/immortal

Build by just typing make:

$ cd $HOME/go/src/github.com/immortal/immortal
$ make

To install/uninstall:

$ make install
$ make uninstall

configuration example

Content of file /usr/local/etc/immortal/www.yml:

# pkg install go-www
cmd: www
cwd: /usr/ports
log:
    file: /var/log/www.log
    age: 10  # seconds
    num: 7   # int
    size: 1  # MegaBytes
wait: 1
require:
  - foo
  - bar

If foo and bar are not running, the service www will not be started. Omit the age, num & size options to disable log rotation completely.

foo and bar are the names of services defined in the same path where www.yml is located: foo.yml & bar.yml

Paths

When using immortaldir:

/usr/local/etc/immortal
|--foo.yml
|--bar.yml
`--www.yml

The name of the file.yml will be used to reference the service to be daemonized, excluding the .yml extension:

foo
bar
www

/var/run/immortal/

/var/run/immortal
|--foo
|  |-lock
|  `-immortal.sock
|--bar
|  |-lock
|  `-immortal.sock
`--www
   |-lock
   `-immortal.sock

immortal as a non-root user

Any service launched as a non-root user (not using immortaldir) will follow this structure:

~/.immortal
|--(pid)
|  |--lock
|  `--immortal.sock
|--(pid)
|  |--lock
|  `--immortal.sock
`--(pid)
   |--lock
   `--immortal.sock

immortalctl

Prints the current status and allows you to manage the services.

debug

pgrep -fl "immortal -ctl"  | awk '{print $1}' | xargs watch -n .1 pstree -p

Test status using curl & jq

status:

curl --unix-socket immortal.sock http:/status -s | jq

note the single '/' https://superuser.com/a/925610/284722

down:

curl --unix-socket immortal.sock http://im/signal/d -s | jq

up:

curl --unix-socket immortal.sock http://im/signal/u -s | jq

https://immortal.run/

GitHub release GoDoc contributions welcome

Linux precompiled binaries

deb rpm

Author: immortal
Source Code: https://github.com/immortal/immortal 
License: BSD-3-Clause license

#go #golang #http #unix 

Immortal: A *nix Cross-platform (OS Agnostic) Supervisor
Callum Slater

The Art of Command Line - Master the Command Line

The Art of Command Line

Note: I'm planning to revise this and looking for a new co-author to help with expanding this into a more comprehensive guide. While it's very popular, it could be broader and a bit deeper. If you like to write and are close to being an expert on this material and willing to consider helping, please drop me a note at josh (0x40) holloway.com. –jlevy, Holloway. Thank you!

curl -s 'https://raw.githubusercontent.com/jlevy/the-art-of-command-line/master/README.md' | egrep -o '\w+' | tr -d '`' | cowsay -W50

Fluency on the command line is a skill often neglected or considered arcane, but it improves your flexibility and productivity as an engineer in both obvious and subtle ways. This is a selection of notes and tips on using the command-line that we've found useful when working on Linux. Some tips are elementary, and some are fairly specific, sophisticated, or obscure. This page is not long, but if you can use and recall all the items here, you know a lot.

This work is the result of many authors and translators. Some of this originally appeared on Quora, but it has since moved to GitHub, where people more talented than the original author have made numerous improvements. Please submit a question if you have one related to the command line. Please contribute if you see an error or something that could be better!

Meta

Scope:

  • This guide is for both beginners and experienced users. The goals are breadth (everything important), specificity (give concrete examples of the most common case), and brevity (avoid things that aren't essential or digressions you can easily look up elsewhere). Every tip is essential in some situation or significantly saves time over alternatives.
  • This is written for Linux, with the exception of the "macOS only" and "Windows only" sections. Many of the other items apply or can be installed on other Unices or macOS (or even Cygwin).
  • The focus is on interactive Bash, though many tips apply to other shells and to general Bash scripting.
  • It includes both "standard" Unix commands as well as ones that require special package installs -- so long as they are important enough to merit inclusion.

Notes:

  • To keep this to one page, content is implicitly included by reference. You're smart enough to look up more detail elsewhere once you know the idea or command to Google. Use apt, yum, dnf, pacman, pip or brew (as appropriate) to install new programs.
  • Use Explainshell to get a helpful breakdown of what commands, options, pipes etc. do.

Basics

Learn basic Bash. Actually, type man bash and at least skim the whole thing; it's pretty easy to follow and not that long. Alternate shells can be nice, but Bash is powerful and always available (learning only zsh, fish, etc., while tempting on your own laptop, restricts you in many situations, such as using existing servers).

Learn at least one text-based editor well. The nano editor is one of the simplest for basic editing (opening, editing, saving, searching). However, for the power user in a text terminal, there is no substitute for Vim (vi), the hard-to-learn but venerable, fast, and full-featured editor. Many people also use the classic Emacs, particularly for larger editing tasks. (Of course, any modern software developer working on an extensive project is unlikely to use only a pure text-based editor and should also be familiar with modern graphical IDEs and tools.)

Finding documentation:

  • Know how to read official documentation with man (for the inquisitive, man man lists the section numbers, e.g. 1 is "regular" commands, 5 is files/conventions, and 8 is administration). Find man pages with apropos.
  • Know that some commands are not executables, but Bash builtins, and that you can get help on them with help and help -d. You can find out whether a command is an executable, shell builtin or an alias by using the type command.
  • curl cheat.sh/command will give a brief "cheat sheet" with common examples of how to use a shell command.

Learn about redirection of output and input using > and < and pipes using |. Know > overwrites the output file and >> appends. Learn about stdout and stderr.
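
For example (command and file names are illustrative):

      some-command > out.txt     # stdout to a file, overwriting it
      some-command >> out.txt    # stdout appended to the file
      some-command 2> err.txt    # stderr to a separate file
      sort < unsorted.txt        # redirect a file to stdin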

Learn about file glob expansion with * (and perhaps ? and [...]) and quoting and the difference between double " and single ' quotes. (See more on variable expansion below.)

Be familiar with Bash job management: &, ctrl-z, ctrl-c, jobs, fg, bg, kill, etc.
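
A quick illustration (the command name long-task is illustrative):

      long-task &        # start in the background
      jobs               # list background and suspended jobs
      fg %1              # bring job 1 back to the foreground
      # ctrl-z suspends the foreground job; bg resumes it in the background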

Know ssh, and the basics of passwordless authentication, via ssh-agent, ssh-add, etc.

Basic file management: ls and ls -l (in particular, learn what every column in ls -l means), less, head, tail and tail -f (or even better, less +F), ln and ln -s (learn the differences and advantages of hard versus soft links), chown, chmod, du (for a quick summary of disk usage: du -hs *). For filesystem management, df, mount, fdisk, mkfs, lsblk. Learn what an inode is (ls -i or df -i).

Basic network management: ip or ifconfig, dig, traceroute, route.

Learn and use a version control management system, such as git.

Know regular expressions well, and the various flags to grep/egrep. The -i, -o, -v, -A, -B, and -C options are worth knowing.

Learn to use apt-get, yum, dnf or pacman (depending on distro) to find and install packages. And make sure you have pip to install Python-based command-line tools (a few below are easiest to install via pip).

Everyday use

In Bash, use Tab to complete arguments or list all available commands and ctrl-r to search through command history (after pressing, type to search, press ctrl-r repeatedly to cycle through more matches, press Enter to execute the found command, or hit the right arrow to put the result in the current line to allow editing).

In Bash, use ctrl-w to delete the last word, and ctrl-u to delete the content from current cursor back to the start of the line. Use alt-b and alt-f to move by word, ctrl-a to move cursor to beginning of line, ctrl-e to move cursor to end of line, ctrl-k to kill to the end of the line, ctrl-l to clear the screen. See man readline for all the default keybindings in Bash. There are a lot. For example alt-. cycles through previous arguments, and alt-* expands a glob.

Alternatively, if you love vi-style key-bindings, use set -o vi (and set -o emacs to put it back).

For editing long commands, after setting your editor (for example export EDITOR=vim), ctrl-x ctrl-e will open the current command in an editor for multi-line editing. Or in vi style, escape-v.

To see recent commands, use history. Follow with !n (where n is the command number) to execute again. There are also many abbreviations you can use, the most useful probably being !$ for last argument and !! for last command (see "HISTORY EXPANSION" in the man page). However, these are often easily replaced with ctrl-r and alt-..
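
For example (the directory name is illustrative):

      mkdir /tmp/scratch
      cd !$              # !$ expands to the last argument: /tmp/scratch
      !!                 # re-runs the previous command (here: cd /tmp/scratch)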

Go to your home directory with cd. Access files relative to your home directory with the ~ prefix (e.g. ~/.bashrc). In sh scripts refer to the home directory as $HOME.

To go back to the previous working directory: cd -.

If you are halfway through typing a command but change your mind, hit alt-# to add a # at the beginning and enter it as a comment (or use ctrl-a, #, enter). You can then return to it later via command history.

Use xargs (or parallel). It's very powerful. Note you can control how many items execute per line (-L) as well as parallelism (-P). If you're not sure if it'll do the right thing, use xargs echo first. Also, -I{} is handy. Examples:

      find . -name '*.py' | xargs grep some_function
      cat hosts | xargs -I{} ssh root@{} hostname

pstree -p is a helpful display of the process tree.

Use pgrep and pkill to find or signal processes by name (-f is helpful).

Know the various signals you can send processes. For example, to suspend a process, use kill -STOP [pid]. For the full list, see man 7 signal

Use nohup or disown if you want a background process to keep running forever.

Check what processes are listening via netstat -lntp or ss -plat (for TCP; add -u for UDP) or lsof -iTCP -sTCP:LISTEN -P -n (which also works on macOS).

See also lsof and fuser for open sockets and files.

See uptime or w to know how long the system has been running.

Use alias to create shortcuts for commonly used commands. For example, alias ll='ls -latr' creates a new alias ll.

Save aliases, shell settings, and functions you commonly use in ~/.bashrc, and arrange for login shells to source it. This will make your setup available in all your shell sessions.

Put the settings of environment variables as well as commands that should be executed when you login in ~/.bash_profile. Separate configuration will be needed for shells you launch from graphical environment logins and cron jobs.

Synchronize your configuration files (e.g. .bashrc and .bash_profile) among various computers with Git.

Understand that care is needed when variables and filenames include whitespace. Surround your Bash variables with quotes, e.g. "$FOO". Prefer the -0 or -print0 options to enable null characters to delimit filenames, e.g. locate -0 pattern | xargs -0 ls -al or find / -print0 -type d | xargs -0 ls -al. To iterate on filenames containing whitespace in a for loop, set your IFS to be a newline only using IFS=$'\n'.

In Bash scripts, use set -x (or the variant set -v, which logs raw input, including unexpanded variables and comments) for debugging output. Use strict modes unless you have a good reason not to: Use set -e to abort on errors (nonzero exit code). Use set -u to detect unset variable usages. Consider set -o pipefail too, to abort on errors within pipes (though read up on it more if you do, as this topic is a bit subtle). For more involved scripts, also use trap on EXIT or ERR. A useful habit is to start a script like this, which will make it detect and abort on common errors and print a message:

      set -euo pipefail
      trap "echo 'error: Script failed: see failed command above'" ERR

In Bash scripts, subshells (written with parentheses) are convenient ways to group commands. A common example is to temporarily move to a different working directory, e.g.

      # do something in current dir
      (cd /some/other/dir && other-command)
      # continue in original dir

In Bash, note there are lots of kinds of variable expansion. Checking a variable exists: ${name:?error message}. For example, if a Bash script requires a single argument, just write input_file=${1:?usage: $0 input_file}. Using a default value if a variable is empty: ${name:-default}. If you want to have an additional (optional) parameter added to the previous example, you can use something like output_file=${2:-logfile}. If $2 is omitted and thus empty, output_file will be set to logfile. Arithmetic expansion: i=$(( (i + 1) % 5 )). Sequences: {1..10}. Trimming of strings: ${var%suffix} and ${var#prefix}. For example if var=foo.pdf, then echo ${var%.pdf}.txt prints foo.txt.

Brace expansion using {...} can reduce having to re-type similar text and automate combinations of items. This is helpful in examples like mv foo.{txt,pdf} some-dir (which moves both files), cp somefile{,.bak} (which expands to cp somefile somefile.bak) or mkdir -p test-{a,b,c}/subtest-{1,2,3} (which expands all possible combinations and creates a directory tree). Brace expansion is performed before any other expansion.

The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and filename expansion. (For example, a range like {1..20} cannot be expressed with variables using {$a..$b}. Use seq or a for loop instead, e.g., seq $a $b or for((i=a; i<=b; i++)); do ... ; done.)

The output of a command can be treated like a file via <(some command) (known as process substitution). For example, compare local /etc/hosts with a remote one:

      diff /etc/hosts <(ssh somehost cat /etc/hosts)

When writing scripts you may want to put all of your code in curly braces. If the closing brace is missing, your script will be prevented from executing due to a syntax error. This makes sense when your script is going to be downloaded from the web, since it prevents partially downloaded scripts from executing:

{
      # Your code here
}

A "here document" allows redirection of multiple lines of input as if from a file:

cat <<EOF
input
on multiple lines
EOF

In Bash, redirect both standard output and standard error via: some-command >logfile 2>&1 or some-command &>logfile. Often, to ensure a command does not leave an open file handle to standard input, tying it to the terminal you are in, it is also good practice to add </dev/null.

Use man ascii for a good ASCII table, with hex and decimal values. For general encoding info, man unicode, man utf-8, and man latin1 are helpful.

Use screen or tmux to multiplex the screen, especially useful on remote ssh sessions and to detach and re-attach to a session. byobu can enhance screen or tmux by providing more information and easier management. A more minimal alternative for session persistence only is dtach.

In ssh, knowing how to port tunnel with -L or -D (and occasionally -R) is useful, e.g. to access web sites from a remote server.

It can be useful to make a few optimizations to your ssh configuration; for example, this ~/.ssh/config contains settings to avoid dropped connections in certain network environments, uses compression (which is helpful with scp over low-bandwidth connections), and multiplexes channels to the same server with a local control file:

      TCPKeepAlive=yes
      ServerAliveInterval=15
      ServerAliveCountMax=6
      Compression=yes
      ControlMaster auto
      ControlPath /tmp/%r@%h:%p
      ControlPersist yes

A few other options relevant to ssh are security sensitive and should be enabled with care, e.g. per subnet or host or in trusted networks: StrictHostKeyChecking=no, ForwardAgent=yes

Consider mosh an alternative to ssh that uses UDP, avoiding dropped connections and adding convenience on the road (requires server-side setup).

To get the permissions on a file in octal form, which is useful for system configuration but not available in ls and easy to bungle, use something like

      stat -c '%A %a %n' /etc/timezone

For interactive selection of values from the output of another command, use percol or fzf.

For interaction with files based on the output of another command (like git), use fpp (PathPicker).

For a simple web server for all files in the current directory (and subdirs), available to anyone on your network, use: python -m SimpleHTTPServer 7777 (for port 7777 and Python 2) and python -m http.server 7777 (for port 7777 and Python 3).

For running a command as another user, use sudo. Defaults to running as root; use -u to specify another user. Use -i to login as that user (you will be asked for your password).

For switching the shell to another user, use su username or su - username. The latter with "-" gets an environment as if another user just logged in. Omitting the username defaults to root. You will be asked for the password of the user you are switching to.

Know about the 128K limit on command lines. This "Argument list too long" error is common when wildcard matching large numbers of files. (When this happens alternatives like find and xargs may help.)
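
For example (the directory and pattern are illustrative):

      # rm big-dir/*.log can fail with "Argument list too long"; this won't:
      find big-dir -name '*.log' -print0 | xargs -0 rm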

For a basic calculator (and of course access to Python in general), use the python interpreter. For example,

>>> 2+3
5

Processing files and data

To locate a file by name in the current directory, find . -iname '*something*' (or similar). To find a file anywhere by name, use locate something (but bear in mind updatedb may not have indexed recently created files).

For general searching through source or data files, there are several options more advanced or faster than grep -r, including (in rough order from older to newer) ack, ag ("the silver searcher"), and rg (ripgrep).

To convert HTML to text: lynx -dump -stdin

For Markdown, HTML, and all kinds of document conversion, try pandoc. For example, to convert a Markdown document to Word format: pandoc README.md --from markdown --to docx -o temp.docx

If you must handle XML, xmlstarlet is old but good.

For JSON, use jq. For interactive use, also see jid and jiq.

For YAML, use shyaml.

For Excel or CSV files, csvkit provides in2csv, csvcut, csvjoin, csvgrep, etc.

For Amazon S3, s3cmd is convenient and s4cmd is faster. Amazon's aws and the improved saws are essential for other AWS-related tasks.

Know about sort and uniq, including uniq's -u and -d options -- see one-liners below. See also comm.

Know about cut, paste, and join to manipulate text files. Many people use cut but forget about join.

Know about wc to count newlines (-l), characters (-m), words (-w) and bytes (-c).

Know about tee to copy from stdin to a file and also to stdout, as in ls -al | tee file.txt.

For more complex calculations, including grouping, reversing fields, and statistical calculations, consider datamash.

Know that locale affects a lot of command line tools in subtle ways, including sorting order (collation) and performance. Most Linux installations will set LANG or other locale variables to a local setting like US English. But be aware sorting will change if you change locale. And know i18n routines can make sort or other commands run many times slower. In some situations (such as the set operations or uniqueness operations below) you can safely ignore slow i18n routines entirely and use traditional byte-based sort order, using export LC_ALL=C.

You can set a specific command's environment by prefixing its invocation with the environment variable settings, as in TZ=Pacific/Fiji date.

Know basic awk and sed for simple data munging. See One-liners for examples.

To replace all occurrences of a string in place, in one or more files:

      perl -pi.bak -e 's/old-string/new-string/g' my-files-*.txt

To rename multiple files and/or search and replace within files, try repren. (In some cases the rename command also allows multiple renames, but be careful as its functionality is not the same on all Linux distributions.)

      # Full rename of filenames, directories, and contents foo -> bar:
      repren --full --preserve-case --from foo --to bar .
      # Recover backup files whatever.bak -> whatever:
      repren --renames --from '(.*)\.bak' --to '\1' *.bak
      # Same as above, using rename, if available:
      rename 's/\.bak$//' *.bak

As the man page says, rsync really is a fast and extraordinarily versatile file copying tool. It's known for synchronizing between machines but is equally useful locally. When security restrictions allow, using rsync instead of scp allows recovery of a transfer without restarting from scratch. It also is among the fastest ways to delete large numbers of files:

mkdir empty && rsync -r --delete empty/ some-dir && rmdir some-dir

For monitoring progress when processing files, use pv, pycp, pmonitor, progress, rsync --progress, or, for block-level copying, dd status=progress.

Use shuf to shuffle or select random lines from a file.
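
For example (the file name is illustrative):

      shuf -n 3 words.txt      # print 3 random lines from words.txt
      shuf -i 1-100 -n 1       # pick one random integer between 1 and 100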

Know sort's options. For numbers, use -n, or -h for handling human-readable numbers (e.g. from du -h). Know how keys work (-t and -k). In particular, watch out that you need to write -k1,1 to sort by only the first field; -k1 means sort according to the whole line. Stable sort (sort -s) can be useful. For example, to sort first by field 2, then secondarily by field 1, you can use sort -k1,1 | sort -s -k2,2.

If you ever need to write a tab literal in a command line in Bash (e.g. for the -t argument to sort), press ctrl-v [Tab] or write $'\t' (the latter is better as you can copy/paste it).

The standard tools for patching source code are diff and patch. See also diffstat for summary statistics of a diff and sdiff for a side-by-side diff. Note diff -r works for entire directories. Use diff -r tree1 tree2 | diffstat for a summary of changes. Use vimdiff to compare and edit files.

For binary files, use hd, hexdump or xxd for simple hex dumps and bvi, hexedit or biew for binary editing.

Also for binary files, strings (plus grep, etc.) lets you find bits of text.

For binary diffs (delta compression), use xdelta3.

To convert text encodings, try iconv. Or uconv for more advanced use; it supports some advanced Unicode things. For example:

      # Displays hex codes or actual names of characters (useful for debugging):
      uconv -f utf-8 -t utf-8 -x '::Any-Hex;' < input.txt
      uconv -f utf-8 -t utf-8 -x '::Any-Name;' < input.txt
      # Lowercase and removes all accents (by expanding and dropping them):
      uconv -f utf-8 -t utf-8 -x '::Any-Lower; ::Any-NFD; [:Nonspacing Mark:] >; ::Any-NFC;' < input.txt > output.txt

To split files into pieces, see split (to split by size) and csplit (to split by a pattern).

Date and time: To get the current date and time in the helpful ISO 8601 format, use date -u +"%Y-%m-%dT%H:%M:%SZ" (other options are problematic). To manipulate date and time expressions, use dateadd, datediff, strptime etc. from dateutils.
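
For instance (a sketch; depending on packaging, the dateutils binaries may be prefixed, e.g. dateutils.dadd):

      dateadd 2022-01-01 +31d             # => 2022-02-01
      datediff 2022-01-01 2022-03-01      # => 59 (days between the dates)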

Use zless, zmore, zcat, and zgrep to operate on compressed files.

File attributes are settable via chattr and offer a lower-level alternative to file permissions. For example, to protect against accidental file deletion the immutable flag: sudo chattr +i /critical/directory/or/file

Use getfacl and setfacl to save and restore file permissions. For example:

   getfacl -R /some/path > permissions.txt
   setfacl --restore=permissions.txt

To create empty files quickly, use truncate (creates sparse file), fallocate (ext4, xfs, btrfs and ocfs2 filesystems), xfs_mkfile (almost any filesystems, comes in xfsprogs package), mkfile (for Unix-like systems like Solaris, Mac OS).

System debugging

For web debugging, curl and curl -I are handy, or their wget equivalents, or the more modern httpie.

To know current cpu/disk status, the classic tools are top (or the better htop), iostat, and iotop. Use iostat -mxz 15 for basic CPU and detailed per-partition disk stats and performance insight.

For network connection details, use netstat and ss.

For a quick overview of what's happening on a system, dstat is especially useful. For broadest overview with details, use glances.

To know memory status, run and understand the output of free and vmstat. In particular, be aware the "cached" value is memory held by the Linux kernel as file cache, so effectively counts toward the "free" value.

Java system debugging is a different kettle of fish, but a simple trick on Oracle's and some other JVMs is that you can run kill -3 <pid> and a full stack trace and heap summary (including generational garbage collection details, which can be highly informative) will be dumped to stderr/logs. The JDK's jps, jstat, jstack, jmap are useful. SJK tools are more advanced.

Use mtr as a better traceroute, to identify network issues.

For looking at why a disk is full, ncdu saves time over the usual commands like du -sh *.

To find which socket or process is using bandwidth, try iftop or nethogs.

The ab tool (comes with Apache) is helpful for quick-and-dirty checking of web server performance. For more complex load testing, try siege.

For more serious network debugging, wireshark, tshark, or ngrep.

Know about strace and ltrace. These can be helpful if a program is failing, hanging, or crashing, and you don't know why, or if you want to get a general idea of performance. Note the profiling option (-c), and the ability to attach to a running process (-p). Use trace child option (-f) to avoid missing important calls.

Know about ldd to check shared libraries etc — but never run it on untrusted files.

Know how to connect to a running process with gdb and get its stack traces.

Use /proc. It's amazingly helpful sometimes when debugging live problems. Examples: /proc/cpuinfo, /proc/meminfo, /proc/cmdline, /proc/xxx/cwd, /proc/xxx/exe, /proc/xxx/fd/, /proc/xxx/smaps (where xxx is the process id or pid).

When debugging why something went wrong in the past, sar can be very helpful. It shows historic statistics on CPU, memory, network, etc.

For deeper systems and performance analyses, look at stap (SystemTap), perf, and sysdig.

Check what OS you're on with uname or uname -a (general Unix/kernel info) or lsb_release -a (Linux distro info).

Use dmesg whenever something's acting really funny (it could be hardware or driver issues).

If you delete a file and it doesn't free up expected disk space as reported by du, check whether the file is in use by a process: lsof | grep deleted | grep "filename-of-my-big-file"

One-liners

A few examples of piecing together commands:

It is remarkably helpful sometimes that you can do set intersection, union, and difference of text files via sort/uniq. Suppose a and b are text files that are already uniqued. This is fast, and works on files of arbitrary size, up to many gigabytes. (Sort is not limited by memory, though you may need to use the -T option if /tmp is on a small root partition.) See also the note about LC_ALL above and sort's -u option (left out for clarity below).

      sort a b | uniq > c   # c is a union b
      sort a b | uniq -d > c   # c is a intersect b
      sort a b b | uniq -u > c   # c is set difference a - b

Pretty-print two JSON files, normalizing their syntax, then coloring and paginating the result:

      diff <(jq --sort-keys . < file1.json) <(jq --sort-keys . < file2.json) | colordiff | less -R

Use grep . * to quickly examine the contents of all files in a directory (so each line is paired with the filename), or head -100 * (so each file has a heading). This can be useful for directories filled with config settings like those in /sys, /proc, /etc.

Summing all numbers in the third column of a text file (this is probably 3X faster and 3X less code than equivalent Python):

      awk '{ x += $3 } END { print x }' myfile

To see sizes/dates on a tree of files, this is like a recursive ls -l but is easier to read than ls -lR:

      find . -type f -ls

Say you have a text file, like a web server log, and a certain value that appears on some lines, such as an acct_id parameter that is present in the URL. If you want a tally of how many requests for each acct_id:

      egrep -o 'acct_id=[0-9]+' access.log | cut -d= -f2 | sort | uniq -c | sort -rn

To continuously monitor changes, use watch, e.g. check changes to files in a directory with watch -d -n 2 'ls -rtlh | tail' or to network settings while troubleshooting your wifi settings with watch -d -n 2 ifconfig.

Run this function to get a random tip from this document (parses Markdown and extracts an item):

      function taocl() {
        curl -s https://raw.githubusercontent.com/jlevy/the-art-of-command-line/master/README.md |
          sed '/cowsay[.]png/d' |
          pandoc -f markdown -t html |
          xmlstarlet fo --html --dropdtd |
          xmlstarlet sel -t -v "(html/body/ul/li[count(p)>0])[$RANDOM mod last()+1]" |
          xmlstarlet unesc | fmt -80 | iconv -t US
      }

Obscure but useful

expr: perform arithmetic or boolean operations or evaluate regular expressions

m4: simple macro processor

yes: print a string a lot

cal: nice calendar

env: run a command (useful in scripts)

printenv: print out environment variables (useful in debugging and scripts)

look: find English words (or lines in a file) beginning with a string

cut, paste and join: data manipulation

fmt: format text paragraphs

pr: format text into pages/columns

fold: wrap lines of text

column: format text fields into aligned, fixed-width columns or tables

expand and unexpand: convert between tabs and spaces

nl: add line numbers

seq: print numbers

bc: calculator

factor: factor integers

gpg: encrypt and sign files

toe: table of terminfo entries

nc: network debugging and data transfer

socat: socket relay and tcp port forwarder (similar to netcat)

slurm: network traffic visualization

dd: moving data between files or devices

file: identify type of a file

tree: display directories and subdirectories as a nesting tree; like ls but recursive

stat: file info

time: execute and time a command

timeout: run a command with a time limit, killing it when the limit expires

lockfile: create semaphore file that can only be removed by rm -f

logrotate: rotate, compress and mail logs.

watch: run a command repeatedly, showing results and/or highlighting changes

when-changed: runs any command you specify whenever it sees a file change. See inotifywait and entr as well.

tac: print files in reverse

comm: compare sorted files line by line

strings: extract text from binary files

tr: character translation or manipulation

iconv or uconv: conversion for text encodings

split and csplit: splitting files

sponge: read all input before writing it, useful for reading from then writing to the same file, e.g., grep -v something some-file | sponge some-file

units: unit conversions and calculations; converts furlongs per fortnight to twips per blink (see also /usr/share/units/definitions.units)

apg: generates random passwords

xz: high-ratio file compression

ldd: dynamic library info

nm: symbols from object files

ab or wrk: benchmarking web servers

strace: system call debugging

mtr: better traceroute for network debugging

cssh: visual concurrent shell

rsync: sync files and folders over SSH or in local file system

wireshark and tshark: packet capture and network debugging

ngrep: grep for the network layer

host and dig: DNS lookups

lsof: process file descriptor and socket info

dstat: useful system stats

glances: high level, multi-subsystem overview

iostat: Disk usage stats

mpstat: CPU usage stats

vmstat: Memory usage stats

htop: improved version of top

last: login history

w: who's logged on

id: user/group identity info

sar: historic system stats

iftop or nethogs: network utilization by socket or process

ss: socket statistics

dmesg: boot and system error messages

sysctl: view and configure Linux kernel parameters at run time

hdparm: SATA/ATA disk manipulation/performance

lsblk: list block devices: a tree view of your disks and disk partitions

lshw, lscpu, lspci, lsusb, dmidecode: hardware information, including CPU, BIOS, RAID, graphics, devices, etc.

lsmod and modinfo: List and show details of kernel modules.

fortune, ddate, and sl: um, well, it depends on whether you consider steam locomotives and Zippy quotations "useful"

macOS only

These are items relevant only on macOS.

Package management with brew (Homebrew) and/or port (MacPorts). These can be used to install on macOS many of the above commands.

Copy output of any command to a desktop app with pbcopy and paste input from one with pbpaste.

To enable the Option key in macOS Terminal as an alt key (such as used in the commands above like alt-b, alt-f, etc.), open Preferences -> Profiles -> Keyboard and select "Use Option as Meta key".

To open a file with a desktop app, use open or open -a /Applications/Whatever.app.

Spotlight: Search files with mdfind and list metadata (such as photo EXIF info) with mdls.

Be aware macOS is based on BSD Unix, and many commands (for example ps, ls, tail, awk, sed) have many subtle variations from Linux, which is largely influenced by System V-style Unix and GNU tools. You can often tell the difference by noting a man page has the heading "BSD General Commands Manual." In some cases GNU versions can be installed, too (such as gawk and gsed for GNU awk and sed). If writing cross-platform Bash scripts, avoid such commands (for example, consider Python or perl) or test carefully.

To get macOS release information, use sw_vers.

Windows only

These items are relevant only on Windows.

Ways to obtain Unix tools under Windows

Access the power of the Unix shell under Microsoft Windows by installing Cygwin. Most of the things described in this document will work out of the box.

On Windows 10, you can use Windows Subsystem for Linux (WSL), which provides a familiar Bash environment with Unix command line utilities.

If you mainly want to use GNU developer tools (such as GCC) on Windows, consider MinGW and its MSYS package, which provides utilities such as bash, gawk, make and grep. MSYS doesn't have all of Cygwin's features. MinGW is particularly useful for creating native Windows ports of Unix tools.

Another option to get Unix look and feel under Windows is Cash. Note that only very few Unix commands and command-line options are available in this environment.

Useful Windows command-line tools

You can perform and script most Windows system administration tasks from the command line by learning and using wmic.

Native command-line Windows networking tools you may find useful include ping, ipconfig, tracert, and netstat.

You can perform many useful Windows tasks by invoking the Rundll32 command.

Cygwin tips and tricks

Install additional Unix programs with the Cygwin's package manager.

Use mintty as your command-line window.

Access the Windows clipboard through /dev/clipboard.

Run cygstart to open an arbitrary file through its registered application.

Access the Windows registry with regtool.

Note that a C:\ Windows drive path becomes /cygdrive/c under Cygwin, and that Cygwin's / appears under C:\cygwin on Windows. Convert between Cygwin and Windows-style file paths with cygpath. This is most useful in scripts that invoke Windows programs.

More resources

Disclaimer

With the exception of very small tasks, code is written so others can read it. With power comes responsibility. The fact you can do something in Bash doesn't necessarily mean you should! 


Download Details: 
Author: jlevy
Source Code: https://github.com/jlevy/the-art-of-command-line 

#windows #macos #linux #bash #unix

The Art of Command Line - Master the Command Line
Sheldon Grant

Cross-env: Set Environment Variables Cross-platform

cross-env 🔀

Run scripts that set and use environment variables across platforms

🚨 NOTICE: cross-env still works well, but is in maintenance mode. No new features will be added, only serious and common-case bugs will be fixed, and it will only be kept up-to-date with Node.js over time. Learn more


The problem

Most Windows command prompts will choke when you set environment variables with NODE_ENV=production like that. (The exception is Bash on Windows, which uses native Bash.) Similarly, there's a difference in how Windows and POSIX commands utilize environment variables: with POSIX, you use $ENV_VAR, and on Windows you use %ENV_VAR%.

This solution

cross-env makes it so you can have a single command without worrying about setting or using the environment variable properly for the platform. Just set it like you would if it's running on a POSIX system, and cross-env will take care of setting it properly.

Installation

This module is distributed via npm which is bundled with node and should be installed as one of your project's devDependencies:

npm install --save-dev cross-env

WARNING! Make sure that when you're installing packages that you spell things correctly to avoid mistakenly installing malware

NOTE: Version 7 of cross-env only supports Node.js 10 and higher. To use it on Node.js 8 or lower, install version 6: npm install --save-dev cross-env@6

Usage

I use this in my npm scripts:

{
  "scripts": {
    "build": "cross-env NODE_ENV=production webpack --config build/webpack.config.js"
  }
}

Ultimately, the command that is executed (using cross-spawn) is:

webpack --config build/webpack.config.js

The NODE_ENV environment variable will be set by cross-env

You can set multiple environment variables at a time:

{
  "scripts": {
    "build": "cross-env FIRST_ENV=one SECOND_ENV=two node ./my-program"
  }
}

You can also split a command into several ones, or separate the environment variables declaration from the actual command execution. You can do it this way:

{
  "scripts": {
    "parentScript": "cross-env GREET=\"Joe\" npm run childScript",
    "childScript": "cross-env-shell \"echo Hello $GREET\""
  }
}

Where childScript holds the actual command to execute and parentScript sets the environment variables to use. Then, instead of running childScript directly, you run parentScript. This is quite useful for launching the same command with different env variables, or when the environment variables are too long to fit on one line. It also means that you can use the $GREET env var syntax even on Windows, which would usually require %GREET%.

If you precede a dollar sign with an odd number of backslashes, the expression statement will not be replaced. Note that this means backslashes after the JSON string escaping has taken place: "FOO=\\$BAR" will not be replaced, but "FOO=\\\\$BAR" will be.

Lastly, if you want to pass a JSON string (e.g., when using ts-loader), you can do as follows:

{
  "scripts": {
    "test": "cross-env TS_NODE_COMPILER_OPTIONS={\\\"module\\\":\\\"commonjs\\\"} node some_file.test.ts"
  }
}

Pay special attention to the triple backslash (\\\) before the double quotes (") and the absence of single quotes ('). Both of these conditions have to be met in order to work both on Windows and UNIX.

cross-env vs cross-env-shell

The cross-env module exposes two bins: cross-env and cross-env-shell. The first one executes commands using cross-spawn, while the second one uses the shell option from Node's spawn.

The main use case for cross-env-shell is when you need an environment variable to be set across an entire inline shell script, rather than just one command.

For example, if you want to have the environment variable apply to several commands in series then you will need to wrap those in quotes and use cross-env-shell instead of cross-env.

{
  "scripts": {
    "greet": "cross-env-shell GREETING=Hi NAME=Joe \"echo $GREETING && echo $NAME\""
  }
}

The rule of thumb is: if you want to pass to cross-env a command that contains special shell characters that you want interpreted, then use cross-env-shell. Otherwise stick to cross-env.

On Windows you need to use cross-env-shell if you want to handle signal events inside of your program. A common case for that is when you want to capture a SIGINT event invoked by pressing Ctrl + C on the command-line interface.

Windows Issues

Please note that npm uses cmd by default, which doesn't support command substitution. If you want to leverage that, you need to update your .npmrc to set the script-shell to PowerShell. Learn more here.
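
One way to do that from the command line (a sketch; the path shown is the typical PowerShell location and may differ on your system):

npm config set script-shell "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"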

Inspiration

I originally created this to solve a problem I was having with my npm scripts in angular-formly. This made contributing to the project much easier for Windows users.

Other Solutions

  • env-cmd - Reads environment variables from a file instead
  • @naholyr/cross-env - cross-env with support for setting default values

Issues

Looking to contribute? Look for the Good First Issue label.

🐛 Bugs

Please file an issue for bugs, missing documentation, or unexpected behavior.

See Bugs

💡 Feature Requests

This project is in maintenance mode and no new feature requests will be considered.

Learn more

Author: Kentcdodds
Source Code: https://github.com/kentcdodds/cross-env 
License: MIT License

#node #windows #macos #unix 

Cross-env: Set Environment Variables Cross-platform

Lessmd: Markdown in The Terminal

Lessmd

asciicast

Lessmd is a terminal viewer/pager with markdown and piping support.

Why ?

  • It is written in JavaScript
  • Minimal and fast
  • Unix-like pager with navigation
  • Displays markdown with colors
  • Can translate markdown into colored output
  • Configurable user interface
  • Supports files and pipes
  • Livereload (watches for file changes)
  • Markdown theming support

Usage

Pager mode:

  lessmd README.md

Shortcuts:

  • q or ctrl+c: exit

Piping with other programs:

  lessmd < README.md

To save output into a file, for example to use as a motd or issue file:

 echo "# welcome\n * do not touch anything \n * just press Ctrl+D" \
 | lessmd | tee /etc/motd

pipe example

Installation

  npm install -g lessmd

Configuration

Lessmd looks for user settings in your home directory; the filename is .lessmd.js.

Example of the .lessmd.js:

module.exports = {
  colors : {           // markdown theming colors (values are style strings)
    text : '',
    lang : '',
    heading : '',
    code : '',
    quote : '',
    em : '',
    codespan : '',
    strong : '',
    html : '',
    del : '',
    link : '',
    hr : '',
    listitem : '',
  },
  theme : {
    draw : false,      // disable any UI (header and footer bars)
    text : '',         // text style
    strong : ''        // bold text style
  },
  headerfn : function() { return 'header'; }, // custom header fn
  footerfn : function() { return 'footer'; }  // custom footer fn
};

ChangeLog

1.2.1

  • Dependencies update

1.2.0

  • Fixed a bug with long slices

1.1.0 - 2016-11-15

  • HTML options for marked (sanitize, smartypants)
  • h,j,k,l bindings
  • Smaller chunks colorization for view mode

1.0.1 - 2016-11-03

  • Added original less keybindings

Author: Linuxenko
Source Code: https://github.com/linuxenko/lessmd 
License: MIT License

#node #markdown #unix 

Lessmd: Markdown in The Terminal
Garry Taylor

Learn Unix Shell for Beginners

A Unix shell is a command-line interpreter that provides a command-line user interface for Unix-like operating systems. The shell is both an interactive command language and a scripting language, and is used by the operating system to control the execution of the system via shell scripts. This is a Unix shell crash course for beginners, where you will learn about the following topics in detail:

⭐️ Table of Contents ⭐️  
⌨️ (0:00)         Introduction
⌨️ (4:09)         Files and directories
⌨️ (14:05)       Creating and deleting
⌨️ (20:30)       Pipes and filters
⌨️ (29:42)       Permissions
⌨️ (40:37)       Finding things
⌨️ (49:59)       Job control
⌨️ (55:37)       Variables
⌨️ (1:02:26)   Secure shell

#unix #shell #linux 

Learn Unix Shell for Beginners