Shell Programming
Some Resources
- Bash shell scripting for Helix and Biowulf
- Shell Style Guide from Google
- http://explainshell.com/
- http://learnshell.org/
- http://tldp.org The Linux Documentation Project
- Bash Guide for Beginners
- BASH Programming - Introduction HOW-TO
- Advanced Bash-Scripting Guide
- Linux Shell Scripting Tutorial from cyberciti.biz
- Shell debugging
- 10 Useful Tips for Writing Effective Bash Scripts in Linux
- Ten Things I Wish I’d Known About bash & Learn Bash the Hard Way $4.99
Check shell scripts
- ShellCheck; a standalone binary can also be downloaded from Launchpad.
If a statement is missing a single quote, the shell may report an error on a different line (though the error message is still useful), so it pays to verify the syntax of a script before running it.
Simple calculation
echo
echo $(( 11/5 )) # or echo $((11/5))
Note: this only returns an integer result (the fractional part is truncated).
bc: an arbitrary precision calculator language
bc -l <<< "11/5"    # Without '-l' we only get the integer part
# Or interactively
bc -i
scale=5
11/5
quit
where -l loads the predefined math routines and <<< is a here string. Note that with -l, bc returns a real number.
Here documents
<<
http://linux.die.net/abs-guide/here-docs.html
#!/bin/bash
cat <<!FUNKY!
hello
this is a here document
!FUNKY!
<<< here string
http://linux.die.net/abs-guide/x15683.html
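A couple of minimal here-string sketches (the strings are only examples):

# count the words of a literal string without creating a file
wc -w <<< "one two three"          # prints 3
# feed a variable's value to a command's standard input
name="world"
grep -o "$name" <<< "hello $name"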
Redirect
Redirecting output. File descriptor number 1 (2) means standard output (error).
./myProgram > stdout.txt                 # redirect std out to <stdout.txt>
./myProgram 2> stderr.txt                # redirect std err to <stderr.txt> by using the 2> operator
./myProgram > stdout.txt 2> stderr.txt   # combination of the above two
./myProgram > stdout.txt 2>&1            # redirect std err to std out <stdout.txt>
./myProgram >& /dev/null                 # prevent writing std out and std err to the screen
ps >> output.txt                         # append
Redirecting input
./myProgram < input.txt
Redirect to a location that needs sudo rights
The following command does not work
sudo cat myFile > /opt/myFile
We can use something like
sudo sh -c 'cat myFile > /opt/myFile'
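Another common workaround (a sketch, not from the text above) is to let tee, run under sudo, do the privileged writing while the redirection stays in the unprivileged shell:

cat myFile | sudo tee /opt/myFile > /dev/null      # overwrite
cat myFile | sudo tee -a /opt/myFile > /dev/null   # append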
Create a simple text file with multiple lines; write data to a file in bash script
Each of the methods below can be used in a bash script.
# Method 1: printf. We can add \t for a tab delimiter
printf '%s \n' 'Line 1' 'Line 2' 'Line 3' > out.txt

# Method 2: echo. We can add \t for a tab delimiter
echo -e 'Line 1\t12\t13
Line 2\t22\t23
Line 3\t32\t33' > out.txt

# Method 3: echo
echo $'Line 1\nLine 2\nLine 3' > out.txt

# Method 4: here document, http://tldp.org/LDP/abs/html/here-docs.html
# Does not work for the tab character
cat > out.txt <<EOF
Line 1
Line 2
Line 3
EOF
See also How to use here documents to write data to a file in a bash script
>&
&> file is not part of the official POSIX shell spec, but has been added to many Bourne shells as a convenience extension (it originally comes from csh). In a portable shell script (and if you don't need portability, why are you writing a shell script?), use > file 2>&1 only.
Redirect Output and Errors To /dev/null
http://www.cyberciti.biz/faq/how-to-redirect-output-and-errors-to-devnull/
command > /dev/null 2>&1
# OR
command &> /dev/null
tee - redirect to both a file and the screen at the same time
To redirect to both a file and the screen at the same time, use the tee command. See
- http://www.cyberciti.biz/faq/linux-redirect-error-output-to-file/
- http://www.cyberciti.biz/faq/saving-stdout-stderr-into-separate-files/
- https://en.wikipedia.org/wiki/Tee_(command)
- Linux tee Command Explained for Beginners (6 Examples)
command1 |& tee log.txt
## or ##
command1 -arg |& tee log.txt
## or ##
command1 2>&1 | tee log.txt

# use the option '-a' for *append*
echo "new line of text" | sudo tee -a /etc/apt/sources.list

# redirect output of one command to another
ls file* | tee output.txt | wc -l

# streaming file (e.g. running an arduino sketch on Udoo)
# for streaming files, the cp command (which still needs Ctrl + c) will not
# show anything on screen even though the copy is executed.
cat /dev/ttymxc3 | tee out.txt   # Ctrl + c
command > >(tee stdout.log) 2> >(tee stderr.log >&2)
Pipe
The operator is |.
ps > psout.txt
sort psout.txt > pssort.out
can be simplified to
ps | sort > pssort.out
For example,
$ head /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:65534:sync:/bin:/bin/sync

$ cat /etc/passwd | cut -d: -f7 | sort | uniq -c | sort -nr
     18 /bin/sh
     13 /bin/false
      2 /bin/bash
      1 /bin/sync
where the cut command extracts the 7th field separated by the : character and writes it to the output stream. The sort command alphabetically sorts the lines it reads from its input and writes the sorted result to its output. The uniq command removes duplicated lines and counts them. The final sort command sorts its input numerically in reverse order.
What does a dash (-) at the end of a command mean?
For example
gzip -dc /cdrom/cdrom0/file.tar.gz | tar xvf -
- http://unix.stackexchange.com/questions/41828/what-does-dash-at-the-end-of-a-command-mean
- http://unix.stackexchange.com/questions/16357/usage-of-dash-in-place-of-a-filename
It means 'standard input' or anything that will be used (required or interpreted) by the software.
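A minimal sketch (the archive name is only an example):

# tar reads the archive from standard input ('-') instead of from a file name
gzip -dc archive.tar.gz | tar xvf -
# cat reads standard input when given '-'
echo "hello" | cat -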
Process substitution
https://en.wikipedia.org/wiki/Process_substitution
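A small sketch of process substitution (the file names are only examples); each <(...) is presented to the command as a file name backed by the inner command's output:

# compare the sorted contents of two files without creating temporary files
diff <(sort file1.txt) <(sort file2.txt)
# feed command output to a while loop without a pipe (variables set in the loop survive)
while read -r line; do echo "got: $line"; done < <(ls /etc)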
Powerfulness of pipes
Consider the following commands (samtools gives its output on stdout which is a good opportunity to use pipes)
samtools mpileup -go temp.bcf -uf genome.fa dedup.bam
bcftools call -vmO v -o sample1_raw.vcf temp.bcf
The disadvantage of this approach is that it creates a temporary file (temp.bcf in this case). If the temporary file is enormous (several hundred GB), it wastes disk space, not to mention the time spent creating it. If we use pipes, we save both the time and the disk space of the temporary file.
samtools mpileup -uf genome.fa dedup.bam | bcftools call -vmO v -o sample1_raw.vcf
Send a stdout to a remote computer
See here (bypass SSH password) for a case (utilize cat, ssh and >> commands).
Execute a bash script downloaded (without saving first) from the internet
See the example of installing GitLab
sudo curl -sS https://packages.gitlab.com/install/repositories/gitlab/raspberry-pi2/script.deb.sh | sudo bash
where -s means silent and -S means showing error messages if it fails. Note that curl writes the downloaded file to standard output, so piping it into bash is a natural next step.
Pipe vs redirect
- Pipe is used to pass output to another program or utility.
- Redirect is used to pass output to either a file or stream.
In other words, thing1 | thing2 does the same thing as thing1 > temp_file && thing2 < temp_file.
Shebang (#!)
A shebang is the character sequence consisting of the characters number sign and exclamation mark (that is, "#!") at the beginning of a script. See the Wikipedia page.
The syntax looks like
#! interpreter [optional-arg]
For example,
- #!/bin/sh : Execute the file using sh, the Bourne shell, or a compatible shell
- #!/bin/csh -f : Execute the file using csh, the C shell, or a compatible shell, and suppress the execution of the user's .cshrc file on startup
- #!/usr/bin/perl -T : Execute using Perl with the option for taint checks
When Is It Better to Use #!/bin/bash Instead of #!/bin/sh in a Shell Script?
Howto Make Script More Portable With #!/usr/bin/env As a Shebang
https://www.cyberciti.biz/tips/finding-bash-perl-python-portably-using-env.html
This is useful if the interpreter location is different on Linux and Mac OSs.
# Linux
$ which Rscript
/usr/bin/Rscript
# Mac
$ which Rscript
/usr/local/bin/Rscript
We can use the following on the first line of the shell script.
#!/usr/bin/env Rscript
Comments
For a single line, we can use the '#' sign.
For a block of code, we use
#!/bin/bash
echo before comment
: <<'END'
bla bla
blurfl
END
echo after comment
Variables
food=Banana
echo $food
food="Apple"
echo $food
export -n command: remove from environment
https://linuxconfig.org/learning-linux-commands-export
The export command makes an environment variable available to subshells/forked processes; export -n removes that mark. For example
$ export MYVAR=10    # export a variable
$ export -n MYVAR    # remove a variable
To see the current process ID, use
echo $$
To create a new process, use
bash
When the export command is used without any options or arguments, it simply prints all names marked for export to a child process.
$ export
declare -x EDITOR="nano"
declare -x HISTTIMEFORMAT="%d/%m/%y %T "
declare -x HOME="/home/brb"
declare -x LANG="en_US.UTF-8"
declare -x LESSCLOSE="/usr/bin/lesspipe %s %s"
declare -x LESSOPEN="| /usr/bin/lesspipe %s"
declare -x LOGNAME="brb"
...
declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
declare -x PWD="/home/brb"
declare -x SHELL="/bin/bash"
...
declare -x USER="brb"
declare -x VISUAL="nano"
String manipulation
http://www.thegeekstuff.com/2010/07/bash-string-manipulation/
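A few common string operations as a quick sketch (the ${s^^} form needs Bash 4 or later):

s="hello world"
echo ${#s}            # length of the string: 11
echo ${s:6}           # substring from offset 6: world
echo ${s/world/bash}  # replace the first match: hello bash
echo ${s^^}           # uppercase (Bash 4+): HELLO WORLD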
dirname and basename commands
http://www.tldp.org/LDP/LG/issue18/bash.html
# On directories
$ dirname ~/Downloads
/home/chronos/user
$ basename ~/Downloads
Downloads

# On files
$ dirname ~/Downloads/DNA_Helix.zip
/home/chronos/user/Downloads
$ basename ~/Downloads/DNA_Helix.zip
DNA_Helix.zip
$ basename ~/Downloads/DNA_Helix.zip .zip
DNA_Helix
$ basename ~/Downloads/annovar.latest.tar.gz
annovar.latest.tar.gz
$ basename ~/Downloads/annovar.latest.tar.gz .gz
annovar.latest.tar
$ basename ~/Downloads/annovar.latest.tar.gz .tar.gz
annovar.latest
$ basename ~/Downloads/annovar.latest.tar.gz .latest.tar.gz
annovar
Concatenate string variables (not safe)
http://stackoverflow.com/questions/4181703/how-can-i-concatenate-string-variables-in-bash
a='hello'
b='world'
c=$a$b
echo $c

# Bash also supports a += operator
$ A="X Y"
$ A+="Z"
$ echo "$A"
Often we need to put double quotes around string variables, especially when they hold directory names that may contain spaces.
mkdir "tmp 1" touch "tmp 1/tmpfile" tmpvar="tmp 1" echo tmpvar # tmp 1 ls $tmpvar ls: cannot access tmp: No such file or directory ls: cannot access 1: No such file or directory ls "$tmpvar" # tmpfile
However, for integers
a=24
echo $a
24
((a+=12))
echo $a
36
Note that the double parentheses construct in ((a+=12)) permits arithmetic expansion and evaluation.
Concatenate a string variable and a constant string - ${parameter}
Parameter substitution ${}. Cf $() for command execution
x=foo
y=bar
z=$x$y         # $z is now "foobar"
z="$x$y"       # $z is still "foobar"
z="$xand$y"    # does not work
z="${x}and$y"  # does work, "fooandbar"
And
your_id=${USER}-on-${HOSTNAME}
echo "$your_id"
echo "Old \$PATH = $PATH"
PATH=${PATH}:/opt/bin   # Add /opt/bin to $PATH for duration of script.
echo "New \$PATH = $PATH"
Command Execution - $(command)
$(command)
`command`   # ` is a backquote/backtick, not a single quotation sign
            # this is legacy support; not recommended by https://www.shellcheck.net/
Note that all new scripts should use the $(...) form, which was introduced to avoid some rather complex quoting rules.
Example 1.
sudo apt-get install linux-headers-$(uname -r)
Example 2.
user=$(echo "$UID")
Example 3.
#!/bin/sh
echo The current directory is $PWD
echo The current users are $(who)
sudo chown `id -u` SomeDir   # change the ownership to the current user. Dangerous!
# Or
sudo chown `whoami` SomeDirOrSomeFile
exit 0
Note that $(your expression) is a better way as it allows you to run nested expressions. For example,
cd $(dirname $(type -P touch))
will cd you into the directory containing the 'touch' command.
The concept of putting the result of a command into a script variable is very powerful, as it makes it easy to use existing commands in scripts and capture their output.
Arithmetic Expansion
$((...))
is a better alternative to the expr command. More examples:
for i in $(seq 1 3)
do
    echo SRR$(( i + 1027170 ))'_1'.fastq
done
Note that the single quote above is required. The above will output SRR1027171_1.fastq, SRR1027172_1.fastq and SRR1027173_1.fastq.
Parameter Expansion
${parameter}
extract substring
https://www.cyberciti.biz/faq/how-to-extract-substring-in-bash/
${parameter:offset:length}
Example:
## define var named u ##
u="this is a test"
var="${u:10:4}"
echo "${var}"
Or use the cut command.
u="this is a test" echo "$u" | cut -d' ' -f 4 echo "$u" | cut --delimiter=' ' --fields=4 ########################################## ## WHERE ## -d' ' : Use a whitespace as delimiter ## -f 4 : Select only 4th field ########################################## var="$(cut -d' ' -f 4 <<< $u)" echo "${var}"
Environment variables
$HOME
$PATH
$0 -- name of the shell script
$# -- number of parameters passed (it does not include the program name itself)
$$ -- process ID of the shell script, often used inside a script for generating unique temp filenames
$? -- the exit value of the last run command; 0 means OK and non-zero means something went wrong
$_ -- the previous command's last argument
Example 1 (check if a command run successfully):
some_command
if [ $? -eq 0 ]; then
    echo OK
else
    echo FAIL
fi

# OR
if some_command; then
    printf 'some_command succeeded\n'
else
    printf 'some_command failed\n'
fi

$ tabix -f -p vcf ~/SeqTestdata/usefulvcf/hg19/CosmicCodingMuts.vcf.gz
$ echo $?
0
$ tabix -f -p vcf ~/Downloads/CosmicCodingMuts.vcf.gz
Not a BGZF file: /home/brb/Downloads/CosmicCodingMuts.vcf.gz
tbx_index_build failed: /home/brb/Downloads/CosmicCodingMuts.vcf.gz
$ echo $?
1
Example 2 (check whether a host is reachable)
ping DOMAIN -c2 &> /dev/null
if [ $? -eq 0 ]; then
    echo Successful
else
    echo Failure
fi
where -c is used to limit the number of packets to be sent and &> /dev/null is used to redirect both stderr and stdout to /dev/null so that it won't be printed on the terminal.
Example 3 (check if users have supply a correct number of parameters):
#!/bin/bash
if [ $# -ne 2 ]; then
    echo "Usage: $0 ProgramName filename"
    exit 1
fi
match_text=$1
filename=$2
Example 4 (make a new directory and cd to it)
mkdir -p "newDir/subDir"; cd "$_"
Parameter variables
- Shell Parameter Expansion - Important !!
- http://tldp.org/LDP/abs/html/othertypesv.html
- https://bash.cyberciti.biz/guide/Pass_arguments_into_a_function
$1, $2, ... -- parameters given to the script
$* -- a list of all the parameters, in a single variable
$@ -- a subtle variation on $* (see the sketch after Example 2 below)
$! -- the process ID of the last command run in the background
Example 1.
#!/bin/bash
echo "$1 likes to eat $2 and $3 every day."
echo "bye:-)"
Example 2.
$ touch /tmp/tmpfile_$$
$ set foo bar bam
$ echo $#
3
$ echo $@
foo bar bam
$ set foo bar bam &
[1] 28212
$ echo $!
28212
[1]+  Done    set foo bar bam
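The subtle difference between $* and $@ only shows up when they are quoted; a minimal sketch:

set -- "foo bar" baz                       # two parameters, the first contains a space
for arg in "$*"; do echo "[$arg]"; done    # one item:  [foo bar baz]
for arg in "$@"; do echo "[$arg]"; done    # two items: [foo bar] and [baz]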
We can also use curly braces around the variable name.
QT_ARCH=x86_64
QT_SDK_BINARY=QtSDK-4.8.0-${QT_ARCH}.tar.gz
QT_SD_URL=https://xxx.com/$QT_SDK_BINARY
How do I rename the extension for a batch of files? See man bash Shell Parameter Expansion
# Solution 1:
for file in *.html; do
    mv "$file" "`basename "$file" .html`.txt"
done

# Solution 2:
for file in *.html
do
    mv "$file" "${file%.html}.txt"
done
Discard the extension name
$ vara=filename.ext
$ echo $vara
filename.ext
$ echo ${vara::-4}            # works on Bash 4.3, e.g. Ubuntu
filename
$ echo ${vara::${#vara}-4}    # works on Bash 4.1, e.g. Biowulf Red Hat
Another way (not assuming 3 letters for the suffix) https://www.cyberciti.biz/faq/unix-linux-extract-filename-and-extension-in-bash/
dest="/nas100/backups/servers/z/zebra/mysql.tgz" ## get file name i.e. basename such as mysql.tgz tempfile="${dest##*/}" ## display filename echo "${tempfile%.*}"
Or better, use the following (see Extract filename and extension in Bash and Shell parameter expansion).
$ UEFI_ZIP_FILE="UDOOX86_B02-UEFI_Update_rel102.zip"
$ UEFI_ZIP_DIR="${UEFI_ZIP_FILE%.*}"
$ echo $UEFI_ZIP_DIR
UDOOX86_B02-UEFI_Update_rel102

$ FILE="example.tar.gz"
$ echo "${FILE%%.*}"
example
$ echo "${FILE%.*}"
example.tar
$ echo "${FILE#*.}"
tar.gz
$ echo "${FILE##*.}"
gz
Space in variable value
Suppose we have a script file called 'foo' that can remove spaces from a file name. Note: tr command is used to delete characters specified by the '-d' parameter.
#!/bin/sh
NAME=`ls $1 | tr -d ' '`
echo $NAME
mv $1 $NAME
Now we try the program:
$ touch 'file 1.txt'
$ ./foo 'file 1.txt'
ls: cannot access file: No such file or directory
ls: cannot access 1.txt: No such file or directory
mv: cannot stat ‘file’: No such file or directory
The way to fix the program is to use double quotes around $1
#!/bin/sh
NAME=`ls "$1" | tr -d ' '`
echo $NAME
mv "$1" $NAME
and test it
$ ./foo "file 1.txt"
file1.txt
If we concatenate a variable with other text (such as a glob pattern), put the double quotes around the variable, not around the whole string.
$ rm "$outputDir/tmp/$tmpfd/tmpa" # fine $ rm "$outputDir/tmp/$tmpfd/tmp*.txt" rm: annovar6-12/tmp/tmp_bt20_raw/tmp*.txt: No such file or directory $ rm "$outputDir"/tmp/$tmpfd/tmp*.txt
Shell expansion
https://www.gnu.org/software/bash/manual/html_node/Shell-Expansions.html#Shell-Expansions
Brace expansion
Copy multiple types of extensions
cp -v *.{txt,jpg,png} destination/
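A few more brace-expansion sketches (the file and directory names are only examples):

echo {1..5}                        # 1 2 3 4 5
echo file{A,B,C}.txt               # fileA.txt fileB.txt fileC.txt
mkdir -p project/{src,doc,test}    # create three sub-directories in one go
cp config.txt{,.bak}               # expands to: cp config.txt config.txt.bak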
Conditions
We can use the test command to check if a file exists. The command is test -f <filename>.
[ is just the same as writing test; always leave a space after the [ (and before the closing ]).
if test -f fred.c; then ...; fi

if [ -f fred.c ]
then
    ...
fi

if [ -f fred.c ]; then
    ...
fi
What is the difference between test, [ and [[ ?
http://mywiki.wooledge.org/BashFAQ/031
[ ("test" command) and [[ ("new test" command) are used to evaluate expressions. [[ works only in Bash, Zsh and the Korn shell, and is more powerful; [ and test are available in POSIX shells.
test implements the old, portable syntax of the command. In almost all shells (the oldest Bourne shells are the exception), [ is a synonym for test (but requires a final argument of ]).
[[ is a new improved version of it, and is a keyword, not a program.
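A small sketch of what [[ can do that [ cannot (glob and regular-expression matching, && inside the brackets); the file name is only an example:

f="report.txt"
if [[ $f == *.txt && -r $f ]]; then    # glob match; unquoted $f is safe inside [[ ]]
    echo "readable text file"
fi
if [[ $f =~ ^report\.[a-z]+$ ]]; then  # regular-expression match
    echo "matches the regex"
fi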
String comparison
==  ==> strings are equal (== is a synonym for =)
=   ==> strings are equal
!=  ==> strings are not equal
-z  ==> string is null
-n  ==> string is not null
For example, the following script checks whether the user has provided an argument to the script.
#!/bin/sh
if [ -z "$1" ]; then
    echo "Provide a \"file name\", using quotes to nullify the space."
    exit 1
fi
mv -i "$1" `ls "$1" | tr -d ' '`
where the -i parameter makes mv ask for confirmation before overwriting.
To check whether Xcode (either full Xcode or command line developer tools only) has been installed or not on Mac
if [ -z "$(xcode-select -p 2>&1 | grep error)" ] then echo "Xcode has been installed"; else echo "Xcode has not been installed"; fi # only print out message if xcode was not found if [ -n "$(xcode-select -p 2>&1 | grep error)" ] then echo "Xcode has not been installed"; fi
note that the 'error' keyword comes from macOS when Xcode has not been installed. Also, the double quotes around $( ) are needed to avoid the "[: too many arguments" error.
Arithmetic/Integer comparison
expr1 -eq expr2  ==> check equal
expr1 -ne expr2  ==> check not equal
expr1 -gt expr2  ==> expr1 > expr2
expr1 -ge expr2  ==> expr1 >= expr2
expr1 -lt expr2  ==> expr1 < expr2
expr1 -le expr2  ==> expr1 <= expr2
! expr           ==> opposite of expr
File conditionals
-d file  ==> True if the file is a directory
-e file  ==> True if the file exists
-f file  ==> True if the file is a regular file
-r file  ==> True if the file is readable
-s file  ==> True if the file has non-zero size
-w file  ==> True if the file is writable
-x file  ==> True if the file is executable
Example: Suppose we want to know if the first argument (if given) matches a specific string. We can use (note the space before and after '==')
#!/bin/bash
if [ $1 == "console" ]; then
    echo 'Console'
else
    echo 'Non-console'
fi
Check if running as root
if [ $UID -ne 0 ]; then
    echo "Run as root"
    exit 1;
fi
Control Structures
if
if condition
then
    statements
elif [ condition ]; then
    statements
else
    statements
fi
For example, we can run a cp command if two files are different.
if ! cmp -s "$filesrc" "$filecur"
then
    cp $filesrc $filecur
fi
String Comparison
http://stackoverflow.com/questions/2237080/how-to-compare-strings-in-bash
answer=no
if [ -f "genome.fa" ]; then
    echo -n 'Do you want to continue [yes/no]: '
    read answer
fi
if [ "$answer" == "no" ]; then
    echo AAA
fi
if [ "$answer"=="no" ]; then    # failed if condition
    echo BBB
fi
- You want the quotes around $answer because the test would otherwise break when $answer is empty.
- Spaces in bash are important.
- Spaces between if and [ and ] are important.
- Spaces before and after the double equals sign are important too: without them, "$answer"=="no" is a single non-empty string and the test is always true, so even if we reply 'yes' the code still runs the 'echo BBB' statement.
while
while condition
do
    statements
done
until
until condition
do
    statements
done
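For example, a minimal sketch that keeps polling until a host answers ping (the address is only an example):

until ping -c1 192.168.0.1 &> /dev/null
do
    echo "waiting for the host ..."
    sleep 5
done
echo "host is up"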
AND list
statement1 && statement2 && statement3 && ...
If command1 finishes successfully then run command2.
OR list
statement1 || statement2 || statement3 || ...
If command1 fails then run command2.
For example,
codename=$(lsb_release -s -c)
if [ $codename == "rafaela" ] || [ $codename == "rosa" ]; then
    codename="trusty"
fi
for + do + done
for variable in values
do
    statements
done
The values can be an explicit list
i=1
for day in Mon Tue Wed Thu Fri
do
    echo "Weekday $((i++)) : $day"
done
or a variable
i=1
weekdays="Mon Tue Wed Thu Fri"
for day in $weekdays
do
    echo "Weekday $((i++)) : $day"
done
# Output
# Weekday 1 : Mon
# Weekday 2 : Tue
# Weekday 3 : Wed
# Weekday 4 : Thu
# Weekday 5 : Fri
Note that we should not put double quotes around the $weekdays variable here. If we do, the quotes prevent word splitting. See the thegeekstuff article.
i=1
weekdays="Mon Tue Wed Thu Fri"
for day in "$weekdays"
do
    echo "Weekday $((i++)) : $day"
done
# Output
# Weekday 1 : Mon Tue Wed Thu Fri
To loop over all script files in a directory
FILES=/path/to/PATTERN*.sh
for f in $FILES; do
    ( "$f" )&
done
wait
OR
FILES=" file1 /path/to/file2 /path/to/file3 " for f in $FILES; do ( "$f" )& done wait
Here we run each script in the background and wait until all of them are finished before exiting.
See loop over files from cyberciti.biz.
Example 1
To convert pdfs to tifs using ImageMagick (for looping over files, check cyberciti.biz)
outdir="../plosone" indir="../fig" if [[ ! -d $outdir ]]; then mkdir $outdir fi in=(file1.pdf file2.pdf file3.pdf) for (( i=0; i<${#in[@]} ; i++ )) do convert -strip -units PixelsPerInch -density 300 -resample 300 \ -alpha off -colorspace RGB -depth 8 -trim -bordercolor white \ -border 1% -resize '2049x2758>' -resize '980x980<' +repage \ -compress lzw $indir/${in[$i]} $outdir/Figure$[$i+1].tiff done
Example 2
A second example downloads all the (Ontario gasoline price) data with wget and parses and concatenates the data with other *nix tools like sed:
# Download data
for i in $(seq 1990 2014)
do
    wget http://www.energy.gov.on.ca/fuelupload/ONTREG$i.csv
done

# Retain the header
head -n 2 ONTREG1990.csv | sed 1d > ONTREG_merged.csv

# Loop over the files and use sed to extract the relevant lines
for i in $(seq 1990 2014)
do
    tail -n 15 ONTREG$i.csv | sed 13,15d | sed 's/./-01-'$i',/4' >> ONTREG_merged.csv
done
Example 3
Download all 20 sra files (60GB in total) from SRP032789.
for x in $(seq 1027175 1027180)
do
    wget ftp://ftp-trace.ncbi.nlm.nih.gov/sra/sra-instant/reads/ByStudy/sra/SRP/SRP032/SRP032789/SRR$x/SRR$x.sra
done
Example 4
Convert all files from DOS to Unix format
for f in *.txt; do tr -d '\r' < $f > tmp.txt; mv tmp.txt $f ; done
# Or, looping over the script's arguments
for f in $*; do tr -d '\r' < $f > tmp.txt; mv tmp.txt $f ; done
Example 5
Include all files in a directory
for f in /etc/*.conf
do
    echo "$f"
done
Example 6: use ping to find all the live machines on the network
for ip in 192.168.0.{1..255} ; do
    ping $ip -c 2 &> /dev/null ;
    if [ $? -eq 0 ]; then
        echo $ip is alive
    fi
done
Example 7: run in parallel
for ip in 192.168.0.{1..255} ; do
    (
      ping $ip -c2 &> /dev/null ;
      if [ $? -eq 0 ]; then
          echo $ip is alive
      fi
    )&
done
wait
where we enclose the loop body in ()&. () encloses a block of commands to run as a subshell and & sends it to the background. wait waits for all background jobs to complete.
Good technique !!!
- GNU parallel command
- http://unix.stackexchange.com/questions/103920/parallelize-a-bash-for-loop
- http://stackoverflow.com/questions/27934784/shell-script-to-loop-and-start-processes-in-parallel
- http://superuser.com/questions/158165/parallel-shell-loops
Functions
- http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-8.html, http://tldp.org/LDP/abs/html/functions.html
- http://www.thegeekstuff.com/2010/04/unix-bash-function-examples/
- https://www.howtoforge.com/tutorial/linux-shell-scripting-lessons-5/
#!/bin/bash
fun () { echo "This is a function"; echo; }

fun () { echo "This is a function"; echo }   # Error!

function quit {
    exit
}
function hello {
    echo Hello!
}
function e {
    echo $1
}

$ ./e World
How to find bash shell function source code on Linux/Unix
$ type -a function_name

# To list all function names
$ declare -F
$ declare -F | grep function_name
$ declare -F | grep foo
How do I find the file where a bash function is defined?
declare -F function_name
Function arguments
source ~/bin/setpath    # add bgzip & tabix directories to $PATH

function raw2exon {
    inputvcf=$1
    outputvcf=$2
    inputbed=$3
    if [[ $4 ]]; then
        oldpath=$PWD
        cd $4
    fi
    bgzip -c $inputvcf > $inputvcf.gz
    tabix -p vcf $inputvcf.gz
    head -$(grep '#' $inputvcf | wc -l) $inputvcf > $outputvcf   # header
    tabix -R $inputbed $inputvcf.gz >> $outputvcf
    wc -l $inputvcf
    wc -l $outputvcf
    rm $inputvcf.gz $inputvcf.gz.tbi
    if [[ $4 ]]; then
        cd $oldpath
    fi
}

inputbed=S04380110_Regions.bed
raw2exon 'mu0001_raw.vcf' 'mu0001_exon.vcf' $inputbed ~/Downloads/
List of commands
break    ==> escape from an enclosing for, while or until loop
:        ==> null command
continue ==> make the enclosing for, while or until loop continue at the next iteration
.        ==> execute the command in the current shell
eval     ==> evaluate arguments
exec     ==> replace the current shell with a different program
export   ==> make the variable named as its parameter available in subshells
expr     ==> evaluate its arguments as an expression
printf   ==> similar to echo
set      ==> set the parameter variables for the shell. Useful for using fields in commands that output space-separated values
shift    ==> move all the parameter variables down by one
trap     ==> specify the actions to take on receipt of signals
unset    ==> remove variables or functions from the environment
mktemp   ==> create a temporary file
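Two of these that are easy to get wrong, shift and mktemp (together with trap), in a short sketch:

#!/bin/bash
tmpfile=$(mktemp)                # create a unique temporary file
trap 'rm -f "$tmpfile"' EXIT     # remove it when the script exits, however it exits
while [ $# -gt 0 ]; do           # walk through all positional parameters
    echo "argument: $1" >> "$tmpfile"
    shift                        # drop $1 and move $2, $3, ... down by one
done
cat "$tmpfile"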
set -e, set -x and trap
set -e makes the shell exit immediately if a command exits with a non-zero status. Type help set at the command line for the full list of options. Very useful!
See also the trap command below, which is related to handling non-zero exits.
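A minimal sketch of set -e together with an ERR trap (the failing command is only an illustration):

#!/bin/bash
set -e                                          # stop at the first command that fails
trap 'echo "error near line $LINENO" >&2' ERR   # report where it failed
echo "step 1"
false                                           # non-zero exit: the trap fires and the script exits
echo "never reached"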
bash -x
Call your script with something like
bash -x -v hello_world.sh
OR
#!/bin/bash -x -v
echo Hello World!
where
- -x displays commands and their results
- -v displays everything, even comments and spaces
This is the same as using set -x in your bash script.
set -x example
Bash script
set -ex
export DEBIAN_FRONTEND=noninteractive
codename=$(lsb_release -s -c)
if [ $codename == "rafaela" ] || [ $codename == "rosa" ]; then
    codename="trusty"
fi
echo $codename
echo step 1
echo step 2
exit 0
Without -x option:
trusty
step 1
step 2
With -x option:
+ export DEBIAN_FRONTEND=noninteractive
+ DEBIAN_FRONTEND=noninteractive
++ lsb_release -s -c
+ codename=rafaela
+ '[' rafaela == rafaela ']'
+ codename=trusty
+ echo trusty
trusty
+ echo step 1
step 1
+ echo step 2
step 2
+ exit 0
trap and error handler
- http://www.computerhope.com/unix/utrap.htm
- http://linuxcommand.org/wss0160.php
- http://www.tutorialspoint.com/unix/unix-signals-traps.htm
- http://www.ibm.com/developerworks/aix/library/au-usingtraps/
- http://bash.cyberciti.biz/guide/Trap_statement
- http://steve-parker.org/sh/trap.shtml (trap with a user-defined function)
- http://www.turnkeylinux.org/blog/shell-error-handling (set -e)
- http://unix.stackexchange.com/questions/17314/what-is-signal-0-in-a-trap-command (do something on EXIT)
- http://unix.stackexchange.com/questions/79648/how-to-trigger-error-using-trap-command
The syntax to use trap command is
trap command signal
For example,
$ cat traptest.sh
#!/bin/sh

trap 'rm -f /tmp/tmp_file_$$' INT
echo creating file /tmp/tmp_file_$$
date > /tmp/tmp_file_$$

echo 'press interrupt to interrupt ...'
while [ -f /tmp/tmp_file_$$ ]; do
    echo file exists
    sleep 1
done
echo the file no longer exists

trap - INT
echo creating file /tmp/tmp_file_$$
date > /tmp/tmp_file_$$

echo 'press interrupt to interrupt ...'
while [ -f /tmp/tmp_file_$$ ]; do
    echo file exists
    sleep 1
done
echo we never get here
exit 0
will get an output like
$ ./traptest.sh
creating file /tmp/tmp_file_21389
press interrupt to interrupt ...
file exists
file exists
^Cthe file no longer exists
creating file /tmp/tmp_file_21389
press interrupt to interrupt ...
file exists
file exists
^C
The first time we use trap, it deletes the file when we hit Ctrl+C. The second time, we do not specify any command to be executed when an INT signal occurs, so the default behavior takes over: the script is terminated and the final echo and exit statements are never executed.
Note that the following two are different.
trap - INT
trap '' INT
The second command IGNORES the signal (Ctrl+C in this case), so if we used it in the script above we would not be able to use Ctrl+C to stop the execution.
Bash shell find out if a command exists or not
http://www.cyberciti.biz/faq/unix-linux-shell-find-out-posixcommand-exists-or-not/
POSIX built-in commands
- command is one of bash built-in commands (alias, bind, command, declare, echo, help, let, printf, read, source, type, typeset, ulimit and unalias).
- Bash Builtin Commands and Shell Builtin Commands
- Bash source code
- What is command on bash?
- What is the difference between a builtin command and one that is not?
- Use command command to tell if a command can be found.
- Use type command to tell if a command is built-in.
# command -v will return >0 when command1 is not found
command -v command1 >/dev/null && echo "command1 Found In \$PATH" || echo "command1 Not Found in \$PATH"

$ help command
command: command [-pVv] command [arg ...]
    Execute a simple command or display information about commands.
    Runs COMMAND with ARGS suppressing shell function lookup, or display
    information about the specified COMMANDs.  Can be used to invoke commands
    on disk when a function with the same name exists.
    Options:
      -p    use a default value for PATH that is guaranteed to find all of the standard utilities
      -v    print a description of COMMAND similar to the `type' builtin
      -V    print a more verbose description of each COMMAND
    Exit Status:
    Returns exit status of COMMAND, or failure if COMMAND is not found.

$ type command
command is a shell builtin
$ type export
export is a shell builtin
$ type wget
wget is /usr/bin/wget
$ type tophat
-bash: type: tophat: not found
$ type sleep
sleep is /bin/sleep

$ command -v tophat
$ command -v wget
/usr/bin/wget
On macOS,
$ help command
command: command [-pVv] command [arg ...]
    Runs COMMAND with ARGS ignoring shell functions.  If you have a shell
    function called `ls', and you wish to call the command `ls', you can
    say "command ls".  If the -p option is given, a default value is used
    for PATH that is guaranteed to find all of the standard utilities.  If
    the -V or -v option is given, a string is printed describing COMMAND.
    The -V option produces a more verbose description.
type -P
type -P command1 &>/dev/null && echo "Found" || echo "Not Found"

$ help type
type: type [-afptP] name [name ...]
    Display information about command type.
    For each NAME, indicate how it would be interpreted if used as a command name.
    Options:
      -a    display all locations containing an executable named NAME;
            includes aliases, builtins, and functions, if and only if
            the `-p' option is not also used
      -f    suppress shell function lookup
      -P    force a PATH search for each NAME, even if it is an alias,
            builtin, or function, and returns the name of the disk file
            that would be executed
      -p    returns either the name of the disk file that would be executed,
            or nothing if `type -t NAME' would not return `file'.
      -t    output a single word which is one of `alias', `keyword',
            `function', `builtin', `file' or `', if NAME is an alias, shell
            reserved word, shell function, shell builtin, disk file, or not
            found, respectively
    Arguments:
      NAME  Command name to be interpreted.
    Exit Status:
    Returns success if all of the NAMEs are found; fails if any are not found.

typeset: typeset [-aAfFgilrtux] [-p] name[=value] ...
    Set variable values and attributes.
    Obsolete.  See `help declare'.
Find all bash builtin commands
https://www.cyberciti.biz/faq/linux-unix-bash-shell-list-all-builtin-commands/
$ help
$ help | less
$ help | grep read
Find if a command is internal or external
$ type -a COMMAND-NAME-HERE
$ type -a cd
$ type -a uname
$ type -a :
$ command -V ls
$ command -V cd
$ command -V food
pause by read -p command
http://www.cyberciti.biz/tips/linux-unix-pause-command.html
read -p "Press [Enter] key to start backup..."
If we want to ask users about a yes/no question, we can use this method
while true; do
    read -p "Do you wish to install this program? " yn
    case $yn in
        [Yy]* ) make install; break;;
        [Nn]* ) exit;;
        * ) echo "Please answer yes or no.";;
    esac
done
OR
echo "Do you wish to install this program?" select yn in "Yes" "No"; do case $yn in Yes ) make install; break;; No ) exit;; esac done
Keyboard input and Arithmetic
http://linuxcommand.org/wss0110.php
read
#!/bin/bash
echo -n "Enter some text > "
read text
echo "You entered: $text"
Arithmetic
#!/bin/bash
# An application of the simple command
#     echo $((2+2))
# That is, when you surround an arithmetic expression with the double parentheses,
# the shell will perform arithmetic evaluation.

first_num=0
second_num=0

echo -n "Enter the first number --> "
read first_num
echo -n "Enter the second number -> "
read second_num

echo "first number + second number = $((first_num + second_num))"
echo "first number - second number = $((first_num - second_num))"
echo "first number * second number = $((first_num * second_num))"
echo "first number / second number = $((first_num / second_num))"
echo "first number % second number = $((first_num % second_num))"
echo "first number raised to the"
echo "power of the second number   = $((first_num ** second_num))"
and a program that formats an arbitrary number of seconds into hours and minutes:
#!/bin/bash
seconds=0

echo -n "Enter number of seconds > "
read seconds

# use the division operator to get the quotient
hours=$((seconds / 3600))
# use the modulo operator to get the remainder
seconds=$((seconds % 3600))
minutes=$((seconds / 60))
seconds=$((seconds % 60))

echo "$hours hour(s) $minutes minute(s) $seconds second(s)"
xargs
xargs reads items from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the command (the default command is echo, located at /bin/echo) one or more times with any initial-arguments followed by items read from standard input.
Example1 - Find files named core in or below the directory /tmp and delete them
find /tmp -name core -type f -print0 | xargs -0 /bin/rm -f
where -0 takes care of file names that contain blanks or special characters (including newlines), which would otherwise break many commands.
Another case: suppose I have a file with filename -sT. It seems not possible to delete it directly with the rm command.
$ rm "-sT" rm: invalid option -- 's' Try 'rm ./-sT' to remove the file ‘-sT’. Try 'rm --help' for more information. $ $ ls *T ls: option requires an argument -- 'T' Try 'ls --help' for more information. $ ls "*T" ls: cannot access *T: No such file or directory $ ls "*s*" ls: cannot access *s*: No such file or directory $ find . -maxdepth 1 -iname '*-sT' ./-sT $ find . -maxdepth 1 -iname '*-sT' | xargs -0 /bin/rm -f $ find . -maxdepth 1 -iname '*-sT' | xargs /bin/rm -f # WORKS
Similarly, suppose I have a file of zero size. The file name is "-f3". I cannot delete it.
$ ls -lt
total 448
-rw-r--r-- 1 mingc mingc 0 Jan 16 11:35 -f3
$ rm -f3
rm: invalid option -- '3'
Try `rm ./-f3' to remove the file `-f3'.
Try `rm --help' for more information.
$ find . -size 0 -print0 | xargs -0 rm
Example2 - Find files from the grep command and sort them by date
grep -l "Polyphen" tmp/*.* | xargs ls -lt
Example3 - Gzip with multiple jobs
CORES=$(grep -c '^processor' /proc/cpuinfo)
find /source -type f -print0 | xargs -0 -n 1 -P $CORES gzip -9
where
- find -print0 / xargs -0 protects you from whitespace in filenames
- xargs -n 1 means one gzip process per file
- xargs -P specifies the number of jobs
- gzip -9 means maximum compression
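xargs -I (not covered above) is handy when the item has to appear in the middle of the command; a sketch with hypothetical file names (echo is used to make it a dry run):

# print the cp commands that would back up every .conf file, one command per item
find /etc -maxdepth 1 -name '*.conf' | xargs -I {} echo cp {} {}.bak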
GNU Parallel
- http://www.gnu.org/software/parallel/
- https://www.gnu.org/software/parallel/parallel_tutorial.html
- https://www.biostars.org/p/63816/
- https://biowize.wordpress.com/2015/03/23/task-automation-with-bash-and-parallel/
- http://www.shakthimaan.com/posts/2014/11/27/gnu-parallel/news.html
- https://www.msi.umn.edu/support/faq/how-can-i-use-gnu-parallel-run-lot-commands-parallel
- http://deepdish.io/2014/09/15/gnu-parallel/
- http://davetang.org/muse/2013/11/18/using-gnu-parallel/
- https://vimeo.com/20838834, https://youtu.be/OpaiGYxkSuQ
A simple trick without using GNU Parallel is to run the commands in the background.
Example: same command, different command line argument
Input from the command line:
parallel echo ::: A B C
Input from a file:
parallel -a abc-file echo
Input is a STDIN:
cat abc-file | parallel echo
find . -iname "*after*" | parallel wc -l
Another similar example is to gzip each individual file
parallel gzip --best ::: *.html
Example: each command containing an index
Instead of
for i in $(seq 1 100)
do
    someCommand data$i.fastq > output$i.txt &
done
, we can use
parallel --jobs 16 someCommand data{}.fastq '>' output{}.txt ::: {1..100}
Example: each command not containing an index
for i in *gz; do
    zcat $i > $(basename $i .gz).unpacked
done
can be written as
parallel 'zcat {} > {.}.unpacked' ::: *.gz
Example: run several subscripts from a master script
Suppose I have a bunch of script files: script1.sh, script2.sh, ... And an optional master script (file ext does not end with .sh). My goal is to run them using GNU Parallel.
I can just run them using
parallel './{}' ::: *.sh
where "./" means the .sh files are located in the current directory and {} denotes each individual .sh file.
More detail:
$ mkdir test-par; cd test-par
$ echo echo A > script1.sh
$ echo echo B > script2.sh
$ echo echo C > script3.sh
$ echo echo D > script4.sh
$ chmod +x *.sh
$ cat > script        # master script (not needed for the GNU parallel method)
./script1.sh
./script2.sh
./script3.sh
./script4.sh

$ time bash script
A
B
C
D

real	0m0.025s
user	0m0.004s
sys	0m0.004s

$ time parallel './{}' ::: *.sh   # No need for a master script
                                  # may need to add the --gnu option if asked.
A
B
C
D

real	0m0.778s
user	0m0.588s
sys	0m0.144s
# longer time because of the parallel overhead
Note
- When I run scripts (seqtools_vc) sequentially I get their standard output on the screen. However, I may not get this output when I use GNU Parallel.
- There is a risk/problem if all scripts are trying to generate required/missing files when they detect the required files are absent.
Debugging Scripts
- How To Enable Shell Script Debugging Mode in Linux (very good) Some options (note options can be used in 1. the set command 2. the first line of the shell file or 3. the terminal where the shell is invoked)
- -e: exit if a command yields a nonzero exit status
- -v: short for verbose
- -n: short for noexec or no execution
- -x: short for xtrace or execution trace
- How to Trace Execution of Commands in Shell Script with Shell Tracing
- How to Perform Syntax Checking Debugging Mode in Shell Scripts
- http://www.cyberciti.biz/tips/debugging-shell-script.html
Run a shell script with the -x option. Then each line of the script is shown on stdout as it executes. We can see which line takes a long time or which line broke the code (the script still runs through).
$ bash -x script-name
- Use of set builtin command
- Use of intelligent DEBUG function
To run a bash script line by line:
- Bash Debugger
- Use Geany. See the next section.
Geany
- (Ubuntu 12.04 only): By default, it does not have the terminal tab. Install virtual terminal emulator. Run
sudo apt-get install libvte-dev
- Step 1: Keyboard shortcut. Select a region of code. Edit -> Commands -> Send selection to Terminal. You can also assign a keybinding for this. To do so: go to Edit -> Preferences and pick the Keybindings tab. See a screenshot here. I assign F12 (without any quotes) for the shortcut. This is a complete list of the keybindings.
- Step 2: Newline character. Another issue is that the last line of sent code does not have a newline character. So I need to switch to the Terminal and press Enter. The solution is to modify the <geany.conf> (find its location using locate geany.conf. On my ubuntu 14 (geany 1.26), it is under ~/.config/geany/geany.conf) and set send_selection_unsafe=true. See here.
- Step 3: PATH variable.
$ tmpname=$(basename $inputVCF)
Command 'basename' is available in '/usr/bin/basename'
The command could not be located because '/usr/bin' is not included in the PATH environment variable.
The solution is to run PATH=$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin in the Terminal window before running our script.
- Step 4 (optional): Change background color.
Another handy change to geany is to change its background to black. To do that, go to Edit -> Preferences -> Editor. Once on the Editor options level, select the Display tab to the far right of the dialog, and you will notice a checkbox marked invert syntax highlighting colors.
See this post about changing the default terminal in the Terminal window. The default is xterm (see the output of echo $TERM).
Examples
- <upgrade8.sh> file from BioLinux installation page
- Install required R packages using a mixture of bash and R.
How to wrap a long linux command
Use the backslash character. However, make sure the backslash is the last character on the line. For example, the first example below does not work because there is an extra space character after the \.
Example 1 (does not work; there is a space after the trailing backslashes)
sudo apt-get install libcap-dev libbz2-dev libgcrypt11-dev libpci-dev libnss3-dev libxcursor-dev \ 
libxcomposite-dev libxdamage-dev libxrandr-dev libdrm-dev libfontconfig1-dev libxtst-dev \ 
libcups2-dev libpulse-dev libudev-dev
vs. example 2 (works)
sudo apt-get install libcap-dev libbz2-dev libgcrypt11-dev libpci-dev libnss3-dev libxcursor-dev \
libxcomposite-dev libxdamage-dev libxrandr-dev libdrm-dev libfontconfig1-dev libxtst-dev \
libcups2-dev libpulse-dev libudev-dev
pushd and popd are used to switch between multiple directories without copying and pasting directory paths. They operate on a stack, a last-in first-out (LIFO) data structure.
pushd /var/www
pushd /usr/src
dirs
pushd +2
popd
When we have only two locations, an alternative and easier way is cd -.
cd /usr/src
# Do something
cd /var/www
cd -    # /usr/src
bd – Quickly Go Back to a Parent Directory
- https://www.tecmint.com/bd-quickly-go-back-to-a-linux-parent-directory/
- https://raw.github.com/vigneshwaranr/bd/master/bd
Create log file
- Create a log file with date
logfile="output_$(date +"%Y%m%d%H%M").log"
- Redirect the error to a log file
logfile="output_$(date +"%Y%m%d%H%M").log" module load XXX || exit 1 echo "All output redirected to '$logfile'" set -ex exec 2>$logfile # Task 1 start_time=$(date +%s) # Do something with possible error output end_time=$(date +%s) echo "Task 1 Started: tarted: "$start_date"; Ended: "$end_date"; Elapsed time: "$(($end_time - $start_time))" sec">>$logfile # Task 2 start_time=$(date +%s) # Do something with possible error output end_time=$(date +%s) echo "Task 1 Started: tarted: "$start_date"; Ended: "$end_date"; Elapsed time: "$(($end_time - $start_time))" sec">>$logfile
Text processing
tr (similar to sed)
Note that tr does not accept general regular expressions.
The tr utility copies the given input to the output with substitution or deletion of selected characters. tr is an abbreviation of translate or transliterate.
- http://www.thegeekstuff.com/2012/12/linux-tr-command/
- http://www.cyberciti.biz/faq/how-to-use-linux-unix-tr-command/
It will read from STDIN and write to STDOUT. The syntax is
tr [OPTION] SET1 [SET2]
If both SET1 and SET2 are specified and the -d option is not, then the tr command will replace each character in SET1 with the character at the same position in SET2. For example,
# translate to uppercase
$ echo 'linux' | tr "[:lower:]" "[:upper:]"

# Translate braces into parentheses
$ tr '{}' '()' < inputfile > outputfile

# Replace comma with line break
$ tr ',' '\n' < inputfile

# Translate white-space to tabs
$ echo "This is for testing" | tr [:space:] '\t'

# Join/merge all the lines in a file into a single line
$ tr -s '\n' ' ' < file.txt
# note sed cannot match \n as easily as the tr command can.
# See
# http://stackoverflow.com/questions/1251999/how-can-i-replace-a-newline-n-using-sed
# https://unix.stackexchange.com/questions/26788/using-sed-to-convert-newlines-into-spaces
tr can also be used to remove particular characters using -d option. For example,
$ echo "the geek stuff" | tr -d 't' he geek suff
A practical example
#!/bin/bash
echo -n "Enter file name : "
read myfile
echo -n "Are you sure ( yes or no ) ? "
read confirmation
confirmation="$(echo ${confirmation} | tr 'A-Z' 'a-z')"
if [ "$confirmation" == "yes" ]; then
    [ -f $myfile ] && /bin/rm $myfile || echo "Error - file $myfile not found"
else
    :   # do nothing
fi
Second example
$ ifconfig | cut -c-10 | tr -d ' ' | tr -s '\n'
eth0
eth1
ip6tnl0
lo
sit0

# without tr -s '\n' (blank lines remain between the names)
eth0

eth1

ip6tnl0

lo

sit0
where tr -d ' ' deletes every space character in each line. The \n newline character is squeezed using tr -s '\n' to produce a list of interface names. We use cut to extract the first 10 characters of each line.
Regular Expression
- A summary table
- https://regexper.com/ You can type for example '[a-z]*.[0-9]' to see what it is doing.
- ( ?[a-zA-Z]+ ?) match all words in a given text
- [0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3} match an IP address
- 15 Practical Grep Command Examples In Linux
- Period means a single character. Using Grep & Regular Expressions to Search for Text Patterns in Linux
- Linux command line: grep PATTERN FILENAME or grep -E PATTERN FILENAME (extended regular expression)
echo -e "today is Monday\nHow are you" | grep Monday grep -E "[a-z]+" filename # or egrep "[a-z]+" filename grep -i PATTERN FILENAME # ignore case grep -v PATTERN FILENAME # inverse match grep -c PATTERN FILENAME # count the number of lines in which a matching string appears grep -n PATTERN FILENAME # print the line number grep -R PATTERN DIR # recursively search many files grep -r PATTERN DIR # recursively search many files grep -e "pattern1" -e "pattern2" FILENAME # multiple patterns grep -f PATTERNFILE FILENAME # PATTERNFILE contains patterns line-by-line grep -F PATTERN FILENAME # Interpret PATTERN as a list of fixed strings, separated by # newlines, any of which is to be matched. grep -r --include *.{c,cpp} PATTERN DIR # including files in which to search grep -r --exclude "README" PATTERN DIR # excluding files in which to search grep -o \<dt\>.*<\/dt\> FILENAME # print only the matched string (<dt> .... </dt>) grep -w # checking for full words, not for sub-strings grep -E -w "SRR2923335.1|SRR2923335.1999" # match in words (either SRR2923335.1 or SRR2923335.1999)
- Extract the IP address from ifconfig command
$ ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:14:d1:b0:df:9f
          inet addr:192.168.1.172  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::214:d1ff:feb0:df9f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:29113 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:28561660 (28.5 MB)  TX bytes:3516957 (3.5 MB)

$ ifconfig eth1 | egrep -o "inet addr:[^ ]*" | grep -o "[0-9.]*"
192.168.1.172
where egrep -o "inet addr:[^ ]*" will match the pattern starting with inet addr: and ends with some non-space character sequence (specified by [^ ]*). Now in the next pipe, it prints the character combination of digits and '.'.
cut: extract columns or fields from text files
http://www.thegeekstuff.com/2013/06/cut-command-examples/
To extract fixed columns (say columns 5-7 of a file):
cut -c5-7 somefile
If the field delimiter is different from TAB you need to specify it using -d:
cut -d' ' -f100-105 myfile > outfile
# cut -d: -f6 somefile    # colon-delimited file
# grep "/bin/bash" /etc/passwd | cut -d':' -f1-4,6,7   # fields 1 through 4, 6 and 7
cut -f3 --complement somefile   # print all the columns except the third column
To specify the output delimiter, we shall use --output-delimiter. NOTE that to specify the Tab delimiter in cut, we shall use $'\t'. See http://www.computerhope.com/unix/ucut.htm. For example,
cut -f 1,3 -d ':' --output-delimiter=$'\t' somefile
If I am not sure about the number of the final field, I can leave the number off.
cut -f 1- -d ':' --output-delimiter=$'\t' somefile
awk: operate on rows and/or columns
awk is a tool designed to work with data streams. It can operate on columns and rows. It supports many built-in functionalities, such as arrays and functions, similar to the C programming language. Its biggest advantage is its flexibility.
- https://en.wikipedia.org/wiki/AWK
- https://www.tutorialspoint.com/awk/awk_workflow.htm
- http://www.thegeekstuff.com/2010/01/awk-introduction-tutorial-7-awk-print-examples
- http://www.theunixschool.com/p/awk-sed.html
- http://www.grymoire.com/Unix/Awk.html
Structure of an awk script
awk pattern { action }

awk ' BEGIN{ print "start" } pattern { AWK commands } END { print "end" } ' file
The three components (BEGIN, END and the common statement block with the pattern match option) are all optional and any of them can be absent from the script. The pattern is also called a condition.
The default delimiter for fields is a space.
Some examples:
awk 'BEGIN { i=0 } { i++ } END { print i}' filename
echo -e "line1\nline2" | awk 'BEGIN { print "start" } { print } END { print "End" }'
seq 5 | awk 'BEGIN { sum=0; print "Summation:" } { print $1"+"; sum+=$1 } END { print "=="; print sum }'

awk -F : '{print $6}' somefile                # colon-delimited file, print the 6th field (cut can do it)
# awk --field-separator="\\t" '{print $6}' filename   # tab-delimited (cut can do it)
awk -F":" '{ print $1 " " $3 }' /etc/passwd   # (cut can do it)

awk -F "\t" '{OFS="\t"} {$1="mouse"$1; print $0}' genes.gtf > genescb.gtf
# or
awk -F "\t" 'BEGIN {OFS="\t"} {$1="mouse"$1; print $0}' genes.gtf > genescb.gtf
# replace ELEMENT with mouseELEMENT for data in the 1st column; a tab separator is used for input (-F) and output (OFS)

awk 'NR % 4 == 1 {print ">" $0 } NR % 4 == 2 {print $0}' input > output
# extract rows 1,2,5,6,9,10,13,14,... from input
awk 'NR % 4 == 0 {print ">" $0 } NR % 4 == 3 {print $0}' input > output
# extract rows 3,4,7,8,11,12,15,16,... from input
awk '(NR==2),(NR==4) {print $0}' input    # print rows 2-4.

awk '{ print ($1-32)*(5/9) }'   # fahrenheit-to-celsius calculator, http://www.hcs.harvard.edu/~dholland/computers/awk.html

# http://stackoverflow.com/questions/3700957/printing-lines-from-a-file-where-a-specific-field-does-not-start-with-something
awk '$7 !~ /^mouse/ { print $0 }' input   # column 7 not starting with 'mouse'
awk '$7 ~ /^mouse/ { print $0 }' input    # column 7 starting with 'mouse'
awk '$7 ~ /mouse/ { print $0 }' input     # column 7 containing 'mouse'
AWK is most useful for finding or counting a subset of rows or columns; it is not usually used for string substitution.
Print the string between two parentheses
https://unix.stackexchange.com/questions/108250/print-the-string-between-two-parentheses
$ awk -F"[()]" '{print $2}' file

$ echo ">gi|52546690|ref|NM_001005239.1| subfamily H, member 1 (OR11H1), mRNA" | awk -F"[()]" '{print $2}'
OR11H1

$ echo ">gi|284172348|ref|NM_002668.2| proteolipid protein 2 (colonic epithelium-enriched) (PLP2), mRNA" | awk -F"[()]" '{print $2}'
colonic epithelium-enriched
# WRONG
sed (stream editor): substitution of text
By default, sed writes the substituted text to standard output without modifying the file. To save the changes back to the same file, use the -i option.
sed 's/text/replace/' file > newfile
mv newfile file
# OR better
sed -i 's/text/replace/' file
The sed command will replace the first occurrence of the pattern in each line. If we want to replace every occurrence, we need to add the g parameter at the end, as follows:
sed -i 's/pattern/replace/g' file
To remove blank lines
sed '/^$/d' filename
# method 1 - replace ] & [ by the empty string
$ echo '00[123]44' | sed 's/[][]//g'
0012344

# method 2 - use tr
$ echo '00[123]00' | tr -d '[]'
0012300
To replace all three-digit numbers with another specified word in a file
sed -i 's/\b[0-9]\{3\}\b/NUMBER/g' filename
echo -e "I love 111 but not 1111." | sed 's/\b[0-9]\{3\}\b/NUMBER/g'
where {3} is used for matching the preceding character thrice. \ in \{3\} is used to give a special meaning for { and }. \b is the word boundary marker.
Variable string and quoting
text=hello
echo hello world | sed "s/$text/HELLO/"
Double quoting expands the variable inside the sed expression by evaluating it.
Application: Get the top directory name of a tarball or zip file without extracting it
dn=`unzip -vl filename.zip | sed -n '5p' | awk '{print $8}'`   # 5 is the line number to print
echo -e "$(basename $dn)"

dn=`tar -tf filename.tar.bz2 | grep -o '^[^/]\+' | sort -u`
echo -e $dn

dn=`tar -tf filename.tar.gz | grep -o '^[^/]\+' | sort -u`
echo -e $dn

# Assume there is a sub-directory called htslibXXXX
dn=$(basename `find -maxdepth 1 -name 'htslib*'`)
echo -e $dn
Application: Grab the line number from the 'grep -n' command output
Follow here
grep -n 'regex' filename | sed 's/^\([0-9]\+\):.*$/\1/'   # return line numbers for each match
# OR
grep -n 'regex' filename | awk -F: '{print $1}'

echo 123:ABCD | sed 's/^\([0-9]\+\):.*$/\1/'
# 123
where \1 means to keep the substring of the pattern and \( & \) are used to mark the pattern. See http://www.grymoire.com/Unix/Sed.html for more examples, e.g. search repeating words or special patterns.
If we want to find the top directory for a zipped file (see wikipedia for the zip format), we can use
unzip -vl snpEff.zip | head | grep -n 'CRC-32' | awk -F: '{print $1}'
Application: Delete first few characters on each row
http://www.theunixschool.com/2014/08/sed-examples-remove-delete-chars-from-line-file.html
- To remove the first n characters of every line:
# delete the first 4 characters from each line
$ sed -r 's/.{4}//' file
Substitution of text: perl
- Add or remove 'chr' from vcf file https://www.biostars.org/p/18530/
How to delete the first few rows of a text file
Suppose we want to remove the first 3 rows of a text file
- sed
$ sed -e '1,3d' < t.txt      # output to screen
$ sed -i -e 1,3d yourfile    # directly change the file
- tail
$ tail -n +4 t.txt # output to screen
- awk
$ awk 'NR > 3 { print }' < t.txt # output to screen
Show the first few characters from a text file
head -c 50 file # return the first 50 bytes
cat: merge by rows
cat file1 file2 > output
paste: merge by columns
paste -d"\t" file1 file2 file3 > output
paste file1 file2 file3 | column -s $'\t' -t > output
Web
Reference: Linux Shell Scripting Cookbook
Copy a complete website
wget --mirror --convert-links URL
# OR
wget -r -N -k -l DEPTH URL
HTTP or FTP authentication
wget --user username --password pass URL
Download a web page as plain text (instead of HTML text)
lynx URL -dump > TextWebPage.txt
cURL
curl http://google.com -o index.html --progress
curl http://google.com --silent -o index.html

# Cookies
curl http://example.com --cookie "user=ABCD;pass=EFGH"
curl URL --cookie-jar cookie_file

# Setting a user agent string
# http://www.useragentstring.com/pages/useragentstring.php
curl URL --user-agent "Mozilla/5.0"

# Authenticating
curl -u user:pass http://test_auth.com
curl -u user http://test_auth.com

# Printing response headers excluding the data
# For example, to check whether a page is reachable or not
# by checking the 'Content-length' parameter.
curl -I URL
Image crawler and downloader
#!/bin/bash
#Desc: Images downloader
#Filename: img_downloader.sh

if [ $# -ne 3 ]; then
  echo "Usage: $0 URL -d DIRECTORY"
  exit -1
fi

for i in {1..4}
do
  case $1 in
    -d) shift; directory=$1; shift ;;
     *) url=${url:-$1}; shift ;;
  esac
done

mkdir -p $directory;
baseurl=$(echo $url | egrep -o "https?://[a-z.]+")

echo Downloading $url
curl -s $url | egrep -o "<img src=[^>]*>" | \
  sed 's/<img src=\"\([^"]*\).*/\1/g' > /tmp/$$.list

sed -i "s|^/|$baseurl/|" /tmp/$$.list

cd $directory;

while read filename;
do
  echo Downloading $filename
  curl -s -O "$filename" --silent
done < /tmp/$$.list
Find broken links in a website by lynx -traversal
#!/bin/bash
#Desc: Find broken links in a website

if [ $# -ne 1 ]; then
  echo -e "Usage: $0 URL\n"
  exit 1;
fi

echo Broken links:

mkdir /tmp/$$.lynx
cd /tmp/$$.lynx

lynx -traversal $1 > /dev/null
count=0;

sort -u reject.dat > links.txt

while read link;
do
  output=`curl -I $link -s | grep "HTTP/.*OK"`;
  if [[ -z $output ]]; then
    echo $link;
    let count++
  fi
done < links.txt

[ $count -eq 0 ] && echo No broken links found.
Track changes to a website
#!/bin/bash
#Desc: Script to track changes to webpage

if [ $# -ne 1 ]; then
  echo -e "Usage: $0 URL\n"
  exit 1;
fi

first_time=0
# Not first time
if [ ! -e "last.html" ]; then
  first_time=1
  # Set it is the first time run
fi

curl --silent $1 -o recent.html

if [ $first_time -ne 1 ]; then
  changes=$(diff -u last.html recent.html)
  if [ -n "$changes" ]; then
    echo -e "Changes:\n"
    echo "$changes"
  else
    echo -e "\nWebsite has no changes"
  fi
else
  echo "[First run] Archiving.."
fi

cp recent.html last.html
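A possible way to use the script above (the file name change_track.sh is only illustrative): run it by hand for a quick check, or schedule it with cron so the page is polled regularly.

./change_track.sh http://example.com
# crontab entry to check once an hour (paths are placeholders):
# 0 * * * * /path/to/change_track.sh http://example.com >> /tmp/changes.log 2>&1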
POST/GET
Look at the web page source and find the 'name' attribute of the <input> tags; these names become the POST variable names.
http://www.w3schools.com/html/html_forms.asp
# -d is used for posting in curl
curl URL -d "postvar1=var1&postvar2=var2"

# OR use wget with the '--post-data' option
wget URL --post-data "postvar1=var1&postvar2=var2" -O out.html
Change detection of a website
- http://bhfsteve.blogspot.com/2013/03/monitoring-web-page-for-changes-using.html
- https://www.reddit.com/r/commandline/comments/2e2bkj/linux_software_to_monitor_website_changes/
- http://specto.sourceforge.net/ and https://www.linux.com/news/monitor-web-page-changes-specto
- http://www.mostlymaths.net/2010/01/cron-diff-wget-watch-changes-in-webpage.html
Working with Files
iconv command
- How to Convert Files to UTF-8 Encoding in Linux
- https://stackoverflow.com/questions/11316986/how-to-convert-iso8859-15-to-utf8
$ file test.R
test.R: ISO-8859 text, with CRLF line terminators
$ iconv -f ISO-8859 -t UTF-8 test.R     # 'ISO-8859' is not supported
$ iconv -t UTF-8 test.R                 # partial conversion??
$ iconv -f ISO-8859-1 -t UTF-8 test.R   # Works
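The file output above also reports CRLF line terminators; converting those is a separate step from the encoding conversion, e.g.

sed -i 's/\r$//' test.R    # or: dos2unix test.R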
nl command
Add line numbers to a text file
$ cat demo_file
THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE.
this line is the 1st lower case line in this file.
This Line Has All Its First Character Of The Word With Upper Case.


Two lines above this line is empty.
And this is the last line.

$ nl demo_file
     1  THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE.
     2  this line is the 1st lower case line in this file.
     3  This Line Has All Its First Character Of The Word With Upper Case.


     4  Two lines above this line is empty.
     5  And this is the last line.
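Note that nl skips empty lines by default; the body-numbering style 'a' numbers every line, blank or not:

$ nl -ba demo_file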
file command
$ file thumbs/g7.jpg
thumbs/g7.jpg: JPEG image data, JFIF standard 1.01, resolution (DPI), density 72x72, segment length 16, Exif Standard: [TIFF image data, little-endian, direntries=10, orientation=upper-left, xresolution=134, yresolution=142, resolutionunit=2, software=Adobe Photoshop CS Windows, datetime=2004:03:31 22:28:58], baseline, precision 8, 100x75, frames 3

$ file index.html
index.html: HTML document, ASCII text

$ file 2742OS_5_01.sh
2742OS_5_01.sh: Bourne-Again shell script, ASCII text executable

$ file R-3.2.3.tar.gz
R-3.2.3.tar.gz: gzip compressed data, last modified: Thu Dec 10 03:12:50 2015, from Unix
print by skipping rows
http://stackoverflow.com/questions/604864/print-a-file-skipping-x-lines-in-bash
$ tail -n +<N+1> <filename>   # excluding the first N lines,
                              # i.e. print starting at line N+1
$ tail -n +11 /tmp/myfile     # start at line 11, or skip the first 10 lines
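A related one-liner (not from the linked thread): print only a range of lines, e.g. lines 11 through 20:

$ sed -n '11,20p' /tmp/myfile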
tail -f (follow)
When we use the '-f' (follow) option, we can monitor a growing file. For example, create a new file called tmp.txt and run 'tail -f tmp.txt'. In another terminal run 'for i in {0..100}; do sleep 2; echo $i >> tmp.txt; done'. The first terminal will print each new line as it is appended to tmp.txt.
Some practical examples:
- Monitor system changes
sudo tail -f /var/log/syslog
- Monitor a file and stop tail automatically when a given process dies
PID=$(pidof Foo)
tail -f textfile --pid $PID
If a process Foo (e.g. gedit) is appending data to a file, this keeps tail -f running only until the process Foo dies.
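A self-contained illustration (the file name is made up): start the process in the background and follow the file only for as long as that process is alive.

gedit notes.txt &
tail -f notes.txt --pid $!    # $! holds the PID of the background gedit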
Low-level File Access
- file descriptors: 0 means standard input, 1 means standard output, 2 means standard error.
- ssize_t write(int fildes, const void *buf, size_t nbytes); returns the number of bytes actually written, or -1 on error.
#include <unistd.h>
#include <stdlib.h>

int main()
{
    if ((write(1, "Here is some data\n", 18)) != 18)
        write(2, "A write error has occurred on file descriptor\n", 46);
    exit(0);
}
- ssize_t read(int fildes, void *buf, size_t nbytes); returns the number of data bytes actually read. If a read call returns 0, it had nothing to read; it reached the end of the file. An error on the call will cause it to return -1.
- To create a new file descriptor we use the open system call. int open(const char *path, int oflags, mode_t mode);
- The next program copies file.in to file.out, one character at a time.
#include <unistd.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdlib.h>

int main()
{
    char c;
    int in, out;

    in = open("file.in", O_RDONLY);
    out = open("file.out", O_WRONLY|O_CREAT, S_IRUSR|S_IWUSR);
    while(read(in, &c, 1) == 1)
        write(out, &c, 1);
    exit(0);
}
The Standard I/O Library
- fopen, fclose
- fread, fwrite
- fflush
- fseek
- fgetc, getc, getchar
- fputc, putc, putchar
- fgets, gets
- printf, fprintf and sprintf
- scanf, fscanf and sscanf
Formatted Input and Output
- printf, fprintf and sprintf
- scanf, fscanf and sscanf
Stream Errors
File and Directory Maintenance
Scanning Directories
- opendir, closedir
- readdir
- telldir
- seekdir
UNIX environment
Logging
Resources and Limits
Terminals
Reading from and Writing to the Terminal
The termios Structure
Terminal Output
Detecting Keystrokes
Curses
A programming technique that sits between the plain command line and a full GUI.
Example: vi.
Data Management
Development Tools
GNU Make and Makefiles
- minimal make A minimal tutorial on make from Karl Broman.
- http://makefiletutorial.com/index.html
- Notes for new Make users
Writing a Manual Page
Distributing Software
The patch Program
Debugging
debug a bash shell script
How To Debug a Bash Shell Script Under Linux or UNIX
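The usual first steps (standard bash features, not specific to the linked article): run the whole script with tracing, or turn tracing on only around the part you suspect.

bash -x myscript.sh    # myscript.sh is a placeholder name; -x prints each command before running it

# inside a script:
set -x        # start tracing
cp "$src" "$dst"
set +x        # stop tracing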
gdb
Processes and Signals
Find a process ID by its name
Use pgrep https://askubuntu.com/questions/612315/how-do-i-search-for-a-process-by-name-without-using-grep. For example (tested on Linux and macOS),
$ pgrep RStudio      # assume RStudio is running
27043
$ pgrep geany        # geany is not running
$
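Two related, standard options (shown with illustrative patterns): -f matches against the full command line, and pkill sends a signal to every matching process.

$ pgrep -f rsession    # 'rsession' is just an example pattern
$ pkill geany          # send SIGTERM (the default) to all processes named geany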