Scripts in general, and bash in particular, fill an enormous amount of my time. The ability to create scripts that can handle a number of diverse inputs is directly related to how flexible and robust the code base is. The most common problem when handling files in bash is dealing with spaces. Linux is both case sensitive and handles spaces with less… grace… than some OSes, and bash suffers from the same issues. The easiest way to handle this is with the IFS variable. IFS is simply bash's field separator (whitespace by default) and, because it is a modifiable shell variable, you can set it to something you won't run into in your filenames. For example:
#!/bin/bash
KEEPOLDVALUE=$IFS
IFS=$(echo -en "\n\b")
for var in *
do
# Do something with each file, even if its name contains spaces
echo "$var"
done
IFS=$KEEPOLDVALUE
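The same trick is what keeps filenames with spaces intact when they come from a command rather than a glob (say, the output of find). Here is a minimal sketch along the same lines, assuming a made-up search for .txt files under /tmp:
OLDVALUE=$IFS
IFS=$(echo -en "\n\b")
for f in $(find /tmp -name '*.txt')
do
# Each path survives as a single token, spaces and all
echo "Found: $f"
done
IFS=$OLDVALUE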
That will solve the problem of dealing with spaces in simple/basic scripts written for quick and dirty system management. That said, when you are building scripts to use regularly you will need to be more comprehensive when testing them.
A good place to start is the -u option. Whenever you test new scripts, try running them without any arguments but WITH -u. If you fail to correctly initialize your variables, running with -u will warn you that there is a problem. For example:
$ bash -u /tmp/mynewtestscript.sh
/tmp/mynewtestscript.sh: line 34: DIRNAME: unbound variable
We can then verify that we have (at the very least) correctly initialized any variables that we will use and reduce the probability of side-effects.
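You can also bake the check into the script itself so it applies on every run, not just when you remember the flag; a minimal sketch (the DIRNAME value is just an example):
#!/bin/bash
set -u # abort with an "unbound variable" error if an uninitialized variable is referenced
DIRNAME="/var/log" # initialize before first use
echo "Working in $DIRNAME"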
A problem I ran into a lot with my early scripts was that I often needed standard output from one command to be sent to another command as command-line arguments (as opposed to standard input). One way to solve this is with bash's built-in command substitution, for example:
echo $(ls)
But this isn’t always very elegant to implement directly, so another option is the wonderful xargs command. xargs breaks the output of one command into individual arguments that it feeds to another command. This allows you to use standard piping between otherwise un-pipeable commands. For example:
ls | xargs echo
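The same pattern works with any command that expects arguments rather than standard input. For example, a hypothetical cleanup of old log files (the path and age are made up for illustration):
# xargs turns find's output into arguments for rm
find /var/log/myapp -name '*.log' -mtime +30 | xargs rm -f
# If the names may contain spaces, the -print0/-0 pair is the safer variant:
find /var/log/myapp -name '*.log' -mtime +30 -print0 | xargs -0 rm -f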
Sometimes joining two vars can be complicated when the combined value needs characters between them. To solve this you can use brace-delimited parameter expansion. What this means, effectively, is that $tempvar and ${tempvar} refer to the same variable, but the braces mark exactly where the name ends. This allows you to combine variables with in-between characters without concern.
_a="test"
_b="/file"
newvar=${_a}folder${_b}
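After that, $newvar holds the combined string, which a quick echo confirms:
echo "$newvar" # prints: testfolder/file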
Another useful tip (found via this article from hacktux) is the mktemp executable for temporary file creation. If you need a temp file (or directory) to store intermediate data, try the following:
tempfile=$(/bin/mktemp)
tempdir=$(/bin/mktemp -d)
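A habit worth pairing with mktemp is removing the file when the script exits, so failed runs don't litter /tmp; a minimal sketch using trap (the df/grep steps are just placeholder work):
tempfile=$(/bin/mktemp)
trap 'rm -f "$tempfile"' EXIT # clean up the temp file on any exit
df -h > "$tempfile" # stash some intermediate data
grep '/dev/sda' "$tempfile"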
Another common problem for bash scripts used for administration is that they need to be run as root (or via sudo on Ubuntu systems). The way to solve this is to check the EUID shell variable. EUID is always 0 for root, and you can put a simple check at the beginning of your script with the following:
if [[ $EUID -ne 0 ]]; then
echo "This script must be run as root" 1>&2
exit 1
fi
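Run without privileges, the check fires and the script stops (myadminscript.sh here is just a stand-in name):
$ ./myadminscript.sh
This script must be run as root
$ echo $?
1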
Need a few random characters for your bash script? Use dd and /dev/urandom to pull however many bytes you need. For example:
random="$(dd if=/dev/urandom bs=3 count=1)"
That will give you three random bytes (stored in $random) out of urandom's current entropy pool. Unfortunately those raw bytes are rarely printable characters, so you will mostly see ?? symbols. To get something readable, just pipe the output through base64 (the conversion will give you more than 3 characters, so be sure to trim it back down to the number of characters you need):
random="$(dd if=/dev/urandom bs=3 count=1 | base64)"
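Since base64 pads the output out past three characters, one simple way to trim it back down is to keep only the first few characters you actually need, for example with cut (the 2>/dev/null just hides dd's status chatter):
random="$(dd if=/dev/urandom bs=3 count=1 2>/dev/null | base64 | cut -c1-3)"
echo "$random"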