But what’s my motivation?

Scripts in general, and bash in particular, fill an enormous amount of my time.  The ability to create scripts that can handle a number of diverse inputs is directly related to how flexible and robust the code-base is.  The most common problem when handling files in Bash is spaces.  Linux is both case sensitive and handles spaces with less… grace… than some OSes, and Bash suffers from the same issues.  The easiest way to handle this is with the IFS system variable.  IFS is simply the field delimiter for the shell (i.e. whitespace by default) and, because it is a modifiable system variable, you can set it to something that you will not run into.  For example:

#!/bin/bash
KEEPOLDVALUE=$IFS
IFS=$(echo -en "\n\b")
for var in *
do
    # Do something with each file in the directory
    echo "$var"
done
IFS=$KEEPOLDVALUE

That will solve the problem of dealing with spaces in simple/basic scripts written for quick and dirty system management.  That said, when you are building scripts to use regularly you will need to be more comprehensive when testing your script.

A good place to start is by setting -u. Whenever testing new scripts, try running them without any arguments but WITH -u. If you fail to correctly initialize your variables, running them with -u will warn you that there is a problem. For example:

$ bash -u /tmp/mynewtestscript.sh
/tmp/mynewtestscript.sh: line 34: DIRNAME: unbound variable

We can then verify that we have (at the very least) correctly initialized any variables that we will use and reduce the probability of side-effects.
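
You can also bake the check into the script itself so it applies on every run, not just when you remember the flag.  A minimal sketch (the DIRNAME variable is just an illustration):

#!/bin/bash
set -u    # abort with an error on any reference to an unset variable
echo "$DIRNAME"    # if DIRNAME was never initialized, the script stops here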

A problem I ran into a lot with my early scripts was that I often needed standard output from one command to be sent to another command as command-line arguments (as opposed to standard input). The best way to solve this is with the bash built-in command substitution form, for example:

echo $(ls)

But this isn’t always very elegant to implement directly, so another option is the wonderful xargs command.  xargs breaks the output of one command into individual arguments that it feeds to another command.  This allows you to use standard piping between otherwise un-pipeable commands.  For example:

ls | xargs echo
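
Where this really pays off is combining it with find; the following sketch (the path and pattern are hypothetical) uses null-delimited output so file names with spaces survive the trip through the pipe:

# -print0 and -0 delimit names with NUL bytes, so spaces in file names are safe
find /tmp/myproject -name '*.log' -print0 | xargs -0 rm -f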

Sometimes joining two vars can be complicated when those var names need characters between them.  To solve this you can use parameter substitution.  What this means, effectively, is that the var $tempvar and ${tempvar} are the same thing.  This allows you to combine variables with in-between characters without concern.

_a="test"
_b="/file"
newvar=${_a}folder${_b}
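
A quick check of the result:

echo "$newvar"    # prints: testfolder/file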

Another useful tip (found via this article from hacktux) is the mktemp executable for temporary file creation.  If you need a temp file or directory to store intermediate data, try the following:

tempfile=$(/bin/mktemp)
tempdir=$(/bin/mktemp -d)
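
One thing worth adding, as a sketch: pair mktemp with a trap so the temporary files are removed even when the script exits early.

tempfile=$(/bin/mktemp)
tempdir=$(/bin/mktemp -d)
# clean up both whenever the script exits, even on error
trap 'rm -rf "$tempfile" "$tempdir"' EXIT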

Another common problem for bash scripts being used for administration is that they need to be run as root (or sudo root on Ubuntu systems.)  The way to solve this is to check the EUID shell variable that bash maintains.  EUID will always be 0 for root, so you can put a simple check at the beginning of your script with the following:

if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root" 1>&2
    exit 1
fi

Need a random number of characters for your bash script?  Use dd and /dev/urandom to get a variable number of characters.  For example:

random="$(dd if=/dev/urandom bs=3 count=1)"

This will give you three random bytes (stored in $random) out of the current /dev/urandom entropy pool.  Unfortunately the raw bytes are usually not printable characters, giving you a bunch of ?? symbols.  To convert them to something printable just pipe the output through base64 (the conversion will give you more than 3 characters, so be sure to trim to the number of characters you need):

random="$(dd if=/dev/urandom bs=3 count=1 | base64)"
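
For example, to silence dd's transfer statistics and trim the result down to exactly three characters, something like:

random="$(dd if=/dev/urandom bs=3 count=1 2>/dev/null | base64)"
random="${random:0:3}"    # keep only the first three characters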

Starting Dropbox

My brother wanted a quick explanation on how to create an executable to start Dropbox. While I was helping him he was kind enough to mock my freakishly awesome IBM Model M Unicomp keyboard… the greatest keyboard in the world. This setup is designed to work with the local tar.gz install of Dropbox on Linux and NOT the rpm-based install (which requires Gnome for the file manager.)

Create a new file in your ~/bin directory called startdropbox.sh with the following content:

#!/bin/bash
~/.dropbox-dist/dropboxd &

After you have saved the file, make it executable by typing:

chmod 755 ~/bin/startdropbox.sh

Now you can start up Dropbox at any time by running startdropbox.sh.

AND I LOVE MY CLICKY KEYBOARD BITCHES!!!

Killing Me Softly

Not sure how it was possible that I have not done this before, but I recently realized I needed to forcibly remove a user from a login session on a remote Linux system and didn’t have a better idea than simply killing off all their individual system processes one at a time.  Thankfully, Linux provides a much more useful way of dealing with kicking users from terminal sessions (and thereby shutting down their entire process tree as well.)

$ who -u

This will give you a list of user sessions based on which terminal they are logged into.  This includes X sessions, virtual terminals, remote sessions, and any text mode logins.  The output should look something like this:

bobby    :0       2011-04-21 20:01   ?       12122
bobby    pts/0    2011-04-21 20:01   .       12405  (:0)
bobby    pts/1    2011-04-21 20:01   02:10   12322  (:0)
root     pts/2    2011-04-21 22:19   .       13887  (10.0.0.101)

You can then kill the login session by looking at the last column and killing that process ID.  In the example above you can see there are two virtual terminals (i.e. the pts/X sessions), a remote session (the ssh session I am using to access the machine from host 10.0.0.101), and a single local login on terminal session :0.  Because the :0 session is responsible for starting the virtual terminals, you can simply kill process 12122 to force a logout of all three of bobby’s sessions.

$ kill 12122
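
If you want to kick every session a particular user owns in one shot, here is a quick sketch (the user name is just an example) that pulls the PID column out of who -u:

# field 6 of who -u output is the session PID; kill every session owned by bobby
who -u | awk '$1 == "bobby" { print $6 }' | xargs kill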

Entirely too easy.  If you would like to be kind (I am NOT) and actually warn your users that you are about to kick them off, you can send them a system message using the standard Unix wall command.  If you type wall you will get an open text area to type your message (end the message by pressing Ctrl+D) or you can pipe a message to standard input like so:

$ echo "My name is Inigo Montoya. You killed my father. Prepare to die." | wall

Wall will send a system message to every terminal session that allows messages (if you are root, that means everybody.)

scientia potentia est

What most people are objecting to is that the market gives people what the people want instead of what the person talking thinks the people ought to want.
–Milton Friedman

Milton Friedman is easily the most influential economist since John Maynard Keynes. What makes him such a powerful voice for the free market is his ability to distill complex macroeconomic theory into chunks non-economists can easily understand.  He is so influential, and understandable, that PBS actually produced a series with him explaining economic ideas and debating these thoughts with other prominent scholars, politicians, and businessmen. The series was called “Free To Choose.”

Unfortunately most of us don’t have hours’ worth of time to watch all the episodes (although you should.)  To get a quick overview of each of his core concepts, Trent Liberty has produced a series of 7 videos called The Friedman Series.  The background audio ranges from amazing to annoying, but the topic selection is outstanding.  If you get nothing else from the videos, always remember that the biggest danger to liberty isn’t inequality, but the sincerity of the well-intentioned.

To Any Place Worth Going

One of the best parts of Unix systems is that fundamentally they are built as development platforms.  The most common text command interface for Unix is called Bash (the Bourne Again Shell) and it is a full-blown scriptable interface allowing direct interaction with command line programs and giving the user the ability to string together these programs into really powerful applications.  Because of the power of this interface, developers have over many years improved the ability to use it directly as well.  Things like <tab> completion are well known, but how about reverse command searches, a built-in text editor mode, and shortcuts galore?  I have been trying to use more and more of this “built-in” bash functionality, so below are some of my favorite shortcuts and features.

Shortcuts:

Ctrl + A Go to the beginning of the line you are currently typing on
Ctrl + E Go to the end of the line you are currently typing on
Ctrl + L Clears the Screen, similar to the clear command
Ctrl + U Clears the line before the cursor position. If you are at the end of the line, clears the entire line.  Especially useful when you know you’ve mis-typed a password and want to start again.
Ctrl + K Cut the line after the cursor, inverse of the Ctrl + U
Ctrl + Y Pastes the content from a previous Ctrl + K or Ctrl + U cut.
Ctrl + H Same as backspace
Ctrl + R Search through previously used commands
Ctrl + C Sends SIGINT to whatever you are running (effectively terminating the program.)
Ctrl + D Exit the current shell
Ctrl + Z Puts whatever you are running into a suspended background process. fg restores it.
Ctrl + W Delete the word before the cursor
Ctrl + T Swap the last two characters before the cursor
Alt + T Swap the last two words before the cursor
Alt + F Move cursor forward one word on the current line
Alt + B Move cursor backward one word on the current line
Tab Auto-complete files and folder names (if there is a multiple-option match, hitting Tab twice will list all possible values.)
Alt + . Paste the previous command’s final argument (great for running different commands on the same file path.)

To see a complete list of all bound bash shortcuts you can type

bind -P | less

but you may need to look up some bash hex character values to understand all of them.  What is more, you can actually bind shortcuts to almost anything you can think of, including launching applications, for example:

$ bind -x '"\C-e": firefox'

will launch the Firefox web browser from the command line when you hit Ctrl + e.

Another one of my favorite commands is fc (fix command.) If you simply type

fc

fc will copy your most recent command from bash history into your preferred editor (vi by default on most systems) and allow you to edit it there.  When you save and exit the editor, it will automatically copy the contents into the bash session and execute them.  Additionally, if you are interested in editing some other history item you can type

fc -l

to get a full history with numbers beside them.  Then type

fc <num>

where <num> is the history number you want to edit.  In a former life my bash terminal and fc were all I needed for most SQL testing.

A Person Wrapped up in Himself

Package building under RPM hasn’t actually changed a whole lot in the last decade.  While I have notes scattered around the website on building and maintaining package repositories, the one part that has changed significantly is the use of git for version control.  Thankfully tagging, archiving, and building packages is pretty simple under git, basically consisting of the following three steps:

  • git tag -a 1.1 -m "Version 1.1"
  • git archive --prefix=projectname-1.1/ 1.1 | bzip2 > ~/Temp/projectname-1.1.tar.bz2
  • rpmbuild -tb ~/Temp/projectname-1.1.tar.bz2

The -a option will create a “true” tag object, although it will not be signed with a digital key.  Of course the rpmbuild command depends on a correctly formatted spec file in the base of your project directory.  Make sure the spec file version and changelog have the same version number as your tag.  FYI, for scripting purposes it is good to remember that changelog dates use the following date command format:

date +'%a %b %d %Y'
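
For example (the name, email, and version here are hypothetical), a build script could append a correctly dated changelog entry like so:

# append a changelog entry using the date format rpm expects
echo "* $(date +'%a %b %d %Y') John Doe <jdoe@example.com> - 1.1-1" >> projectname.spec
echo "- Tagged version 1.1" >> projectname.spec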

The reason I mention scripting is that I am working on extending my automated build script for software packages I manage.  Way back in my days at DPS I had developed a bash configuration that would allow me to download, package, and build a piece of software directly from the CVS repository.

When I moved to Cobb Engineering I also changed version control software and started using SVN.  Extending my previous script to support both CVS and SVN wasn’t too hard.  Now I have a number of personal projects at home as well as software examples I keep for my students at ITT-Tech, all of which is stored/managed in Git.  The new software package script is almost done but I would really like to be able to update a spec file, changelog, tag, package, and build with one command.

The most useful part of my build script is that it doesn’t require me to spend any time remembering how to use it.  By default it has both auto-complete as well as logical default behaviors.  I just run buildpackage and it will list the available projects that I have ready to build.  If I run buildpackage project it will present me with a list of versions that have already been tagged.  One of these days I will post it publicly, but I seriously doubt there is much interest in the broader community as almost everyone who develops at this level seems to already have their own custom build scripts.

more than you can afford to lose

I honestly don’t know why some links seem more appropriate in a blog, as compared to my freakishly huge bookmark list.

  • Getting Started with NoSQL – I tell my students that much of the support development they do in the future will be on MySQL and much of the new development they do themselves will be with NoSQL.  Good intro to CouchDB, my current favorite.
  • Fedora Packaging Guidelines for cpanspec – I started writing a script like cpanspec almost 8 years ago, but never finished because the complexity of figuring out CPAN dependencies was taking too much time away from actual development.  This thing is an absolute MUST for Perl developers using RPM based systems.
  • Renaming a Git repository stored in gitolite – You know a technology is a game changer when it not only solves problems you have but solves problems you didn’t even realize were problems.  Git is like that and gitolite is how I manage my git repositories.  After having to do a Google search on this… twice, I figure I better save the link.
  • Moving files from one git repository to another while preserving history – Title says it all.  The only thing to add is that this post includes a link to Linus’ “greatest git merge ever” post, which was not only a cool post (if you are a total nut-job computer geek) but started a pretty amazing thread about “cool” git merges.
  • Using git archive – I use something like git archive --prefix=proname-1.1/ 1.1 | bzip2 > proname-1.1.tar.bz2 to create my deployment packages on Linux. This is a nice document listing examples and use cases for git archive.  This only works if 1.1 is a branch or has been tagged via something like git tag -a 1.1 -m 'Message about tag.'
  • Telling Linux to ignore a bad part of memory – Is memtest freaking out about some bad memory?  How about simply telling the Linux kernel not to use that chunk?  This modifies the grub options so the Linux kernel knows which part of memory not to use before it actually loads itself up.

We have been betrayed by both

I know this is basically a rant, but there seems to be a fundamental disconnect between people’s understanding of economics and reality.

Just to be absolutely clear, undue political influence by corporations is directly related to the power, breadth, and size of the government they work to influence. This means that, by its very nature, the enlargement (and especially centralization) of government works as an agent for the expansion of corporate influence and NOT, as many progressives hope, a counterbalance to it.

Corporatism is a symptom of the problem, not the cause. Any regulatory attempt to alleviate the pain caused by that symptom only acts, ultimately, to aggravate the problem.  While attention and public outcry may temporarily hide the influence of business, capital never loses attention and will quickly take over when politics has moved on.

Before some conservatives start yelling hallelujah from the roof-tops, understand the implications of this.  The opposite of supporting government is NOT supporting business, because being pro-business is effectively the same as being pro-government. Ultimately business will work to extend its competitive advantage at the cost of consumer independence, and there is no better way to extend a business advantage than to legislate one.  Remember, every monopoly throughout history was created by an act of government-legislated preference.

The only solution to corporatism and socialism is capitalism, a real free market.  The free market is not just the only way to limit government influence, but it is the only way to limit corporate influence as well.

not merely necessary to life

After we have fastcgi working for Catalyst, we then need a proxy HTTP service to actually do the page response work.  There are a number of solutions available to handle this, but recently I have been messing with nginx.  nginx is a fastcgi-compatible web server designed specifically for speed and quickness. While nowhere near as feature-complete as Apache, it provides enough functionality to host some very large, very busy web service companies.

In the interest of total disclosure, a fairly significant portion of the information I am providing I gleaned off an outstanding tutorial by Richard Wallman.  Basically you need to create a new server instance config file for your new nginx application proxy.  Open a text editor as root, create a new file /etc/nginx/conf.d/mynewserver.conf, and add the following:

server {
    server_name  app.mysite.org;
    # Let's have a server alias as well

    access_log  /var/log/nginx/mysite.access.log;
    root   /usr/share/nginx/html;

    # Serve static content statically
    expires +30d;
    location /static {
        add_header Cache-control public;
        root /usr/share/nginx/html/root/;
    }

    # We pass the rest to our FastCGI application server
    location / {
        # We also set some headers to prevent proxies
        add_header Pragma "no-cache";
        add_header Cache-control "no-cache, must-revalidate, private, no-store";
        expires -1s;

        # Where our FastCGI app server is listening
        fastcgi_pass   127.0.0.1:8100;

        include /etc/nginx/fastcgi_params;
        fastcgi_param   SCRIPT_NAME     /;
        fastcgi_param   PATH_INFO       $fastcgi_script_name;
    }
}

If the address on the fastcgi_pass line matches the IP and port your FastCGI application server is listening on, you should be in good shape.  The other thing to notice about the configuration above is the location /static block; this serves requests for static content directly through nginx without going through fastcgi.  This creates less overhead and faster responses for things like images, css, and javascript.
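
Once nginx has reloaded, a quick sanity check from the shell (the host name matches the server_name above):

# ask nginx for the application root, forcing the right virtual host
curl -H 'Host: app.mysite.org' http://127.0.0.1/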

More later, my kids are in the middle of a mean rendition of Chopsticks.

The great growling engine of change

To implement SSL for a Perl Catalyst application it is necessary to use an SSL proxy to relay the HTTPS requests through to the application.  This setup also means we can use fastcgi for lightweight web calls instead of a full HTTP server with all the overhead that requires.  That said, it has not been entirely straightforward setting up the proxy.  Therefore, I have started some documentation on getting my current setup running.

This tutorial was what I used to get my company’s Catalyst/CouchDB application running on a non-local environment, because the official Catalyst tutorial was somewhat… lacking.

To get started we configure fastcgi.  Catalyst kindly provides a fastcgi handler as part of the scripts generated during project creation. To be able to use the fastcgi handler with an HTTP proxy we need to set up Catalyst to listen on an internal IP port (much like using sockets or an internal bus) and then configure our HTTP proxy to forward requests to that internal port.  A quick example test looks something like this:

script/myapp_fastcgi.pl -l 127.0.0.1:8100 -n 5
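
If you want to poke the application server directly before wiring up the proxy, one option (assuming the cgi-fcgi utility from the FastCGI developer kit is installed) is to speak FastCGI to that port by hand:

# send a single GET request for / straight to the FastCGI listener
REQUEST_METHOD=GET SCRIPT_NAME=/ PATH_INFO=/ QUERY_STRING= \
    cgi-fcgi -bind -connect 127.0.0.1:8100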

For long-term use you will want to set up the system to run it as a service and start that service during boot.  Here is an example init script that works for SysVinit on Fedora:

#!/bin/sh
### BEGIN INIT INFO
# Provides: catalyst-projectname
# Required-Start: $local_fs $network
# Required-Stop: $local_fs $network
# Default-Start: 3 5
# Default-Stop: 0 1 2 4 6
# Short-Description: Starts the FastCGI app server for the "projectname" catalyst site
# Description: The FastCGI application server for the "projectname" catalyst site
### END INIT INFO

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
SITE_HOME=/usr/share/nginx/html
DAEMON=$SITE_HOME/script/projectname_fastcgi.pl
OPTS="-l 127.0.0.1:8100 -n 5"
NAME=projectname
DESC="projectname Application Server"
USER=apache

test -f $DAEMON || exit 0
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

# set -e
lockfile="/var/lock/subsys/projectname"
pidfile="/var/run/${NAME}.pid"

start_daemon()
{
    echo -n "Starting $DESC: "
    # run the app server as the unprivileged $USER account
    daemon --user $USER $DAEMON $OPTS
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop_daemon()
{
    echo -n "Stopping $DESC: "
    killproc -p $pidfile $NAME
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

reload_daemon()
{
    stop_daemon
    start_daemon
}

case "$1" in
    start)
        start_daemon
        ;;
    stop)
        stop_daemon
        ;;
    reload)
        reload_daemon
        ;;
    restart|force-reload)
        stop_daemon
        sleep 5
        start_daemon
        ;;
    *)
        N=/etc/init.d/$NAME
        echo "Usage: $N {start|stop|reload|restart|force-reload}" >&2
        exit 1
        ;;
esac

exit 0
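
Once the script is saved (mine lives at /etc/init.d/catalyst-projectname), enabling it at boot on Fedora is the usual routine; a sketch:

# install, register, enable at boot, and start the service
chmod 755 /etc/init.d/catalyst-projectname
chkconfig --add catalyst-projectname
chkconfig catalyst-projectname on
service catalyst-projectname start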

I will post more soon, but for now this was what I needed to get things started.