Killing Me Softly

I am not sure how it is possible that I had never needed to do this before, but I recently had to forcibly remove a user from a login session on a remote Linux system and didn't have a better idea than killing off their individual processes one at a time.  Thankfully, Linux provides a much more useful way of kicking users off terminal sessions (and thereby shutting down their entire process tree as well).

$ who -u

will give you a list of user sessions based on which terminal they are logged into.  This includes X sessions, virtual terminals, remote sessions, and any text mode logins.  The output should look something like this:

bobby    :0       2011-04-21 20:01   ?       12122
bobby    pts/0    2011-04-21 20:01   .       12405 (:0)
bobby    pts/1    2011-04-21 20:01   02:10   12322 (:0)
root     pts/2    2011-04-21 22:19   .       13887 (10.0.0.101)

You can then kill the login session by looking at the last column and killing that process ID.  In the example above you can see there are two virtual terminals (the pts/X sessions), a remote session (the ssh session I am using to access the machine from host 10.0.0.101), and a single local login on terminal :0.  Because that terminal session is responsible for starting the virtual terminals, you can simply kill process 12122 to force a logout of all three of bobby's sessions.

$ kill 12122

Entirely too easy.  If you would like to be kind (I am NOT) and actually warn your users that you are about to kick them off, you can send them a system message using the standard Unix wall command.  If you type wall you will get an open text area to type your message (end the message by pressing Ctrl+D), or you can pipe a message to standard input like so:

$ echo "My name is Inigo Montoya. You killed my father. Prepare to die." | wall

Wall will send a system message to every terminal session that allows messages (if you are root, that means everybody).
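Putting it all together, a minimal sketch of the whole eviction might look like this, assuming bobby's login session is PID 12122 as in the example above (the PID and the grace period are placeholders):

$ echo "Save your work, bobby. You have 60 seconds." | wall
$ sleep 60
$ kill 12122        # polite SIGTERM first
$ kill -9 12122     # only if the session is still hanging around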

To Any Place Worth Going

One of the best parts of Unix systems is that fundamentally they are built as development platforms.  The most common text command interface for Unix is called Bash (the Bourne-Again Shell), and it is a full-blown scriptable interface allowing direct interaction with command line programs and giving the user the ability to string together these programs into really powerful applications.  Because of the power of this interface, developers have over many years improved the ability to use it directly as well.  Things like <tab> completion are well known, but how about reverse command searches, a built-in text editor mode, and shortcuts galore?  I have been trying to use more and more of this "built-in" bash functionality, so below are some of my favorite shortcuts and features.

Shortcuts:

Ctrl + A Go to the beginning of the line you are currently typing on
Ctrl + E Go to the end of the line you are currently typing on
Ctrl + L Clears the Screen, similar to the clear command
Ctrl + U Clears the line before the cursor position. If you are at the end of the line, clears the entire line.  Especially useful when you know you've mistyped a password and want to start again.
Ctrl + K Cut the line after the cursor, the inverse of Ctrl + U
Ctrl + Y Pastes the content from a previous Ctrl + K or Ctrl + U cut.
Ctrl + H Same as backspace
Ctrl + R Search through previously used commands
Ctrl + C Sends SIGINT to whatever you are running (effectively terminating the program.)
Ctrl + D Exit the current shell
Ctrl + Z Puts whatever you are running into a suspended background process. fg restores it.
Ctrl + W Delete the word before the cursor
Ctrl + T Swap the last two characters before the cursor
Alt + T Swap the last two words before the cursor
Alt + F Move cursor forward one word on the current line
Alt + B Move cursor backward one word on the current line
Tab Auto-complete file and folder names (if there is more than one match, hitting Tab twice will list all possible completions.)
Alt + . Paste the previous command's final argument (great for running different commands on the same file path.)

To see a complete list of all bound bash shortcuts you can type

bind -P | less

but you may need to look up some bash hex character values to understand all of them.  What's more, you can actually bind shortcuts to almost anything you can think of, including actual applications.  For example:

$ bind -x '"\C-e":firefox'

will launch the Firefox web browser from the command line when you hit Ctrl + e.

Another one of my favorite commands is fc (fix command).  If you simply type

fc

fc will copy your most recent command from the bash history into your preferred editor (vi by default on most systems) and allow you to edit it there.  If you save and exit the editor, it will automatically paste the contents back into the bash session and hit enter for you.  Additionally, if you are interested in editing some other history item you can type

fc -l

to get a full history with numbers beside them.  Then type

fc <num>

where <num> is the history number you want to edit.  In a former life my bash terminal and fc were all I needed for most SQL testing.
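As a quick illustration (the history numbers and commands here are made up), a session might look something like this:

$ fc -l
210  who -u
211  echo "Lunch is here" | wall
212  psql -c "SELECT count(*) FROM users"
$ fc 212
(command 212 opens in your editor; save and quit and it runs again)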

A Person Wrapped up in Himself

Package building under RPM hasn't actually changed a whole lot in the last decade.  While I have notes scattered around the website on building and maintaining package repositories, the one part that has changed significantly is the use of git for version control.  Thankfully tagging, archiving, and building packages is pretty simple under git, basically consisting of the following three steps:

  • git tag -a 1.1 -m "Version 1.1"
  • git archive --prefix=projectname-1.1/ 1.1 | bzip2 > ~/Temp/projectname-1.1.tar.bz2
  • rpmbuild -tb ~/Temp/projectname-1.1.tar.bz2

The -a option will create a "true" annotated tag, although it will not be signed with a digital key.  Of course the rpmbuild command depends on a correctly formatted spec file in the base of your project directory.  Make sure the spec file version and changelog have the same version number as your tag.  FYI, for scripting purposes it is good to remember that changelog dates use the following date command format:

date +'%a %b %d %Y'
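That makes the changelog header easy to script; a one-liner sketch (the name, email, and version are placeholders, not from any real spec file):

$ echo "* $(date +'%a %b %d %Y') Bobby Tables <bobby@example.com> - 1.1-1"
* Thu Apr 21 2011 Bobby Tables <bobby@example.com> - 1.1-1

which is exactly the header line an RPM %changelog entry expects.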

The reason I mention scripting is that I am working on extending my automated build script for the software packages I manage.  Way back in my days at DPS I had developed a bash configuration that would allow me to download, package, and build a piece of software directly from the CVS repository.

When I moved to Cobb Engineering I also changed version control software and started using SVN.  Extending my previous script to support both CVS and SVN wasn't too hard.  Now I have a number of personal projects at home as well as software examples I keep for my students at ITT-Tech, all of which is stored and managed in Git.  The new software package script is almost done, but I would really like to be able to update a spec file, changelog, tag, package, and build with one command.

The most useful part of my build script is that it doesn't require me to spend any time remembering how to use it.  By default it has both auto-completion and logical default behaviors.  I just run buildpackage and it will list the available projects that I have ready to build.  If I run buildpackage project it will present me with a list of versions that have already been tagged.  One of these days I will post it publicly, but I seriously doubt there is much interest in the broader community as almost everyone who develops at this level seems to already have their own custom build scripts.
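For what it is worth, the git half of the script is really just the three steps above wrapped in a function; a stripped-down sketch (the project directory layout and paths are assumptions, not my actual script):

#!/bin/bash
# buildpackage <project> <version> -- tag, archive, and build an RPM from a git project
project=$1
version=$2
cd ~/Projects/"$project" || exit 1

git tag -a "$version" -m "Version $version"
git archive --prefix="$project-$version/" "$version" | bzip2 > ~/Temp/"$project-$version.tar.bz2"
rpmbuild -tb ~/Temp/"$project-$version.tar.bz2"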

more than you can afford to lose

I honestly don't know why some links seem more appropriate in a blog post than in my freakishly huge bookmark list.

  • Getting Started with NoSQL – I tell my students that much of the support development they do in the future will be on MySQL and much of the new development they do themselves will be with NoSQL.  A good intro to CouchDB, my current favorite.
  • Fedora Packaging Guidelines for cpanspec – I started writing a script like cpanspec almost 8 years ago, but never finished because the complexity of figuring out CPAN dependencies was taking too much time away from actual development.  This thing is an absolute MUST for Perl developers using RPM-based systems.
  • Renaming a Git repository stored in gitolite – You know a technology is a game changer when it not only solves problems you have but solves problems you didn’t even realize were problems.  Git is like that and gitolite is how I manage my git repositories.  After having to do a Google search on this… twice, I figure I better save the link.
  • Moving files from one git repository to another while preserving history – Title says it all.  The only thing to add is that this post includes a link to Linus' "greatest git merge ever" post, which was not only a cool post (if you are a total nut-job computer geek) but started a pretty amazing thread about "cool" git merges.
  • Using git archive – I use something like git archive --prefix=proname-1.1/ 1.1 | bzip2 > proname-1.1.tar.bz2 to create my deployment packages on Linux.  This is a nice document listing examples and use cases for git archive.  This only works if 1.1 is a branch or has been tagged via something like git tag -a 1.1 -m 'Message about tag.'
  • Telling Linux to ignore a bad part of memory – Is memtest freaking out about some bad memory?  How about simply telling the Linux kernel not to use that chunk?  This modifies the grub options so the Linux kernel knows which part of memory not to use before it actually loads itself up.

not merely necessary to life

After we have FastCGI working for Catalyst, we then need a proxy HTTP service to actually do the page response work.  There are a number of solutions available to handle this, but recently I have been messing with nginx.  nginx is a FastCGI-compatible web server designed specifically for speed.  While nowhere near as feature complete as Apache, it provides enough functionality to host some very large, very busy web service companies.

In the interest of total disclosure, a fairly significant portion of the information I am providing was gleaned from an outstanding tutorial by Richard Wallman.  Basically you need to create a new server instance config file for your new nginx application proxy.  Open a text editor as root, create a new file /etc/nginx/conf.d/mynewserver.conf, and add the following:

server {
    server_name  app.mysite.org;
    # Let's have a server alias as well

    access_log  /var/log/nginx/mysite.access.log;
    root   /usr/share/nginx/html;

    # Serve static content statically
    expires +30d;
    location /static {
        add_header Cache-control public;
        root /usr/share/nginx/html/root/;
    }

    # We pass the rest to our FastCGI application server
    location / {
        # We also set some headers to prevent proxies
        add_header Pragma "no-cache";
        add_header Cache-control "no-cache, must-revalidate, private, no-store";
        expires -1s;

        # Where our FastCGI app server is listening
        fastcgi_pass   127.0.0.1:8100;

        include /etc/nginx/fastcgi_params;
        fastcgi_param   SCRIPT_NAME     /;
        fastcgi_param   PATH_INFO       $fastcgi_script_name;
    }
}

If the address on the fastcgi_pass line matches the address your FastCGI application server is listening on, you should be in good shape.  The other thing to notice about the configuration above is the location /static block; it serves requests for static content directly through nginx without touching FastCGI at all.  That means less overhead and faster responses for things like images, CSS, and JavaScript.
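Once the file is in place, a quick sanity check looks something like this (the static path here is just a placeholder for whatever your application actually serves):

$ nginx -t
$ service nginx reload
$ curl -I http://app.mysite.org/static/css/site.css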

More later, my kids are in the middle of a mean rendition of Chopsticks.

The great growling engine of change

To implement SSL for a Perl Catalyst application it is necessary to put an SSL proxy in front of it that relays the incoming HTTPS requests to the application.  This setup also means we can use FastCGI for lightweight web calls instead of a full HTTP server with all the overhead that requires.  That said, it has not been entirely straightforward setting up the proxy.  Therefore, I have started some documentation on getting my current setup running.

This tutorial is what I used to get my company's Catalyst/CouchDB application running in a non-local environment, because the official Catalyst tutorial was somewhat… lacking.

To get started we configure FastCGI.  Catalyst kindly provides a FastCGI handler as part of the helper scripts it generates during project creation.  To be able to use the FastCGI handler with an HTTP proxy we need to set up Catalyst to listen on an internal IP port (much like using sockets or an internal bus) and then configure our HTTP proxy to forward requests to that internal port.  A quick example test looks something like this:

script/myapp_fastcgi.pl -l 127.0.0.1:8100 -n 5

For long-term use you will want to set the system up to run it as a service and start that service at boot.  Here is an example init script that works with SysV init on Fedora:

#! /bin/sh
### BEGIN INIT INFO
# Provides: catalyst-projectname
# Required-Start: $local_fs $network
# Required-Stop: $local_fs $network
# Default-Start: 3 5
# Default-Stop: 0 1 2 4 6
# Short-Description: Starts the FastCGI app server for the "projectname" catalyst site
# Description: The FastCGI application server for the "projectname" catalyst site
### END INIT INFO

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
SITE_HOME=/usr/share/nginx/html
DAEMON=$SITE_HOME/script/projectname_fastcgi.pl
OPTS="-l 127.0.0.1:8100 -n 5"
NAME=projectname
DESC="projectname Application Server"
USER=apache

test -f $DAEMON || exit 0
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

# set -e
lockfile="/var/lock/subsys/projectname"
pidfile="/var/run/${NAME}.pid"

start_daemon()
{
    echo -n "Starting $DESC: "
    # echo "$NAME."
    daemon $DAEMON $OPTS
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop_daemon()
{
    echo -n "Stopping $DESC: "
    # echo "$NAME."
    killproc -p $pidfile $NAME
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

reload_daemon()
{
    # echo "$NAME."
    stop_daemon
    start_daemon
}

case "$1" in
    start)
        start_daemon
        ;;
    stop)
        stop_daemon
        ;;
    reload)
        reload_daemon
        ;;
    restart|force-reload)
        stop_daemon
        sleep 5
        start_daemon
        ;;
    *)
        N=/etc/init.d/$NAME
        echo "Usage: $N {start|stop|reload|restart|force-reload}" >&2
        exit 1
        ;;
esac

exit 0
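Drop that script into /etc/rc.d/init.d/catalyst-projectname, make it executable, and register it with chkconfig (the service name matches the Provides: line above):

$ chmod +x /etc/rc.d/init.d/catalyst-projectname
$ chkconfig --add catalyst-projectname
$ chkconfig catalyst-projectname on
$ service catalyst-projectname start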

I will post more soon, but for now this was what I needed to get things started.

Fear Lying Upon a Pallet

Almost all of my recent work has been using NoSQL solutions, my favorite of which is CouchDB.  Easily the best feature of Couch is the RESTful JSON API that it uses to provide data.  Because you get your DB queries back directly as JavaScript objects, you don't have to worry about application servers or middle tier systems for N-Tier development.  This is HUGE and makes the whole web development process (and, given that most mobile applications are actually web apps, mobile development too) much cleaner, faster, and more functional for the end user.

Couch does have a couple of weaknesses.  The one that has been giving me the most headaches is the lack of documentation for the query parameters the server accepts when requesting a JSON view (map/reduce).  So here are a number that I have found useful over the last few months; I will update this list as I find more.  There is also a quick curl example after the list showing a few of them in action.

  • key=abc The most commonly passed option to a given CouchDB view.  This provides a way to select a single, unique (well, probably unique) key for a given view.  That said, view keys DON'T HAVE TO BE UNIQUE in CouchDB, meaning that if more than one row has the key abc this will return all of those rows.
  • keys=[abc,123,ACC] A JSON-encoded list of keys to use in a given map/reduce query.  Basically the same as above, but without the need to make multiple network requests.
  • startkey=abc Used with endkey=abC to provide range selection for a given view.  startkey will accept (as valid input) anything that would be valid in a standard CouchDB view key, even JSON objects.  So think startkey=[a]&endkey=[a,{}] to get a range of all keys=[a,somethingElse].
  • endkey=abC Counterpart of startkey, see the reference above.  One thing to note: it is generally better to specify another object at the end of a range if you want to select the range inclusively, so {} is a better end range value than ZZZZZZZZ is.
  • limit=100 Return only the first N results.  This parameter is particularly useful for paginated results (like "showing 1-100 of 399") and reduces the network bandwidth for a given request.  Because map/reduce results are cached on request, the server's response time isn't any faster, but there is less data to download.
  • skip=100 Works with the limit parameter above to return a particular slice of the result set.  For example you can limit the returned results to documents 101 through 200 (think email counts in Gmail) with ?limit=100&skip=100.
  • descending=true Reverses the order of the returned results.  Will also work with limit, skip, startkey, etc.
  • group=true The default result of a map/reduce function (which has been re-reduced) is a total, i.e. a single number.  In my case this is seldom the result I am actually looking for, so this parameter provides the bridge between the full re-reduce and what is most commonly sought: the grouped result.  When this option is passed, the reduce results are grouped by the map keys, so instead of a single row with {key:null, value:9999} you get multiple rows keyed by the map key, i.e. [{key:"bob",value:444},{key:"tom",value:555}].  If you create design documents and view them inside Futon, group=true is the default, which can be a little confusing when you actually make a JSON request and get a different result.
  • group_level=2 An alternative to the above parameter, group_level will group the resulting reduce by the number of key levels specified, IF your key is an array with at least that many elements.  While the example here uses two levels, the number can be as many array places as your key has.  This becomes particularly helpful when working with years and dates.  For a detailed example check out this post.  That said, group=true is the functional equivalent of group_level=exact.
  • reduce=false Turns OFF the reduce function for a given map/reduce query.  This is the default if no reduce is defined, but you can override it on views that DO have a reduce function if you only want the results of the map.
  • include_docs=true For map-only queries (those without a corresponding reduce) this option will include the original document in each row of the result set.  This means the structure of your JSON rows will be {id, key, value, doc} instead of the normal {id, key, value}.  This will save you an additional JSON request if you are using the map query as a lookup for a particular document.
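One gotcha the list above glosses over: when you actually make the HTTP request, keys have to be valid (URL-encoded) JSON, so string keys need quotes.  A couple of examples against a made-up database and design document:

curl 'http://127.0.0.1:5984/mydb/_design/app/_view/by_name?key="bob"&include_docs=true'
curl 'http://127.0.0.1:5984/mydb/_design/app/_view/by_date?startkey=["2011"]&endkey=["2011",{}]&group_level=2'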

that the ripest might fall

Git is simply amazing when it comes to branching and merging.  I probably have half a dozen branches, each with a unique feature I am working on.  Git makes combining and working with branches so easy that it simply seems natural to store test functionality on different branches, across multiple repositories, and between different developers.

…and HOLY CRAP is it fast!

That said, I hit a problem today that I had to hunt down the answer to, so I am posting it here for easy reference in the future.  The basic issue is that whenever you create a new branch that is then pushed somewhere else as a remote branch, git automatically (as would be logical) associates those branches together for pushing because they share the same name.  This makes future pushes easy and allows other users to continually get updates when you have commits you want to make available.

The problem occurs when you try to PULL from that new remote branch (because the other users or repository have made some changes of their own.)  Git does NOT automatically associate pulls with the remote branch of the same name, even though it associates pushes that way.  So how do you fix this?  The error message says to modify your config file and look at the git-pull man page, but that could quickly cause insanity given the sheer complexity of git's command set.  I probably spent an hour looking through documentation.

The answer was, like you didn't know where this was going, Google.  Ultimately you can fix the problem with a fairly simple command that WILL AUTOMATICALLY update your config:

git branch --set-upstream yourBranchName origin/yourBranchName

And that is it! Hopefully that saves someone the time that I lost.
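For the curious, all that command really does is add a couple of lines to the repository's .git/config for that branch, roughly like this (using the placeholder branch name from above):

[branch "yourBranchName"]
    remote = origin
    merge = refs/heads/yourBranchName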

Another fix I ran across comes from an initial pull from a remote repository that is empty.  Because the original (empty) repository shares no history with my local repository (to which I have already added files), Git gets upset.  The error generated looks something like this:

No refs in common and none specified; doing nothing.
Perhaps you should specify a branch such as 'master'.
fatal: The remote end hung up unexpectedly

It can be fixed with the following command:

git push origin master

Then all subsequent pushes and pulls should be fine, now that the two repositories have a shared reference to work from.

Geniuses remove it

I am starting to believe that in the Poettering household, simplicity was considered a cancer that must be tortured and destroyed with extreme vigor.  Systemd is quickly becoming thoroughly ubiquitous on Linux systems everywhere.  While Systemd tries to do everything for everybody (it is supposed to eventually replace sysvinit, chkconfig, automount, logging, cron, and a whole host of other things), ultimately its primary intent is to speed up the boot process, and it does that job exceedingly well.  This concern about boot time is a direct response to the speed at which other Unix-based OSes boot; the project's reference material even explicitly points to Apple as the benchmark.

That said, sysvinit did have one thing going for it… IT WAS SIMPLE.  Heck, just getting a list of available services is a pain in the ass now and generally requires looking up documentation just to remember how to do it.  Simple actions in systemd are annoyingly complex with a cheat sheet that looks like it was written by a Perl regular expression programmer on acid.  I will be the first to admit that systemd-analyze plot is pretty awesome and, considering that systemd was designed by the same guy who created PulseAudio, we should probably be thankful that it isn’t even MORE complex.  But still, something just seems wrong about using an all-for-everything program on an OS that was designed to be simple and efficient.