A Life Spent Making Mistakes

A couple more bash tips to help with writing robust code.  The main improvement I learned from the previous link is the trap builtin.  trap lets you clean up when specific system signals get sent from the OS, like INT (what gets sent to a program when Ctrl+c is typed) and the TERM signal.  A great example is:

trap "rm -f $lockfile; exit" INT TERM EXIT

In this case a lockfile is removed just before the bash script exits.  You can get a full list of all the different system signals with the kill -l command.
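
Expanded into a full (if trivial) script, the pattern looks something like this (the lockfile path is just an example):

#!/bin/bash
# Example lockfile path; real scripts often use /var/lock or /var/run
lockfile=/tmp/myscript.lock

# On INT, TERM, or normal EXIT, remove the lockfile before quitting
trap "rm -f $lockfile; exit" INT TERM EXIT

touch "$lockfile"
# ... do the real work here ...

rm -f "$lockfile"
trap - INT TERM EXIT   # clear the trap once we have cleaned up ourselves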

The other major bash tool that I have used without ever really understanding what it did is the eval builtin.  If you have ever written a SysV init configuration script, you know that you use eval to load/set variables from subscripts or external files.  The reason eval can do this is explained here.  The quick explanation is that eval forces bash to evaluate any code passed to it a second time.  So setting bash variables in-line is as easy as:

eval $(LANG=C grep -F "DEVICE=" ifcfg-$i)
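
To see the double evaluation at work outside of a config file, here is a toy example (the names are mine, purely illustrative):

name=DEVICE
value=eth0
# Pass one: bash expands $name and $value, producing the string "DEVICE=eth0".
# Pass two: eval executes that string as a command, creating the variable.
eval "$name=$value"
echo "$DEVICE"   # prints: eth0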

One last bash builtin that may come in handy for some: shopt sets and displays bash’s optional extended shell behaviors.  See the link for the man page with some examples.
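
For example (nocaseglob and cdspell are two of the standard options):

shopt                 # list every option and whether it is on or off
shopt -s nocaseglob   # -s sets an option: case-insensitive filename globbing
shopt -s cdspell      # auto-correct small typos in cd arguments
shopt -u cdspell      # -u unsets it again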

Overall, I am consistently amazed at the power and flexibility of the Linux command line.

Trust the Engineer

(Update 11/20/2017: Additional link for AWS & new versions of Fedora.  Still a useful post.)

A client project, along with my “Hacking & Countermeasures” class, recently created a need for my own VPN for use in wireless applications.  I needed to connect the VPN to my server rack, and the system needed to be an “in-house” system I could turn up myself (sorry Cisco, no ASA for me.)  Finally, it needed to be an SSL-based VPN solution, as I have had entirely too many issues with locations filtering nonstandard Internet traffic, effectively blocking IPsec VPN access on their networks.

I use Rackspace for my server infrastructure, so it only took me about 15 minutes to get the physical (errr… cloud… damn… whatever) Linux machine (Fedora 17 x64) up and running, but actually setting up OpenVPN was significantly more challenging than I had originally anticipated.  The problem wasn’t a lack of documentation (actually the opposite was generally true.)  The problem is that VPN connectivity is so inherently picky, and there are SO many options, that getting a specific configuration running on a specific distribution can be a little overwhelming.

So, for my own personal benefit, here is some of the information I needed to get OpenVPN working on a Fedora 17 server routing http traffic as well as direct traffic to my private subnet.  OpenVPN will be configured to use port 443 (the standard web SSL port) over the TCP protocol.  As OpenVPN uses SSL, and we will be using TCP on the HTTPS port, all the traffic will look like standard secure web traffic to the network, effectively keeping it from being filtered.

On the Server (as root):

  • Start by installing openvpn and other support packages:
    • yum install openvpn pkcs11-tools pkcs11-dump
  • We will use the easy-rsa script toolkit to create our shared keys.  So start by copying the example easy-rsa files into your home directory:
    • cp -ai /usr/share/openvpn/easy-rsa/2.0 ~/easy-rsa
    • cd ~/easy-rsa
  • Next you will need to edit the vars file.  Basically it is ID information for your server certificate.  The values other than PKCS11_MODULE_PATH (which should be set to /usr/lib64/ on x64 machines) are not particularly critical, but don’t leave them blank!  Mine looked something like this:

export KEY_COUNTRY="US"
export KEY_PROVINCE="OK"
export KEY_CITY="Norman"
export KEY_ORG="Rockerssoft"
export KEY_EMAIL="name@emailaddress.com"
export KEY_CN=rockerssoft-vpn
export KEY_NAME=rockerssoft-vpn-key
export KEY_OU=rockerssoft-vpn
export PKCS11_MODULE_PATH=/usr/lib64/

  • Now we generate our server keys and set up our openvpn service directories:
    • . vars
    • ./clean-all
    • ./build-ca
    • ./build-inter $( hostname | cut -d. -f1 )
    • ./build-dh
    • mkdir /etc/openvpn/keys
  • Now with our keys built, we need to copy all of them (along with our certificates and template configuration information) into our service directory.
    • cp -ai keys/$( hostname | cut -d. -f1 ).{crt,key} keys/ca.crt keys/dh*.pem /etc/openvpn/keys/
    • cp -ai /usr/share/doc/openvpn-*/sample-config-files/roadwarrior-server.conf /etc/openvpn/server.conf
  • The config file we just copied to /etc/openvpn/server.conf will need to be edited for your specific server configuration.  If you have problems connecting later on, it is most likely an issue with the server configuration file and the client configuration file not matching.  As we want the system to be a full VPN proxy for all internet traffic, start by adding the following to the BOTTOM of your config file:
    • comp-lzo yes
    • push "redirect-gateway def1"
  • In /etc/openvpn/server.conf, edit the port number and add a line to have openvpn use tcp instead of udp on port 443.  This should be somewhere between lines 9 and 12 and should look something like this when you are done:

port 443
proto tcp-server

  • In /etc/openvpn/server.conf, edit the cert and key file locations somewhere between lines 17 and 20.  Add the full path to the key/cert files we moved two steps ago.  They should look something like this (notice the /etc/openvpn/keys preceding each entry):

tls-server
ca /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/bob-vpn-1.crt
key /etc/openvpn/keys/bob-vpn-1.key
dh /etc/openvpn/keys/dh1024.pem

  • After you have modified your server configuration files, you will need to tell SELinux (Security-Enhanced Linux) to recognize the new file layout.  To do this, type the following command:
    • restorecon -Rv /etc/openvpn
  • If you need to test your server settings, say to debug your config file, you can run openvpn directly this way (press Ctrl+c to stop it):
    • openvpn /etc/openvpn/server.conf
  • Finally, you can turn the openvpn server on and enable it so that it starts during future reboots as well.
    • systemctl enable openvpn@server.service
    • systemctl start openvpn@server.service
  • Now that the server is running, you will need to configure the firewall to allow VPN connections AND route all your traffic through the system (via Network Address Translation.)  Start by backing up your old iptables configuration and enabling NAT forwarding in the Linux kernel:
    • mv /etc/sysconfig/iptables /etc/sysconfig/iptables.old
    • sysctl -w net.ipv4.ip_forward=1
  • Open up your favorite text editor and copy the following iptables rules into the file.  You will need to save the file as /etc/sysconfig/iptables.  This configuration assumes that eth0 is your public interface and eth1 is your private one.  If this is backwards, just swap eth0 and eth1.  Also, it keeps port 22 open for ssh connectivity.

# Modified from iptables-saved by Bob Rockers
*nat
:PREROUTING ACCEPT [15:1166]
:INPUT ACCEPT [4:422]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [118860:18883888]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i tun+ -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -i tun+ -j ACCEPT
-A FORWARD -i eth1 -o tun+ -j ACCEPT
-A FORWARD -i eth0 -o tun+ -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

  • To make NAT work across reboots you will need to modify the /etc/sysctl.conf file and change the line net.ipv4.ip_forward = 0 to the following:
    • net.ipv4.ip_forward = 1
  • To make everything permanent type the following:
    • sysctl -p /etc/sysctl.conf
  • Now restart your firewall configuration:
    • systemctl restart iptables.service
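
To sanity-check the result, a couple of quick read-only commands (output will vary with your setup):

ss -tlnp | grep 443        # confirm openvpn is listening on TCP port 443
iptables -L -n -v          # confirm the filter rules loaded
iptables -t nat -L -n -v   # confirm the MASQUERADE rule loaded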

That should take care of our server configuration.  I will follow this post up with client configurations for Windows and Fedora 17 KDE installs.  Please feel free to email any fixes/updates to the above configuration if you see something.

Finally, the above solution is susceptible to a man-in-the-middle attack from another client impersonating the server (not a problem for my setup, as I personally know everyone I have issued client certificates to.)  The solution is to sign the server certificate with a tls-server-only key and force clients to check this status on connection.  There is more documentation for this setup here and specifics about the easy-rsa setup here.  At some point I will update this tutorial to fix that issue but, for now, this has been a long enough post.

But what’s my motivation?

Scripts in general, and bash in particular, fill an enormous amount of my time.  The ability to create scripts that can handle a number of diverse inputs is directly related to how flexible and robust the code-base is.  The most common problem when handling files in bash is spaces.  Linux is both case sensitive and handles spaces with less… grace… than some OSes, and bash suffers from the same issues.  The easiest way to handle this is with the IFS system variable.  IFS is simply bash’s field delimiter (whitespace by default) and, because it is a modifiable system variable, you can set it to something that you will not run into.  For example:

#!/bin/bash
KEEPOLDVALUE=$IFS
IFS=$(echo -en "\n\b")
for var in *
do
    # Do something with each file name in the current directory
    echo "$var"
done
IFS=$KEEPOLDVALUE

That will solve the problem of spaces in simple scripts written for quick and dirty system management.  That said, when you are building scripts to use regularly, you will need to be more comprehensive when testing your script, as in the sketch below.
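
For those regular-use scripts, a more robust pattern than juggling IFS is a null-delimited find/read loop, which survives spaces and even newlines in file names (a sketch, not the only way to do it):

#!/bin/bash
# -print0 and read -d '' use the NUL byte as the delimiter, and a NUL
# can never appear inside a file name
find . -type f -print0 | while IFS= read -r -d '' file
do
    echo "Processing: $file"
done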

A good place to start is by setting -u.  Whenever testing new scripts, try running them without any arguments but WITH -u.  If you fail to correctly initialize your variables, running them with -u will warn you that there is a problem.  For example:

$ bash -u /tmp/mynewtestscript.sh
/tmp/mynewtestscript.sh: line 34: $DIRNAME: unbound variable

We can then verify that we have (at the very least) correctly initialized any variables that we will use and reduce the probability of side-effects.
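
Once -u has flagged a variable, you can either initialize it properly or give it an explicit fallback with parameter expansion, for example:

#!/bin/bash
set -u
# ${DIRNAME:-/tmp} expands to /tmp when DIRNAME is unset (or empty),
# so this line is safe even under -u
workdir="${DIRNAME:-/tmp}"
echo "Working in $workdir"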

A problem I ran into a lot with my early scripts was that I often needed standard output from one command to be sent to another command as command line arguments (as opposed to standard input.)  The best way to solve this is with the bash built-in command substitution form, for example:

echo $(ls)

But this isn’t always very elegant to implement directly, so another option is the wonderful xargs command.  xargs breaks the output of one command into individual arguments that it feeds to another command.  This allows you to use standard piping between otherwise un-pipeable commands.  For example:

ls | xargs echo
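
xargs also ties back to the whitespace problem discussed earlier; its -0 option splits on NUL bytes instead of whitespace, pairing naturally with find -print0:

# Remove every *.tmp file, even ones with spaces in their names
find . -name "*.tmp" -print0 | xargs -0 rm -f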

Sometimes joining two variables can be complicated when those variable names need characters between them.  To solve this you can use parameter substitution.  What this means, effectively, is that $tempvar and ${tempvar} are the same thing.  This allows you to combine variables with in-between characters without concern:

_a="test"
_b="/file"
newvar=${_a}folder${_b}

Another useful tip (found via this article from hacktux) is the mktemp executable for temporary file creation.  Need a temp file (or directory) to store intermediate data?  Try the following:

tempfile=$(/bin/mktemp)
tempdir=$(/bin/mktemp -d)
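
mktemp also combines nicely with the trap trick from earlier, so the temp file disappears even if the script is interrupted (a minimal sketch):

#!/bin/bash
tempfile=$(/bin/mktemp)
trap "rm -f $tempfile; exit" INT TERM EXIT

# ... write intermediate data to "$tempfile" ...
sort /etc/passwd > "$tempfile"

rm -f "$tempfile"
trap - INT TERM EXIT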

Another common problem for bash scripts used for administration is that they need to be run as root (or via sudo on Ubuntu systems.)  The way to solve this is to check the EUID environment variable.  EUID will always be 0 for root, so you can put a simple check at the beginning of your script with the following:

if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root" 1>&2
    exit 1
fi

Need a few random characters for your bash script?  Use dd and /dev/urandom to grab a variable number of bytes.  For example:

random="$(dd if=/dev/urandom bs=3 count=1)"

This will give you three random bytes (stored in $random) out of the urandom entropy pool.  Unfortunately the bytes are raw binary rather than printable characters, so you are likely to see a bunch of ?? symbols.  To convert them to something printable, just pipe the output through base64 (the conversion will likely give you more than 3 characters, so be sure to trim to the number of characters you need):

random="$(dd if=/dev/urandom bs=3 count=1 | base64)"
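
Rather than a regexp, bash substring expansion will trim the result to exactly the length you need (2>/dev/null just hides dd's progress chatter):

random="$(dd if=/dev/urandom bs=3 count=1 2>/dev/null | base64)"
random="${random:0:3}"   # keep just the first three characters
echo "$random"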

Starting Dropbox

My brother wanted a quick explanation on how to create an executable script to start Dropbox. While I was helping him he was kind enough to mock my freakishly awesome IBM Model M Unicomp keyboard… the greatest keyboard in the world. This setup is designed to work with the local tar.gz install of Dropbox on Linux and NOT the rpm-based install (which requires Gnome for the file manager.)

Create a new file in your ~/bin directory called startdropbox.sh with the following content:

#!/bin/bash
~/.dropbox-dist/dropboxd &

After you have saved the file, make it executable by typing

chmod 755 ~/bin/startdropbox.sh

Now you can start up Dropbox by running startdropbox.sh at any time.

AND I LOVE MY CLICKY KEYBOARD BITCHES!!!

Killing Me Softly

Not sure how it is possible that I have never done this before, but I recently needed to forcibly remove a user from a login session on a remote Linux system and didn’t have a better idea than simply killing off all their individual system processes one at a time.  Thankfully, Linux provides a much more useful way of kicking users from terminal sessions (and thereby shutting down their entire process tree as well.)

$ who -u

Will give you a list of user sessions based on which terminal they are logged into.  This includes X sessions, virtual terminals, remote sessions, and any text mode logins.  The output should look something like this:

bobby    :0       2011-04-21 20:01    ?       12122
bobby    pts/0    2011-04-21 20:01    .       12405 (:0)
bobby    pts/1    2011-04-21 20:01    02:10   12322 (:0)
root     pts/2    2011-04-21 22:19    .       13887 (10.0.0.101)

You can then kill a login session by looking at the last column and killing that process ID.  In the example above you can see two virtual terminals (the pts/X sessions), a remote session (the ssh session I am using to access the machine from host 10.0.0.101), and a single local login on terminal :0.  Because the terminal session :0 is responsible for starting the virtual terminals, you can simply kill process 12122 to force a logout of all three of that user’s sessions.

$ kill 12122
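
As an aside, if you only want to drop a single virtual terminal instead of the whole tree, pkill can match processes by their controlling terminal (see pkill(1)):

pkill -t pts/1        # terminate everything attached to pts/1
pkill -KILL -t pts/1  # same, but with SIGKILL for the stubborn ones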

Entirely too easy.  If you would like to be kind (I am NOT) and actually warn your users that you are about to kick them off, you can send them a system message using the standard Unix wall command.  If you type wall you will get an open text area to type your message (end the message by pressing Ctrl+d), or you can pipe a message to standard input like so:

$ echo "My name is Inigo Montoya. You killed my father. Prepare to die." | wall

wall will send a system message to every terminal session that allows messages (if you are root, that means everybody.)

scientia potentia est

What most people are objecting to is that the market gives people what the people want instead of what the person talking thinks the people ought to want.
–Milton Friedman

Milton Friedman is easily the most influential economist since John Maynard Keynes. What makes him such a powerful voice for the free market is his ability to distill complex macroeconomic theory into chunks non-economists can easily understand.  He is so influential, and understandable, that PBS actually produced a series with him explaining economic ideas and debating them with other prominent scholars, politicians, and businessmen. The series was called “Free To Choose.”

Unfortunately most of us don’t have hours’ worth of time to watch all the episodes (although you should.)  To get a quick overview of each of his core concepts, Trent Liberty has produced a series of 7 videos called The Friedman Series.  The background audio ranges from amazing to annoying, but the topic selection is outstanding.  If you get nothing else from the videos, always remember that the biggest danger to liberty isn’t inequality, but the sincerity of the well-intentioned.

To Any Place Worth Going

One of the best parts of Unix systems is that they are fundamentally built as development platforms.  The most common text command interface for Unix is called Bash (the Bourne Again Shell), and it is a full-blown scriptable interface allowing direct interaction with command line programs and giving the user the ability to string these programs together into really powerful applications.  Because of the power of this interface, developers have spent many years improving the ability to use it directly as well.  Things like <tab> completion are well known, but how about reverse command searches, a built-in text editor mode, and shortcuts galore?  I have been trying to use more and more of this “built-in” bash functionality, so below are some of my favorite shortcuts and features.

Shortcuts:

Ctrl + A Go to the beginning of the line you are currently typing on
Ctrl + E Go to the end of the line you are currently typing on
Ctrl + L Clears the Screen, similar to the clear command
Ctrl + U Clears the line before the cursor position. If you are at the end of the line, clears the entire line.  Especially useful when you know you’ve mis-typed a password and want to start again.
Ctrl + K Cut the line after the cursor, inverse of the Ctrl + U
Ctrl + Y Pastes the content from a previous Ctrl + K or Ctrl + U cut.
Ctrl + H Same as backspace
Ctrl + R Search through previously used commands
Ctrl + C Sends SIGINT to whatever you are running (effectively terminating the program.)
Ctrl + D Exit the current shell
Ctrl + Z Puts whatever you are running into a suspended background process. fg restores it.
Ctrl + W Delete the word before the cursor
Ctrl + T Swap the last two characters before the cursor
Alt + T Swap the last two words before the cursor
Alt + F Move cursor forward one word on the current line
Alt + B Move cursor backward one word on the current line
Tab Auto-complete files and folder names (if there is a multiple-option match, hitting Tab twice will list all possible values.)
Alt + . Paste the previous commands final argument (great for running different commands on the same file path.)

To see a complete list of all bound bash shortcuts you can type

bind -P | less

but you may need to look up some bash hex character values to understand all of them.  What’s more, you can actually bind shortcuts to almost anything you can think of, including actual applications, for example:

$ bind -x '"\C-e": firefox'

will launch the Firefox web browser from the command line when you hit Ctrl + e.

Another one of my favorite commands is fc (fix command.) If you simply type

fc

fc will copy your most recent command from the bash history into your preferred editor (vi by default on most systems) and allow you to edit it there.  If you save and exit the editor, it will automatically copy the contents back into the bash session and run it.  Additionally, if you are interested in editing some other history item, you can type

fc -l

to get a full history with numbers beside them.  Then type

fc <num>

where <num> is the history number you want to edit.  In a former life, my bash terminal and fc were all I needed for most SQL testing.

A Person Wrapped up in Himself

Package building under RPM hasn’t actually changed a whole lot in the last decade.  While I have notes scattered around the website on building and maintaining package repositories, the one part that has changed significantly is the use of git for version control.  Thankfully tagging, archiving, and building packages is pretty simple under git, basically consisting of the following three steps:

  • git tag -a 1.1 -m "Version 1.1"
  • git archive --prefix=projectname-1.1/ 1.1 | bzip2 > ~/Temp/projectname-1.1.tar.bz2
  • rpmbuild -tb ~/Temp/projectname-1.1.tar.bz2

The -a option will create a “true” annotated tag, although it will not be signed with a digital key.  Of course, the rpmbuild command depends on a correctly formatted spec file in the base of your project directory.  Make sure the spec file version and changelog have the same version number as your tag.  FYI, for scripting purposes it is good to remember that changelog dates use the following date command format:

date +’%a %b %d %Y’
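
For example, a build script can stamp a new changelog entry like this (the name, email, and version here are placeholders):

# Produces a header like: * Wed Nov 21 2012 Bob Rockers <name@emailaddress.com> - 1.1-1
echo "* $(date +'%a %b %d %Y') Bob Rockers <name@emailaddress.com> - 1.1-1"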

The reason I mention scripting is because I am working on extending my automated build script for the software packages I manage.  Way back in my days at DPS I developed a bash configuration that would let me download, package, and build a piece of software directly from the CVS repository.

When I moved to Cobb Engineering I also changed version control software and started using SVN.  Extending my previous script to support both CVS and SVN wasn’t too hard.  Now I have a number of personal projects at home, as well as software examples I keep for my students at ITT-Tech, all of which is stored/managed in git.  The new software package script is almost done, but I would really like to be able to update a spec file, changelog, tag, package, and build with one command.

The most useful part of my build script is that it doesn’t require me to spend any time remembering how to use it.  By default it has both auto-completion and logical default behaviors.  I just run buildpackage and it lists the available projects that I have ready to build.  If I run buildpackage project it presents me with a list of versions that have already been tagged.  One of these days I will post it publicly, but I seriously doubt there is much interest in the broader community, as almost everyone who develops at this level seems to already have their own custom build scripts.

We have been betrayed by both

I know this is basically a rant but, there seems to be a fundamental disconnect between people’s understanding of economics and reality.

Just to be absolutely clear, undue political influence by corporations is directly related to the power, breadth, and size of the government they work to influence. This means that, by its very nature, the enlargement (and especially centralization) of government works as an agent for the expansion of corporate influence and NOT, as many progressives hope, a counterbalance to it.

Corporatism is a symptom of the problem, not the cause. Any regulatory attempt to alleviate the pain caused by that symptom only acts, ultimately, to aggravate the problem.  While attention and public outcry may temporarily hide the influence of business, capital never loses attention and will quickly take over when politics has moved on.

Before some conservatives start yelling hallelujah from the rooftops, understand the implications of this.  The opposite of supporting government is NOT supporting business, because being pro-business is effectively the same as being pro-government. Ultimately business will work to extend its competitive advantage at the cost of consumer independence, and there is no better way to extend a business advantage than to legislate one.  Remember, every monopoly throughout history was created by an act of government-legislated preference.

The only solution to corporatism and socialism is capitalism, a real free market.  The free market is not just the only way to limit government influence, but it is the only way to limit corporate influence as well.

not merely necessary to life

After we have fastcgi working for Catalyst, we then need a proxy http service to actually do the page response work.  There are a number of solutions available to handle this, but recently I have been messing with nginx.  nginx is a fastcgi-compatible web server designed specifically for speed. While nowhere near as feature-complete as Apache, it provides enough functionality to host some very large, very busy web service companies.

In the interest of total disclosure, a fairly significant portion of the information provided here I gleaned from an outstanding tutorial by Richard Wallman.  Basically you need to create a new server instance config file for your new nginx application proxy.  Open a text editor as root, create a new file /etc/nginx/conf.d/mynewserver.conf, and add the following:

server {
    server_name  app.mysite.org;
    # Let's have a server alias as well

    access_log  /var/log/nginx/mysite.access.log;
    root   /usr/share/nginx/html;

    # Serve static content statically
    expires +30d;
    location /static {
        add_header Cache-control public;
        root /usr/share/nginx/html/root/;
    }

    # We pass the rest to our FastCGI application server
    location / {
        # We also set some headers to prevent proxies from caching
        add_header Pragma "no-cache";
        add_header Cache-control "no-cache, must-revalidate, private, no-store";
        expires -1s;

        # Where our FastCGI app server is listening
        fastcgi_pass   127.0.0.1:8100;

        include /etc/nginx/fastcgi_params;
        fastcgi_param   SCRIPT_NAME     /;
        fastcgi_param   PATH_INFO       $fastcgi_script_name;
    }
}

If the address on the fastcgi_pass line matches the IP address and port your FastCGI application server is listening on, you should be in good shape.  The other thing to notice about the configuration above is the location /static block: it serves requests for static content directly through nginx without going through the CGI layer, which means less overhead and faster responses for things like images, css, and javascript.
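
Before putting the new server block live, nginx can syntax-check the configuration and reload it without dropping existing connections:

nginx -t                         # validate all configuration files
systemctl reload nginx.service   # apply the new server block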

More later, my kids are in the middle of a mean rendition of Chopsticks.