Of Experience and Competence, Intelligence and Wisdom

The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.

Stephen Hawking

There’s a basic misunderstanding that causes people to be perceived as more capable than they actually are.  A common mistake, when evaluating capability, is to equate experience with competence, and to confuse intelligence with wisdom.  In the former a person has had a lot of exposure to a given situation; in the latter the person has distilled that exposure in a way that allows them to gain deeper insight into future situations.

People understand this intuitively, but unfortunately the vast majority of companies forget or ignore the difference when hiring. Even when conscious of the difference, people try to identify competence by “digging deep” into a candidate’s experience or by requiring a minimum amount of experience. Both methods are flawed and provide no correlation to the likely success of a hire.1

Experience is necessary, but not sufficient, for competence (to borrow a phrase from mathematical causality).  To move from experience to competence you generally have to have done two things:  1) had multiple experiences to draw from that are specific to the situation you need to learn from, and 2) been intentional about questioning the lessons from those experiences. Unfortunately, most people don’t question what they have done in a way that makes subsequent experience testable against their assumptions. Instead, they just assume they know the reasons for it. 2

For example, I drive around 50 miles a day in my commute.  As such I have almost certainly, over the course of my professional career, driven many more miles than a rookie starting off in the Indy 500.  But that experience lacks specificity: I don’t drive a comparable car, in a comparable environment, with comparable traffic. I have lots of “experience” driving a car, but that experience lacks any validity for the wisdom needed to drive in the Indy 500.

Intentionality is much harder to evaluate. Most people will speak with conviction about the opinions they’ve developed through their experience and will sound like they have wisdom. Assuming you have filtered someone for the specificity of their experience, what is the best way to evaluate the intentionality of those experiences? There is no perfect solution, but here are a few handy rules of thumb I’ve learned over time.

A fool thinks himself to be wise, but a wise man knows himself to be a fool.

William Shakespeare

1. Don’t trust someone who gives you absolute answers. People with wisdom will almost always say things like “well it depends” and “in our situation” because they have enough understanding to know the limits of their own experience. Almost all experiences have a range of best solutions based on tons of criteria that are always dependent on outside variables. Competent people know that almost no two situations are identical and will hedge their answers accordingly.

If you’re not failing, you’re not pushing your limits, and if you’re not pushing your limits, you’re not maximizing your potential.

Ray Dalio

2. Look for people who have not only had experience but have also failed in those experiences. Inversely, distrust anyone who doesn’t offer up those failures when discussing their experience. People who have not “failed” generally lack the wisdom of those who have, but much worse are those who hide or don’t acknowledge their failures, because it means they don’t value failure as a method for learning. It means they have an immature concept of what failure is, and competence can never be found in immaturity.

Feedback is the breakfast of champions. Winners use it to upgrade their performance; losers use it as an excuse to fail.

Ken Blanchard

3. How does a candidate, consultant, or colleague respond when you push back on the assumptions they developed from their experience? Do they ask for more details or spend time thoughtfully considering how your situation might be different? Or are they stubborn in their convictions, seeing every situation as black or white? Look for people who have strong opinions, loosely held, because wisdom comes from constant feedback and continuous improvement. There is no improvement without being open to feedback.

A prudent question is one half of wisdom.

Francis Bacon

4. Great questions are the best indicator of competence. The biggest mistakes people make in their careers are usually due not to having the wrong answers, but to having the right answers to the wrong questions. Find people who ask thoughtful questions!3 Great questions are harder to fake, take more insight, and are a much better indicator of competence than great answers.

The four techniques above are not magic bullets, but they can dramatically increase the likelihood of getting a competent hire instead of just an experienced one. The techniques also help with existing talent. One of my favorite ways to evaluate long-term potential is by looking at how quickly a given person can turn experience into wisdom. People who can do that quickly are the future rock stars of your company.

  1. This can be exceptionally difficult for organizations that are looking for capabilities outside of their existing strengths.  They can filter and hire for people with experience, but it is very difficult to evaluate overall wisdom if you don’t have someone internally who can accurately and impartially evaluate competence. ↩︎
  2. A great indicator of whether someone is moving from experience to competence is seeing them push the boundaries of those experiences to test their validity in edge-case scenarios.  They are isolating more and more of the variables that have an opportunity to affect the outcome, thus allowing them to make more accurate assumptions about future experiences. ↩︎
  3. Evaluating these questions can be a difficult ask for companies that don’t have someone capable of evaluating good questions. In such cases it is probably best to bring in an outside expert to help, because it is easy for non-technical people to be snowed by technical answers if they don’t have domain-specific experience. ↩︎

The Tools We Use

“It is not only the violin that shapes the violinist, we are all shaped by the tools we train ourselves to use, and in this respect programming languages have a devious influence: they shape our thinking habits.”

– Edsger W. Dijkstra

Object-oriented languages suffer from a forced paradigm that fits UI design better than the large-scale data problems we encounter today. Languages like Java force a developer to speak in nouns and create artificial structures that don’t actually represent the systems we encounter.  The result is an inheritance tree of ever more abstract ideas being forced into shapes that don’t make sense.

Remember the QuickTime volume control? For anyone who isn’t old enough to remember, Apple QuickTime used to have a scroll-wheel volume control.  It was meant to be intuitive so users could easily understand how to adjust volume.  The actual results were not what was intended.  Most users took their mouse, grabbed the tiny edge of the wheel, moved it a quarter inch up or down, released, and re-grabbed the wheel over and over until they got the volume right.  It was painful, annoying, and substituted 30 seconds of UI training for a lifetime of frustration.

Forcing a bad model on users only limits the usefulness of the solution.  Worse, instead of accelerating progress it debilitates our understanding of the underlying issue.  Software language paradigms multiply this mis-modeling 100x, resulting in overly complex systems that are harder to change and even harder to maintain.

There are lots of examples of this kind of round-peg, square-hole problem.  SQL is another.  Large-scale, schema-enforced structured data is a solved problem.  SQL databases have done an amazing job of getting performance and consistency from data, but those aren’t the pressing issues for most startups.  Flexibility, velocity, cost, and non-specialization are the primary constraints on small startup teams; ACID compliance is not.  Developers with deep experience in SQL databases often have the drawback of thinking of most data as if it were a table, which limits options and slows the development cycle compared to less structured data.

This has become a bigger problem because of the prevalence of unstructured data and the usefulness of map-reduce in processing that data in ways that are particularly useful in web-scale systems.

The point isn’t to suggest that there is no place for things like OOP development or SQL databases, but software shops that rely exclusively on these tools are dramatically limiting themselves.  A carpenter who only uses hammers, hand saws, and screwdrivers is at a decided disadvantage to one who uses those tools AND a 3D printer.  The abstract thinking required to use these tools is definitely a hurdle to overcome, but the payoff is substantial.

Defend Itself no Matter how Small

I have been doing large-scale deployments of Raspberry Pis for some of my students and their class projects, and after doing, say, two of them I decided it would be easier to script the initial setup.  The process isn’t hard, but I thought I would document it in case anyone else is in a similar situation.

I start by connecting the Pis to a network via cable (some people carry handkerchiefs, I carry switches).  Raspbian starts with DHCP enabled and SSH configured for a default user, meaning we can use that to get the wireless configured.  Here is basically what I do in my script.

Getting Connected

Start by doing a port scan for any ssh connections on the network once the Pi is attached. For example:

nmap -T5 -n -p 22 --open --min-parallelism 200 172.16.0.0/24

We do this to pre-load the local ARP table with IP and MAC addresses.  This speeds up finding any registered Raspberry Pi MAC addresses (the Raspberry Pi Foundation has its own MAC range).  You can then search for Raspberry Pi NICs specifically by doing:

arp -a | grep b8:27:eb

You should then SSH (or better yet, copy your public key via ssh-copy-id) to the IP address(es) returned by the above command.  Make sure to change the password afterwards.  The default username and password for the SSH connection are:

Username: pi
Password: raspberry
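
The discovery steps above can be wrapped in a small helper.  This is a sketch, not my exact script: the b8:27:eb prefix is the Raspberry Pi Foundation’s well-known MAC range, and the sample arp line at the bottom is purely illustrative.

```shell
#!/bin/bash
# Extract Raspberry Pi IP addresses from `arp -a` style output.
pi_ips() {
  grep -i 'b8:27:eb' | awk '{print $2}' | tr -d '()'
}

# In practice, run the nmap sweep first to populate the ARP cache:
#   nmap -T5 -n -p 22 --open --min-parallelism 200 172.16.0.0/24 >/dev/null
#   arp -a | pi_ips
# Demo against a canned arp line:
echo '? (172.16.0.23) at b8:27:eb:12:34:56 [ether] on eth0' | pi_ips
```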

Wireless Configuration

Plug in your wireless USB adapter (unless you have a Pi 3 or later) and run the following command to see the wireless card:

iw dev

The result is a list of physical wireless devices.  You’re looking for the entry next to Interface, most likely something like wlan0.   Run the iwlist command to get a list of wireless access points you can connect to.

iwlist wlan0 scanning

Specifically, you’re looking for the value next to ESSID.  Find the one you want to connect to. To set up encryption for secure wireless, run the following command to add a network entry for your ESSID.  Replace XXXX with the ESSID name you want to connect to.

wpa_passphrase "XXXX" >> /etc/wpa_supplicant/wpa_supplicant.conf

Now type the wireless access point password and hit enter.  Finally restart the wireless interface to load the new network and get an IP address.  Replace wlan0 with the Interface name you used for scanning a couple steps above.

ifdown wlan0
ifup wlan0
ifconfig wlan0

The ifconfig is there so you can see your new wireless IP address.  You can then safely disconnect the wired network cable and SSH back into the Pi on the wireless NIC.  The Pi can safely be restarted at this point, as the wireless will auto-connect on restart.

All Your Links Are Belong to Us

I have over 200 tabs open between three computers… the insanity must stop. Link dump to follow:

Software Engineering

  • Road to Continuous Delivery – Great article covering the different stages development shops go through to get to continuous delivery.  Provides a great starting checklist of what to work on while improving your software delivery process.
  • Overview of Micro Services – Microservices have become massively popular with the advent of Node.js.  An intro to the concepts and the reasoning for using microservices.
  • Continuous Code Coverage with GCC, Hudson, and Googletest – Part of continuous delivery is continuous testing of your code base.
  • Trashing ChromeOS – Guide to building ATOM processor build-chain testing servers from Chromebooks.
  • Don’t Be Afraid of Functional Programming – The parts of JavaScript I actually like are its functional programming capabilities (callbacks and first class functions baby.)  OOP programmers get a little scared of things like Lisp, but they shouldn’t be.
  • Tessel – A JavaScript compatible (via Node.js) microcontroller.  Wow.. just wow.. Includes wifi built-in and shield/breakout board compatibility through Node modules.

Source Code

  • DevDocs – Seriously hell yeah!  Online, simple, clean, extensive, software documentation for programming languages and libraries.  Seriously, browser pin this now!
  • Sourcegraph – Search tens of thousands of code examples.
  • Explainshell – Type some bash in, it explains what it is and what it does.  Think man pages for the internet age.
  • LibCurl API Reference – Using Curl’s powerful web functionality inside of C.

Vim

  • Smart Tabs – Tabs in the leading spaces, spaces for everything else.  The way GOD intended code to be structured!
  • 76 Vim Shortcuts – Some of these I already knew, but like all things Vim… there is always more to learn.
  • Mapping Standard Shortcuts – Things like Ctrl + c for copy.  I don’t actually remap my Vim shortcuts to match but I like the article because it explains what those shortcuts are used for by default.
  • Cheat Sheet – My current favorite Vim cheat sheet.  Simple and easy to search for things… and NO ads.
  • Vim-Adventures – Learn Vim shortcuts while playing an online game.
  • Exuberant CTags – You need more jumping around in your code.  Make it easy to switch between headers, declarations, and usage locations in your projects.

C Programming

  • JANSSON – JSON in C.  Seriously freaking awesome!
  • Ncurses Programming in C – Another Linux Documentation Project about programming.  Ncurses is a C library for building text-based user interfaces.
  • Coding Unit, C Tutorial – Great introduction to C programming.  Simple examples and be sure to check out the comments below each section.
  • TutorialsPoint, The C – Another introductory guide to C development.  Better as a reference guide than the one from Coding Unit.
  • TDD in C – Love test driven development (not so much behavior driven development.) Simple way to do it in C without external dependencies.  Here is a list of tools if you do want to use external libraries for TDD.
  • Beginners Guide to Linkers – When compiling doesn’t kill you, the linker will.
  • Going from C to Go – I really like Go, but am currently doing a fairly serious C project.  Just in case I ever want to port it.

Terminal/Serial Programming

  • Termios – C serial interface library for GCC.  The link is to the man page with function examples.
  • Serial Programming – The Linux Documentation Project examples for serial programming in C.  Also look at the debugging section.
  • QTSerialPort – The Qt Library for Serial port communications.  Qt is easily the best C++ code library in existence.  While most people think of Qt only when coding GUI applications; its libraries are extensive enough to use for ANY application… even from the command line.
  • Stackoverflow Serial Examples in C – Couple good examples and they got me some of my first working C Serial code.
  • RS-232 Library – Works on both Linux and Windows.
  • Serial Programming in Linux – Wikibooks book style tutorial on serial programming in Linux.  Here is the specific section on Termios.
  • WiringPi – Serial programming interface on the RaspberryPi.  Very nice if you are using the Pi breakout pins.
  • Serial Example – Another quick example by tty1.

Cloud/Web

  • aws – The Amazon Web Service command line tool.  One of the better reference pages I have seen on it.
  • Web Graphs & Visualization – 30 tools for web based data visualization. Both SaaS and open source tools listed.
  • Let’s Make a Bubble Map – Thematic mapping.  Includes a link to a D3 tutorial for creating bubble maps.
  • Camlistore – A personal, decentralized, non-hierarchical storage system that can be synced between cloud, phone, computer, and anything else you can think of.
  • JSON Form Editor – Similar to a project I did myself a while back.  Automatically create forms based on simple JSON structures.  Makes it easy to do AJAX requests to build, populate, and check forms.

Cyber Security

SSH

  • Rsync & SSH – Combining two of the most powerful software utilities in existence to make backups.
  • StormSSH – I actually created a series of bash scripts to do SSH bookmarking.  Storm improves on this idea by directly editing your ssh config file with the stored entries. Wish there wasn’t a Python dependency.
  • SSH Kung Fu – Great tutorial covering many of the one-off capabilities of SSH like remote folder mounting, port forwarding, and connection sharing.
  • Simplify With SSH Config – Good overview of some SSH configuration and setup options.  Plays well along side the SSH Kung Fu link above.
  • Autocomplete Hostnames – Process your hosts file under /etc/ as well as your config files for autocomplete.  I use my hosts file as a blacklist, so this doesn’t work as well for me, but the information was useful for the SSH bookmark system I made.
  • Commandline Fu SSH Autocomplete – A few ways to populate the autocomplete functionality for SSH.

Bash

  • Better Bash – Simple suggestions on making better bash scripts.
  • Command Tips & Tricks — Nice overview of some tips for using Bash, Vim, networking, and general command line productivity.
  • Tmux Cheatsheet – If you know what tmux or screen are… then this is pretty helpful.  Otherwise you need to find out what tmux is.
  • Defensive Bash Programming – More and better ways to organize your bash programs.
  • BashGuide – Better bash than the Gnu Bash guide… or at least some people say so.  Evidently it has fewer “bugs” in its code examples.

Linux Bluetooth

  • Bluetooth on Fedora – A number of Bluetooth devices need re-pairing (or at least a user to be logged into the system) before they will connect after restart.  This is particularly frustrating for Bluetooth mice and keyboards.  Following the advice in the selected answer (as root) solved the problem for me.  This one was annoying and hard to track down.
  • ThinkPad Compact Bluetooth Keyboard – Cannot wait until this keyboard driver is put in the mainline kernel so I don’t have to build it every time.  This keyboard allows me to use the same keyboard when using my Carbon X1 or when at my desk.

Misc

  • Clark DuVall – I don’t know who the guy is, but he has the most freaking amazing website I have ever seen!  Command-line junkie heaven.
  • Bluetooth Deadbolt – Something to keep me from having to carry another key?  Plus it lights up!
  • LibreOffice & Google Docs – Open, edit, and save/upload Google Docs directly in Open/Libre Office.  Also has support for WebDAV.
  • Switching from Photoshop to Gimp –  List of modifications to make the Gimp feel more natural for Photoshop users.
  • Industrial Strength Bubbles – Make person sized bubbles that last for several minutes.  I know what the kids and I are doing this weekend.
  • Open Energy Monitoring – Open source automation and energy monitoring.
  • Free your Android – As in Freedom, not beer.  Useful list of Free Software version of popular software on Android.
  • Shortcut Foo – Tutorials for quickly learning misc programming environment shortcuts.

But what’s my motivation?

Scripts in general, and Bash in particular, fill an enormous amount of my time.  The ability to create scripts that handle a diverse range of inputs is directly related to how flexible and robust the code base is.  The most common problem when handling files in Bash is spaces.  Linux is case sensitive and handles spaces with less… grace… than some OSes, and Bash suffers from the same issues.  The easiest way to handle this is with the IFS system variable.  IFS is simply the field delimiter for the shell (i.e. whitespace) and, because it is a modifiable system variable, you can set it to something you will not run into.  For example:

#!/bin/bash
KEEPOLDVALUE=$IFS
IFS=$(echo -en "\n\b")
for var in *
do
    # Do something with each file name
    echo "$var"
done
IFS=$KEEPOLDVALUE

That solves the problem of spaces in simple scripts written for quick-and-dirty system management.  That said, when you are building scripts to use regularly, you will need to be more comprehensive when testing your script.

A good place to start is by setting -u. Whenever testing new scripts, try running them without any arguments but WITH -u. If you fail to correctly initialize your variables running them with -u will warn you that there is a problem. For example:

$ bash -u /tmp/mynewtestscript.sh
/tmp/mynewtestscript.sh: line 34: $DIRNAME: unbound variable

We can then verify that we have (at the very least) correctly initialized any variables that we will use and reduce the probability of side-effects.
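
As a concrete sketch (the variable name and the /tmp default are my own example, not from any particular script), giving every variable an explicit default lets a script pass the -u check cleanly:

```shell
#!/bin/bash
set -u   # abort on any reference to an unset variable

# Default to /tmp when no argument is given, so an unset $1 is not fatal.
dirname="${1:-/tmp}"
echo "Cleaning $dirname"
```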

A problem I ran into a lot in my early scripting was that I often needed the standard output of one command sent to another command as command-line arguments (as opposed to standard input). One way to solve this is Bash’s built-in command substitution, for example:

echo $(ls)

But this isn’t always elegant to implement directly, so another option is the wonderful xargs command.  xargs breaks the output of one command into individual arguments that it feeds to another command.  This allows you to use standard piping between otherwise un-pipeable commands.  For example:

ls | xargs echo

Sometimes joining two variables can be complicated when the variable names need characters between them.  To solve this you can use parameter expansion.  What this means, effectively, is that $tempvar and ${tempvar} are the same thing.  This allows you to combine variables with in-between characters without ambiguity.

_a="test"
_b="/file"
newvar=${_a}folder${_b}

Another useful tip (found via this article from hacktux) is the mktemp executable for temporary file creation.  If you need a temp file to store intermediate data, try the following:

tempfile=$(/bin/mktemp)
tempdir=$(/bin/mktemp -d)
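
A common companion to mktemp (standard practice, though not part of the original tip) is a trap that removes the temp file when the script exits, so intermediate data never lingers even if the script errors out:

```shell
#!/bin/bash
tempfile=$(mktemp)
trap 'rm -f "$tempfile"' EXIT   # runs on any exit, normal or error

echo "intermediate data" > "$tempfile"
cat "$tempfile"
```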

Another common problem for Bash scripts used for administration is that they need to be run as root (or via sudo on Ubuntu systems).  The way to check is the EUID environment variable.  Root’s EUID is always 0, so you can put a simple check at the beginning of your script:

if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root" 1>&2
    exit 1
fi

Need a random number of characters for your bash script?  Use dd and /dev/urandom to get a variable number of characters.  For example:

random="$(dd if=/dev/urandom bs=3 count=1)"

This will give you three random bytes (stored in $random) from /dev/urandom’s entropy pool.  Unfortunately, the bytes are raw binary and mostly unprintable, giving you a bunch of ?? symbols.  To convert them to base64 encoding, just pipe the output through base64 (the conversion will likely give you more than 3 characters, so be sure to trim to the number of characters you need):

random="$(dd if=/dev/urandom bs=3 count=1 | base64)"
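
To trim back to exactly the three characters you asked for, head -c works just as well as a regexp (this variant is my own addition; 2>/dev/null simply hides dd’s transfer statistics):

```shell
#!/bin/bash
# Three random bytes, base64-encoded, trimmed to exactly three characters.
random="$(dd if=/dev/urandom bs=3 count=1 2>/dev/null | base64 | head -c 3)"
echo "${#random}"   # length of the result
```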

Killing Me Softly

Not sure how I have never needed this before, but I recently had to forcibly remove a user from a login session on a remote Linux system and didn’t have a better idea than killing off their individual processes one at a time.  Thankfully, Linux provides a much more useful way of kicking users from terminal sessions (and thereby shutting down their entire process tree as well.)

$ who -u

will give you a list of user sessions based on which terminal they are logged into.  This includes X sessions, virtual terminals, remote sessions, and any text-mode logins.  The output should look something like this:

bobby    :0       2011-04-21 20:01   ?       12122
bobby    pts/0    2011-04-21 20:01   .       12405 (:0)
bobby    pts/1    2011-04-21 20:01   02:10   12322 (:0)
root     pts/2    2011-04-21 22:19   .       13887 (10.0.0.101)

You can then kill a session login by looking at the PID column and killing that process ID.  In the example above there are two virtual terminals (the pts/X sessions), a remote session (the SSH session I am using to access the machine from host 10.0.0.101), and a single local login on terminal session :0.  Because the :0 session is responsible for starting the virtual terminals, you can simply kill process 12122 to force a logout of all three sessions.

$ kill 12122

Entirely too easy.  If you would like to be kind (I am NOT) and actually warn your users that you are about to kick them off, you can send them a system message using the standard Unix wall command.  If you type wall you will get an open text area to type your message (end the message by pressing Ctrl+D), or you can pipe a message to standard input like so:

$ echo "My name is Inigo Montoya. You killed my father. Prepare to die." | wall

Wall will send a system message to every terminal session that allows messages (if you are root, that means everybody.)
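
Putting the pieces together, here is a small sketch that pulls the session PIDs out of who -u output.  The column logic (the PID is the last field unless a trailing “(host)” comment follows it) is my reading of the sample output above, so verify it on your own system before killing anything:

```shell
#!/bin/bash
# Extract session PIDs from `who -u` style output.
session_pids() {
  awk '{ if ($NF ~ /^\(/) print $(NF-1); else print $NF }'
}

# Live usage would be:  who -u | session_pids | xargs kill
# Demo against the sample output shown above:
printf '%s\n' \
  'bobby    :0      2011-04-21 20:01   ?       12122' \
  'root     pts/2   2011-04-21 22:19   .       13887 (10.0.0.101)' \
  | session_pids
```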

the whole world is here for me to see

A self-inflicted hard reboot caused a bad block write on my Fedora 16 laptop the other day.  Usually this isn’t much of a problem (I have been using Linux every day for about 12 years now), but I discovered something I didn’t know until AFTER the reboot.

Fedora 16 has changed to using GPT disk labels instead of the old standard DOS partition labels.  While this is a HUGE improvement over a system that has been in place for over 30 years, it does lead to some problems when debugging issues, because I had not used this format before.

So for the last couple of weeks I lived with some bad blocks and simply exited out of the rescue boot to complete the boot process (as I tried, without any luck, to fix the problem the old way).  Well, the solution finally presented itself.  Not only will the updated Linux fsck command fix the problem, but this solution will fix most system hard drive sector issues; and it is easy.

From the rescue command prompt type:

blkid

to identify the block partitions present on the system.  Read your crash error message and identify the bad partition by name; then locate that partition name in the results of blkid.  Finally run:

fsck -y /dev/mapper/root.hd-1

Replacing /dev/mapper/root.hd-1 with the full correct path of the drive partition name provided by blkid.  Then finish your boot and go back to the rest of your wonderful Linux experience.

The Hand That Feeds

I have been doing a fair amount of programming lately and undoubtedly this leads to counterproductive lifestyle behaviors while I am deep “in the code.”  Obvious examples include eating pizza three or more times a day, drinking a couple gallons of coffee (always black) at a sitting, failure to exercise (or even leave my chair for that matter), and listening to music at volumes that are generally reserved for the decks of aircraft carriers.  In most cases the music is some mix of metal, techno, or industrial (or in this case all three.)  Because of the recent soundtracks for “The Girl with the Dragon Tattoo” and “The Social Network”, I have degenerated to listening to everything ever produced by Trent Reznor.  A habit, I am certain, that will continue until I burn it out of my system and move on to something moderately less abusive… like maybe The Prodigy??!?

So as not to be alone in my depravity, here is the current Top 11 list of favorite Nine Inch Nails/Trent Reznor songs in order of preference:

  1. The Hand That Feeds, With Teeth
  2. Dead Souls, The Crow Soundtrack
  3. Immigrant Song, The Girl With The Dragon Tattoo Soundtrack
  4. Head Like A Hole, Pretty Hate Machine
  5. In Motion, The Social Network Soundtrack
  6. Just Like You Imagined, The Fragile
  7. We’re In This Together, The Fragile
  8. Terrible Lie, Pretty Hate Machine
  9. Hurt, The Downward Spiral
  10. Closer, The Downward Spiral
  11. Only, With Teeth

And while it probably deserves its own list here is an excellent sampling of some of Trent’s slower tempo ballads:

  1. Hurt, The Downward Spiral
  2. Something I Can Never Have, Natural Born Killers Soundtrack
  3. Leaving Hope, And All That Could Have Been

Fear Lying Upon a Pallet

Almost all of my recent work has been using NoSQL solutions, my favorite of which is CouchDB.  Easily the best feature of Couch is the RESTful JSON API it uses to provide data.  Because you get your DB queries back directly as JavaScript objects, you don’t have to worry about application servers or middle-tier systems for N-tier development.  This is HUGE and makes the whole web development experience (and given that most mobile applications are actually web apps) much cleaner, faster, and more functional for the end user.

Couch does have a couple of weaknesses.  The one that has been giving me the most headaches is the lack of documentation for the query parameters the server accepts when requesting a view (map/reduce). So here are a number that I have found useful over the last few months.  I will update this list as I find more.

  • key=abc The most commonly passed option to a given CouchDB view.  This provides a way to select a single (well, probably unique) key for a given view.  That said, view keys DON’T HAVE TO BE UNIQUE in CouchDB, meaning that if more than one row matches abc this will return all of those results.
  • keys=[abc,123,ACC] A JSON-encoded list of keys to use in a given map/reduce function.  Basically the same as above, but without the need to make multiple network queries.
  • startkey=abc Used with endkey=abC to provide range selection for a given view.  startkey will accept (as valid input) anything that would be valid in a standard CouchDB view key, even JSON objects.  So think startkey=[a,]&endkey=[a,{}] to get a range of all keys=[a,somethingElse].
  • endkey=abC Counterpart of startkey; see the above reference.  One thing to note: it is generally better to specify an object at the end of a range if you want to inclusively select the range.  So {} is a better end-range value than ZZZZZZZZ.
  • limit=100 Selects only the first N results.  This parameter is particularly useful for paginated results (like “showing 1-100 of 399”) and reduces network bandwidth for a given request.  Because map/reduce results are cached upon request, the server’s response time isn’t any faster, but there is less data to download.
  • skip=100 Works with the limit parameter above to return a grouped result set.  For example, you can limit the result to 100 documents starting from 101 and going through 200 (think email counts in Gmail) with ?limit=100&skip=100.
  • descending=true Reverses the order of the returned results.  Will also work with limit, skip, startkey, etc.
  • group=true The default result of a map/reduce function (which has been re-reduced) is a total, i.e. a single number.  In my case this is seldom the result I am actually looking for, so this option provides the bridge between the full re-reduce and what is most commonly sought: grouped results.  When this option is passed, the reduce results are grouped by the map keys.  Instead of a single row with {key:null, value:9999}, you will get multiple rows with the key being the map key, e.g. [{key:"bob",value:444},{key:"tom",value:555}].  If you create design documents and view them inside Futon, group=true is the default, which can be a little confusing when you make a raw JSON request and get a different result.
  • group_level=2 An alternative to the above is the group_level option, which groups the resulting reduce by the number of levels specified, IF your key is an array with at least that many elements.  While the example here uses two levels, the number can be as many array places as your key has.  This becomes particularly helpful when working with years and dates.  For a detailed example check out this post.  Note that group=true is the functional equivalent of group_level=exact.
  • reduce=false Turns OFF the reduce function on a given map/reduce query.  This is the default if no reduce is defined, but you can override it on views that DO have a reduce function if you only want the results of the map.
  • include_docs=true For map queries (that do not have a corresponding reduce), this option will include the original document in the final result set.  This means the structure of your JSON rows will be {_id, key, value, doc} instead of the normal {_id, key, value}.  This saves you an additional JSON request if you are using the map query as a lookup for a particular document.
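
These parameters are just appended to the view URL as a query string.  Here is a quick sketch with curl; the host, database, and design document names are placeholders of my own invention, not from a real setup.  Note that string keys must be valid JSON, so the surrounding quotes need URL-encoding as %22:

```shell
#!/bin/bash
# Build a typical CouchDB view query; everything after the host is made up.
host="http://localhost:5984"
url="$host/mydb/_design/app/_view/by_name?startkey=%22abc%22&endkey=%22abc%22&limit=100"
echo "$url"
# Fetch it with:  curl -s "$url"
```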

Landed on Us

The new graphical boot splash for Linux is a program called Plymouth.  It provides features like kernel mode-setting graphics, flicker-free boot messages, full boot logging, and… animation. The install is pretty simple; as the root user do the following:

yum -y install plymouth-theme-*
plymouth-set-default-theme --list (to see a list of all installed plymouth themes)
plymouth-set-default-theme nameOfMyTheme -R

Of particular note: the -R is different from earlier versions of Plymouth, which required you to run the command plymouth-rebuild-initrd.  Most tutorials online list the old way of rebuilding Plymouth, and following them will leave you with an unchanged system.

One of the nice features of Plymouth is that the boot splash is loaded early, while the initial RAM disk image is being loaded.  This means you get the pretty boot image while doing things like entering your hard drive decryption passphrase.