The Foundation of Safety

I’ve been working on a tool to automate posting messages on WordPress from a Git repository. The project has been a lot of fun, partly because I’ve been trying out Go for the first time, but also because it covers several different technologies at the same time. To post to WordPress I’ve been using its REST API. However, there is a significant issue with the current REST interface for WordPress. The way WordPress is configured by default, the entire user list for a given site is available via a non-authenticated GET request. If you are running a site on a recent version of WordPress (the REST API is enabled by default on versions 4.7 and above) you can see for yourself with a simple curl request:

curl -X GET https://www.mysite.com/wp-json/wp/v2/users

This is simply not acceptable for most installations. The best fix is to add a filter to the REST API itself that blocks the /wp/v2/users endpoint for any verb. To do that, go to Plugins → Editor. Under Select plugin to edit choose WP REST API and hit the Select button. Under Plugin Files click on plugin.php. In the file that comes up, add the following code just after the add_filter/add_action function calls (around line 134):

[php]
add_filter( 'rest_endpoints', function( $endpoints ){
  if ( isset( $endpoints['/wp/v2/users'] ) ) {
    unset( $endpoints['/wp/v2/users'] );
  }
  if ( isset( $endpoints['/wp/v2/users/(?P<id>[\d]+)'] ) ) {
    unset( $endpoints['/wp/v2/users/(?P<id>[\d]+)'] );
  }
  return $endpoints;
});
[/php]

Now click Update File and you should get a 404 response if you try your curl request again.
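
For reference, here is roughly what the check looks like from the command line once the filter is in place. The exact wording of the error body may vary with your WordPress version, but the status should now be a 404 with WordPress’s rest_no_route error:

curl -i https://www.mysite.com/wp-json/wp/v2/users

HTTP/1.1 404 Not Found
{"code":"rest_no_route","message":"No route was found matching the URL and request method","data":{"status":404}}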

The sum of all fears

Update 5/25/2017: This is a post I started over a year ago. In the interim Ubuntu has officially dropped its plans for a convergent desktop. Mark Shuttleworth might argue that convergence will eventually happen, but ultimately that doesn’t matter.

“In business being early, or being late, is the same thing as being wrong.”

Outstanding article over at TechRepublic discussing the lack of momentum Ubuntu has had over the last couple of years. The basic rundown is that the author believes the long-term goal of “the convergent desktop” is causing other, less important goals to slip.

For those who haven’t heard of the convergent desktop (or simply convergence), it is the idea currently being chased by both Microsoft and Canonical (the company behind Ubuntu) whereby your phone/tablet can also be your desktop/workstation. Sometimes this is associated with a seamless user experience that “transcends” both use cases (i.e. is the same environment on both platforms), but more often it is based on some kind of modal shift when the device size changes. So Windows 8 becomes more Windows 8-ish on a phone, but feels a little more like Windows 7 on a 22″ monitor with a mouse.

Google is, of course, more concerned with turning everything into an extension of the web via Chrome and/or Android. This means that they ultimately don’t care whether it is a desktop or a phone running applications, as long as the data is stored in their cloud or provided by one of their services. So what is Apple’s strategy concerning convergence? Ahhh, now you get to the meat of the problem.

Apple, always laser-focused on user experience, figured out a while ago that convergence SUCKS. It really, really does, and here is a brief explanation why.

A great desktop experience is going to be focused on use cases where people are going to use desktop applications.  I call these users “creators” because they primarily use their computers for creative endeavors.  Think software development, editing photos, writing books, mixing music,  making spreadsheets, etc.

In this vein, the tools for creating are centered around the ability to produce new material. Keyboards are spectacular input devices for creators. I can type faster than I can write. Even though my ultrabook has a touchscreen, I never use it, because my ten fingers are faster for creating things than a single pointing finger is. When fine-grained control inside a two-dimensional canvas is needed, a mouse is significantly better than either a touchscreen or a touchpad.

A great tablet experience is going to be focused on use cases where people are not going to be creating. I call these users “consumers”. When using my tablet I am almost solely relegated to the role of consuming information: reading email, watching Netflix, looking up recipes on Google, etc. Consuming requires less functionality than production, and added interface utilities for those edge cases would just take away from the user experience.

Now obviously most users spend some time during the day being both a consumer and a creator. This is not a judgment about how a user chooses to use their technology, but a recognition that different use cases should be designed around how the system will actually be used.

It is hard to make a really functional sports car that can also be a useful pickup truck.  Trying to make one into the other generally causes you to have a tool that is good at neither.

Thunder on the Plains

ThunderPlains 2015 is over, and overall I was impressed with the quality of the presentations and the event itself, but mostly with the OKC community as a whole. That is particularly impressive given this was only the second year of the event.

The day started with a significant announcement: CodeForOKC.org will hold its first meeting later this month. Most people who know me know that I am a small-government libertarian, but I am a huge fan of local government (government is best when it is closest to the people it represents.) This will give coders a chance to serve their local community. The first meeting will be on October 27th at 6:00 PM; check out the meetup.

Listed below is some of the presentation material, links, and references mentioned by the presenters (at least for the sessions I attended.)

Mobile Applications with JS & Ionic

So Tell Me Again Why We’re not Using Node.js

Supercharge Your Productivity with Ember.js

Building Massive Angular Apps

Your Grandparents Probably Didn’t Have Node

The Importance of Building Developer Communities

Life is what you make of it

The most difficult aspect of software development for new programmers often has nothing to do with algorithmic complexity or syntactical quirks; it’s all the other “stuff” associated with building, managing, and testing software systems. When a developer steps into an existing business that already has a software stack to support, the problem can be mitigated by relying on the institutional knowledge the existing developers have built up over the course of maintaining their software. I haven’t, in most cases, had that fortune in my career, because either a) I was the company’s first software engineer, or b) the existing software engineers had become proficient at an “alternative” software stack (and honestly, alternative is the kindest word I could come up with for Mainframe/COBOL.)

The longer a programming language has been around, the more complex these build and management tools get. The reasons are pretty simple. The longer a language has been used, the more complex and more broad the uses of that language become. Build tools generally start off pretty simple (make was originally an 8 line bash script for gods sake) but they must expand to cover more and more complex setups with more and more non-standard configurations. In the most extreme cases the support tools even need to consider multiple platforms on multiple hardware configurations. This problem is exacerbated when a language needs to be “compiled” (and I use the term loosely) or works on “core” systems, meaning closer to the hardware, network, or data layer.*

C suffers from all of the above problems and more. It has been around for roughly 40 years, in constant usage, on every platform ever made (supercomputers to toasters), and has been used for hardware drivers, operating systems, core network stacks, and even to create other programming languages; all of which means C may be the most complicated system ever supported by mankind. I’m really not kidding about this. More than one person has pointed out that the Linux kernel (95% pure C code) is many orders of magnitude more complex than sending a man to the moon was, or even than sending a woman to Mars will be. Anyone who has had to create a Gnu build-chain supported C program from scratch has had to kill themselves learning the intricacies of make, automake, configure, autoconf, m4, autoreconf, cmake, libtool, and autoheader. Seriously, a “correct” Gnu C project with 1 header file and 1 C file has 26 build-chain files supporting it on initial setup.

Recently I have been doing some really interesting C development on micro-mobile platforms. The first language I did significant (i.e. not a GWBasic MadLibs game) development in was C.*** My college experience with C was relegated to a couple hundred lines and using the up arrow to re-compile the program after changes. Now my annoyance with the autoconf build tools (and their many, many gotchas) is replaced with the need to support cross-compiling, manage external libraries, and automate build deployments. I have had to learn each of these tools and what it is they accomplish for me so I don’t have to re-invent the wheel. Here are some of the more useful sources of information I have come across:

  • GNU Autoconf, Automake, and Libtool – by Gary V. Vaughan, Ben Elliston, Tom Tromey and Ian Lance Taylor. Available as a web book, it covers the entire build chain and practical usage of each of the tools. It also does a great job of showing how modern technological development owes a huge debt to the flexibility and power these tools gave (and continue to give) C developers.
  • Gnu.org amhello – A “Hello World” tutorial for getting Autotools set up and configured in a simple project. A great example for getting a full build setup running for C (a minimal version of that layout is sketched just after this list). The full code can be found in the automake doc folder on Linux systems, generally something like /usr/share/doc/automake/amhello-x.x.x.tar.gz.
  • Clemson Automake by Example – Old article (the page’s images are all broken) that walks through a simple C program and its build chain. Excellent tutorial for getting a novice programmer set up with a distributable and effective build environment.
  • Autotools Mythbusters – A practical, if high-level, overview of autotools and its associated components. There is an appendix with a list of examples that is particularly outstanding. Think Stack Overflow for autotools, aggregated into a cookbook.
  • Simple Makefile Tutorial – A newbie guide to creating Makefiles for building software.  If I include code examples in the project documentation I will generally create a simple Makefile that will build the examples with a “make someexample”.
  • Martin Mann’s HowTo Autotools – The examples are in C++ but the step by step process to add functionality to the autotools build chain is outstanding.  Especially useful if you have figured out some of the basics already.
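
For my own reference, here is a minimal sketch of that amhello-style layout, condensed into shell commands. This is just an illustration, not the exact amhello code: the project name, version, and file contents are placeholders, and I have flattened everything into a single directory to keep it short.

[bash]
# Throwaway project directory
mkdir hello && cd hello

# Minimal configure.ac: name the package, enable automake, find a C compiler
cat > configure.ac <<'EOF'
AC_INIT([hello], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
EOF

# Minimal Makefile.am: build one program from one source file
cat > Makefile.am <<'EOF'
bin_PROGRAMS = hello
hello_SOURCES = main.c
EOF

# The one actual C file in the project
cat > main.c <<'EOF'
#include <stdio.h>
int main(void) { printf("Hello, autotools\n"); return 0; }
EOF

# Generate configure and Makefile.in, then build as usual
autoreconf --install
./configure
make
[/bash]

Even this tiny example will scatter a couple dozen generated support files around the directory once autoreconf and configure have run, which is exactly the build-chain file explosion I was grousing about above.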

Finally, because setting up and creating the necessary files for getting a C project started with libtool/automake is so annoying, I decided to create a single-file bash script to do the work for me. You can find it as a gist on GitHub. You can download and run it by doing a:

wget http://tinyurl.com/brockers-cmaker -O ~/bin/cmaker && chmod +x ~/bin/cmaker

On the Linux command line. Then create your new C/Autoconf project with cmaker init newprojectname. My primary concern with the script was that it should need NO outside dependencies besides libtool/automake itself, and that it should generate everything needed to start the autoreconf --install, ./configure, make process. Hopefully I will add some additional functionality to it soon.
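
Assuming the download above worked and ~/bin is on your PATH, the intended workflow looks roughly like this (the project name is just an example):

cmaker init newprojectname
cd newprojectname
autoreconf --install
./configure
make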

* As an example, look at Perl. It started as a glue language to allow developers to piece together software solutions in a single language instead of having to create divergent sed, awk, and grep scripts in sh**. Then the WWW took off and the little glue language became the core component of the most powerful websites on the planet. Perl went from being a support language to the core language of all things HTTP. The number of tools exploded. CGI.pm, mod_perl, and DBI gave the developer massive power, but managing these libraries in production created a boom of support tools (kids these days forget that CPAN was the Ruby Gems/Bundler of its day.)

** As a side note, that last sentence sounds more like a caveman grunting than a discussion of software development tools.

*** OK, technically it was C++, but our CS chair was a former NASA Chief Engineer who basically taught us C using a C++ compiler… with a little Class thrown in. I think my first object was a linkedList with methods push and pop.

Virtual Private Networking in AWS

Been doing lots of VPN setup and configuration lately, especially inside of Amazon Web Services (AWS) Virtual Private Clouds (VPCs.) They have a built-in VPN capability using IPsec, but it generally seems specifically focused on device-to-device (D2D) configurations. Depending on the need, I have turned up StrongSwan and/or OpenVPN as a solution.

OpenVPN has the advantage of being able to do SSL VPN on port 443, making it look exactly like HTTPS web traffic (and effectively making it unblockable by network administrators.) Things like proxy servers don’t even know you are creating a VPN tunnel. However, on Windows, OpenVPN client software has to be installed to use it.
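
For what it’s worth, the part of the OpenVPN server config that makes this work is tiny. This is only a sketch of the relevant directives, with placeholder paths and addresses, and it assumes the usual certificate/key material already exists:

# /etc/openvpn/server.conf (fragment)
port 443
proto tcp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
server 10.8.0.0 255.255.255.0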

StrongSwan is an IPsec VPN option that works well with existing P2P VPN systems. The native Windows VPN tools work out of the box with a standard StrongSwan configuration (as long as your certs have been signed by a trusted CA.) Performance is also very good.
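
As a rough sketch (not a drop-in config), an ipsec.conf connection for the native Windows client tends to look something like the following, assuming IKEv2 with a CA-signed server certificate and EAP-MSCHAPv2 user authentication; the names and subnets are placeholders:

# /etc/ipsec.conf (fragment)
conn windows-clients
    keyexchange=ikev2
    left=%any
    leftid=@vpn.example.com
    leftcert=serverCert.pem
    leftsubnet=0.0.0.0/0
    right=%any
    rightsourceip=10.10.10.0/24
    rightauth=eap-mschapv2
    eap_identity=%identity
    auto=add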

So far, I really, really like OpenVPN; once it is configured it works everywhere, regardless of network policy or ISP limitations. Linux Network Manager has built-in support for it, making it very easy to configure clients to use it as well. That said, for IPsec configurations that need to connect to Windows clients, StrongSwan has been my go-to solution.
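
Back on the Network Manager point: if the OpenVPN plugin for Network Manager is installed, importing a client profile from the command line is a one-liner (the .ovpn file name here is just an example):

nmcli connection import type openvpn file client.ovpn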

Useful links follow:

Linux StrongSwan Server

Workstation StrongSwan Setup/Install Client

OpenVPN on Ubuntu

drift toward unparalleled catastrophe

My home configuration has two Planar 20″ monitors as my primary display. They have worked fairly well, with the exception that any sudden change in input signal seems to cause them to freak out and change their sync levels to non-standard ranges. Resetting them is the fix, but Planar is kind enough to NOT mention how to do that in any of their documentation. So, for the benefit of mankind, here is the process for resetting a Planar PL2010MW to factory defaults:

Unplug the monitor. Counting from the left, there are five buttons on the bottom of the monitor (the rightmost being the power button.) Press the second and fourth buttons from the left and hold them down while plugging the monitor back in. Count to five, and release.

Other models of Planar use the second and third buttons, with variations of releasing immediately after plugging in, or waiting until the main power light turns green. In addition, if you are using some versions of Linux you may have to restart X before you see the monitor in your hardware setup.

Unless you continue to remember it

The dynamic device interface for Linux is called udev. Generally it works without complaint or frustration, but it does have some interesting side effects if you are doing more involved system configuration. The one that tripped me up today is that udev keeps a record of every NIC that has been dynamically created during its lifetime. For example, if you are using a wireless USB NIC (see my post yesterday) and you plug in a different one than you used before, the new NIC is going to be wlan1 instead of wlan0. Generally nobody would care, but in this case I did. Thankfully, modifying these records is pretty easy. The device history is stored in /etc/udev/rules.d/70-persistent-net.rules and can be modified by hand. Just change the wlan1 to wlan0 and delete the other entry.
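
For reference, the generated entries look roughly like the lines below (the MAC addresses are made up, and the real file carries a few extra match attributes). To get the new adapter to come up as wlan0, delete the stale line and change the NAME on the remaining one:

# old adapter - delete this line
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:aa:bb:cc", KERNEL=="wlan*", NAME="wlan0"
# new adapter - change NAME="wlan1" to NAME="wlan0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:dd:ee:ff", KERNEL=="wlan*", NAME="wlan1"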

Once again, text file configuration is FREAKING AWESOME!

The gap between the ones and the zeroes

Wireless configuration on embedded Linux systems has been pretty well documented for a while now. If you are running a desktop version of Linux, support for your wireless device (either natively or through ndiswrapper, the Windows XP NDIS compatibility layer) is likely to be transparent to you. The situation is slightly different when you enter the embedded side of Linux, where non-native driver support is really not an option. That said, I have fallen in love with the Edimax Technology wireless USB NIC (it uses a Realtek chipset) because they are smaller than my thumbnail, work with any Linux distribution you can think of (even Raspberry Pi), and cost about 10 bucks. Heck, they even support 802.11n. Getting this thing enabled and working on Debian from the command line is pretty simple:

apt-get install firmware-realtek
modprobe rtl8192cu
ifconfig wlan0 up

Then iwlist wlan0 scan will show you a list of the available wireless networks. Basically, apt-get installs the firmware, modprobe loads the driver, and ifconfig turns on the wireless device (otherwise you get “wlan0 Interface doesn’t support scanning : Network is down” when you try to scan). Not exactly the best error message, but anyway…
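
If you only care about the network names, filtering the scan output is enough (this assumes the adapter really did come up as wlan0):

iwlist wlan0 scan | grep ESSID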

The evils which have never happened

I found this stupidly useful shortcut inside of cron.  Generally crontab entries look like this:

* * * * *  username dosomething

With the *s corresponding to minute, hour, day of month, month, and day of week. But cron also has a couple of shortcuts that are useful for general system maintenance. Specifically @reboot, which replaces ALL of the “*”s and is run after each system reboot. There is also a system-wide directory under /etc called cron.d, which is wonderfully useful for package management because you can drop custom package cron jobs into the directory without directly editing the crontab file.
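
For example, a package-style drop-in using the @reboot shortcut might look like this (the job name and script path are made up for illustration):

# /etc/cron.d/rebuild-cache
@reboot  root  /usr/local/bin/rebuild-cache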

All of this information is well known among the Unix community as a whole and fairly well documented in about 10,000 different places. Here is something that isn’t quite as easy to find but still ends up being pretty important…

File entries in cron.d cannot have a period in their name…. no file extension… no period separator… NOTHING… otherwise cron simply doesn’t run the file!!!

I just about killed myself debugging this one over the last two days. </crying>  Now if you will excuse me, I am going to drink my body weight in beer.

Frittered away by detail

My first reading of the HTTP/2.0 draft proposal left me with the feeling that they were trying to address issues that are not really problems. At least, not problems unless you happen to be someone like Google or Cisco. Part of what has made the internet so ubiquitous is the ease with which people can see and understand the basic underpinnings of how everything works. For example, I challenge you to find a developer who didn’t start their career by right-click -> View Source’ing a website. This is the very same reason that exceedingly popular web specifications are commonly NOT industry specifications. Something like XML is so obnoxiously complex and excessive that it often seems like the only companies using (and making money off of) such technologies are large institutional players like Oracle and IBM. Instead, start-ups, innovators, and entrepreneurs continually choose things like JSON because it is simple and easy to make robust. Honestly, I don’t know a single developer using AJAX that actually uses XML (the X in AJAX) because all it does is add size and complexity.

If you get the chance, please read this great post by The Accidental Businessman. It does a good job of explaining some of the issues I see in HTTP/2.0 and what we are losing by making a more “computer focused” internet.