Early Signs of a Coder

What is the single question that is most predictive of dev ability?  The best coders come from a very disparate set of backgrounds, so it’s hard to group them by a single criterion.

However, one characteristic that I have seen be consistently predictive:  starting at a young age.  Gates, Woz, and even Jordan did it.

I am not in a position to say whether I make for a good, great, or gruesome coder (1).  But I know that my history with computers is epic:

  • 1993: Age 13. Get our first computer, a Packard Bell 486SX/33 with 4MB of RAM.  It came with a San Diego Zoo App, an encyclopedia, and a really lame racing app of some sort (not to mention all the Win 3.1 niceties).  (2)
  • 1993: Learn how to use DOS, configure games to use extended memory, figure out how windows system config files work
  • 1993: Discover QBasic.  Disappointed to learn that it doesn’t compile to exes.  Decide to write a choose-your-own-adventure game with it anyway, complete with ASCII graphics.
  • 1994: Start a BBS (search for “Bill Harding” in the list), get the most calls in Poulsbo/Bremerton/Silverdale area (not saying much)
  • 1994: Enroll in the first of two computer programming courses at local community college (4.0).
  • 1995: Write VBBS scripts to emulate the five most popular BBS flavors (PCBoard, OBV/2, Renegade, etc.).  To my knowledge, no one else accomplished this.  It was like carving a pumpkin with a butter knife.
  • 1995: Finish the community college computer programming courses with 3.9 average
  • 1996: Discover I prefer girls to computers

The stretch from 1996 until late college was the dark ages of Bill programming, wherein I spent the better part of my teens having forgotten about that whole computer thing. (3)

But the reasons coding was so magnetic to me then are the reasons it still is:  a chance to dream up whatever I could think of, then make it real and feel the rush of creation.

It’s like what businesses are supposed to be:  you decide what’s important to you, and then go make that vision into reality.  But in business, constant compromise and pragmatism ultimately rule the day, unless you’re Steve Jobs.  A competent coder, on the other hand, can at any time create the best program (of a certain type) across all human history.  It could be used by thousands or millions of people.  If it’s important enough to spend years on.

Even in the more pragmatic here and now, I know of few other jobs with a comparable opportunity to build something that matters with one’s own two hands.  This is why I latched onto programming so tightly & immediately at age 13.

Most of these characteristics are present by early teens:  the desire to build things, the capacity to solve problems, the joy of seeing a project completed… I think of them as “born qualities;” thus, my hunch is that if the teenager has access to a computer, they’ll probably figure out pretty quickly if they’re a programmer or not.

Update:  interviewed about 50 UW CS students today. The earliest coder of the bunch started in his late teens. The vast majority started when they entered the program (??!!).  So perhaps this theory is bunk, or these kids all suck.

(1) I suppose that having built Bonz is strong circumstantial evidence that I’m not gruesome

(2) Note how I remember the software that came with my first computer in as much detail as my first kiss

(3) Not entirely true; I did still fix old people’s computers at exorbitant prices (consistent with what other computer fixers charged). But mostly the cute girls beat the computers in this round.

Rails Exception Handling and Notification with Errbit

Bonanza has traveled a long road when it comes to trying out exception handling solutions. In the dark & early days we went simple with the Exception Notification plugin. Its drawbacks were many, starting with the spam it would spew forth whenever our site went into an error state, leaving us with thousands of emails. There was also no tracking of exceptions, which made it very difficult to get a sense of which exception was happening how often.

Eventually we moved to HopToad (now Airbrake). It was better, but lacked key functionality like being able to close exceptions en masse or leave comments on an exception.

From there we moved to Exceptional, which we ended up using for the past year.  It was alright, when it worked.  The problem was that, for us, it frequently didn’t work.  Most recently, Exceptional reported a grand total of two exceptions over the past week, while New Relic clearly showed that hundreds of exceptions had happened over that time period.  Also damning was its presentation of backtraces, which were hard to read (when present at all), and an error index page that made it difficult to discern what the errors were until they were clicked through.

Enter Errbit.  Jordan found this yesterday as we evaluated what to do about the lack of exceptions we were receiving from Exceptional.  Within a couple hours, he had gotten Errbit set up for us, and suddenly we were treated to hundreds of new exceptions that Exceptional had silently swallowed from our app over the past year.

But it’s not just that Errbit does what it is supposed to — it’s the bells and whistles it does it with.

Specifically, a handful of the features that make Errbit such a great solution:

  • Can set it up to email at intervals (e.g., 1st exception, 100th exception) so you hear about an exception when it first happens, and get reminded about it again later if it continues to be a repeat offender
  • Allows exceptions to be merged (or batch merged) when similar
  • Allows comments by developers on exceptions, and shows those comments from the main index page so you can quickly see if an exception is being worked on without needing to click through to it
  • Easy to read backtrace, plus automagic tie-in to Github, where you can actually click on the backtrace and see the offending code from within Github (holy jeez!)
  • Liberal use of spacing and HTML/CSS to make it much easier to read session, backtrace, etc relative to Exceptional and other solutions we’ve used
  • Open source, so you can add whatever functionality you desire rather than waiting for a third party to get around to it (a fact we’ve already made use of repeatedly in our first two days)
  • Open source, so the price is right (free)

Simply put, if you’re running a medium-to-large Rails app and you’re not using Errbit, you’re probably using the wrong solution.  Detailed installation instructions exist on the project’s Github home.
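For the record, pointing a Rails app at a self-hosted Errbit instance is just the standard Airbrake notifier aimed at your own server.  A minimal sketch, assuming the airbrake gem and a hypothetical errbit.example.com deployment (the API key comes from the app page inside your Errbit install):

# config/initializers/errbit.rb
Airbrake.configure do |config|
  config.api_key = 'YOUR_ERRBIT_APP_API_KEY'  # hypothetical; copy from your Errbit app page
  config.host    = 'errbit.example.com'       # hypothetical; wherever you deployed Errbit
  config.port    = 443
  config.secure  = true
end

Once that’s in place, exceptions from your app should start showing up in Errbit just as they would in Airbrake.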

Fix: IRB pasting is super slow, typing in ruby debugger has lag

After numerous hours spent trying unsuccessfully to fix this problem by following the instructions outlined on a few StackOverflow posts, Jordan presented a recipe for fixing this problem (that actually fixed the problem) today.

The essence of the issue is that the readline package that gets installed with REE is by default some bastard version that lags, at least on our Ubuntu and Mint installations. Installing the rvm readline package did not fix it for either of us, nor did an assortment of experiments on compiling REE with different options. Here’s what did:

$> sudo apt-get remove libreadline6-dev
$> sudo apt-get install libreadline-gplv2-dev
$> rvm remove ree
$> rvm install ree

One problem you may encounter is that if you’re avoiding newer versions of Ubuntu until you admit defeat about Unity, the “libreadline-gplv2-dev” package is only present in Oneiric and above. Here’s where I found the package versions that worked with Maverick: https://launchpad.net/~dns/+archive/test0/+sourcepub/2252776/+listing-archive-extra. After downloading the packages from this link, the install sequence became

$> sudo apt-get remove libreadline6-dev libreadline5
$> sudo dpkg -i libreadline5_5.2-9~maverick0_amd64.deb
$> sudo dpkg -i libreadline-gplv2-dev_5.2-9~maverick0_amd64.deb
$> rvm remove ree
$> rvm install ree

Create a Custom Thumbnail Processor in Carrierwave

The latest in my “shouldn’t this be better covered in Google and docs if people really use CarrierWave?” series. Creating a custom thumbnail processor in CarrierWave is pretty straightforward, but not from any search query I could construct. The gist:

class MyUploader < CarrierWave::Uploader::Base
  version :custom_thumbnail do
    process :some_fancy_processing
  end

  def some_fancy_processing
    # Here our context is the CarrierWave::Uploader object, so we have its full
    # assortment of methods at our disposal.  Let's say we want to open an image with
    # openCV and smooth it out.  That would look something like:
    cv_image = OpenCV::CvMat.load(full_cache_path) # See past blog post on the origins of full_cache_path
    cv_image.smooth(5,5)

    # The key to telling CW what data this thumb should use is to save our output to
    # the current_path of the Uploader, a la
    cv_image.save_image(self.current_path)
  end
end

Just call #custom_thumbnail.url on your Uploader instance, and you should get the path to the custom result you created.
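In case it helps, a usage sketch, assuming a hypothetical Image model that mounts MyUploader as its “file” attachment:

image = Image.find(1)
image.file.custom_thumbnail.url   # => path to the processed thumbnail

# Versions are built when a file is cached/stored, so for files that were uploaded
# before the version existed, regenerate them with:
image.file.recreate_versions!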

Using this framework you should be able to get CW to perform whatever sort of custom image processing you want. Thanks to these fellows for helping me decode the #current_path magic here.

Howto: Store a CarrierWave File Locally After Uploading to S3

Have now needed to do this twice, and both times it has required about an hour of sifting through the Google morass to figure out how to pull this off. The situation we’re assuming here is that you have some CarrierWave file that you’ve stored to a remote location, and you want to get a copy of that file locally to manipulate it. With promising method names like “retrieve_from_cache!” and “cache!” and “move_to_cache”, you too may become entangled in the maze of what the hell you’re supposed to be calling to accomplish this. Here’s what.

Step 1: Retrieve from your store (the external place your file exists) to cache (your local machine).

my_cw_uploader.cache_stored_file!

After running that, if you call

my_cw_uploader.cached?

you should get a long string that is the filename of the file that got cached. Note that the cache status of a file is only saved on a per-object basis, so if you try something like

Image.find(1).my_cw_uploader.cache_stored_file!

… you will never again see the object that did the caching, and you will never be able to use that cache. Only call #cache_stored_file! on an object you are keeping around.

Step 2: Go access the file that you cached. Not quite as easy as it sounds. For this, I monkey patched the following helper into CarrierWave’s cache module:

module CarrierWave::Uploader::Cache
  # Absolute path to the locally cached copy of this uploader's file
  def full_cache_path
    "#{::Rails.root}/public/#{cache_dir}/#{cache_name}"
  end
end

So now when you call

my_cw_uploader.full_cache_path

you’ll get the full pathname to the file you downloaded.
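Putting the two steps together, a minimal sketch, assuming a hypothetical Image model with my_cw_uploader mounted on it:

image = Image.find(1)
uploader = image.my_cw_uploader         # keep this reference around; the cache is per-object
uploader.cache_stored_file!             # Step 1: pull the file from S3 into the local cache
uploader.cached?                        # => cache name of the freshly downloaded file
local_path = uploader.full_cache_path   # Step 2: absolute path to the local copy
File.size(local_path)                   # manipulate the local file however you like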

Hope this helps save someone else from an hour of Google hell.

Ubuntu, Dark Side of Simplicity

The following is my take on how the current passion for UI “simplicity” may be to blame for Unity, the downfall of Ubuntu as I’ve known & loved it. It’s wide-ranging, so I won’t be able to fully substantiate its every point, but if you simply click the area you’d like to know more about, I will wish that you were being shown detailed examples in support of my point. Many of the newly introduced usability issues in Unity are shared by Gnome3, so it seems that now is as good a time as ever for people who care to try to persuade some Linux distro to retain the high standard of developer usability we’ve become accustomed to.

#—————————

Steve Jobs broke my OS, and I don’t even use a Mac. It began about 10 years ago, around the time Jobs had re-joined Apple, and the software industry was smitten with building UI that had every button and link you could need. “If they might need it, why not include it?” seemed to be the effective rationale.

[Image: good-ole-days]

Windows XP represented the pinnacle of the art, complete with a “Start” button that, when clicked, exploded into two columns. These columns in turn had menu options like “All Programs” that could themselves balloon out to several more (overlapping) columns. In the case of “All Programs” specifically, the user was treated to an unordered list of every program they had ever installed (often hundreds). It was so…terrible…yet quick to an advanced user (e.g., those that figured out how to sort it). For new users, well, you could probably figure out some of the basics within a week.*

But soon people began to decide this arrangement was not ideal. Or even OK. I noticed this in full force with the release of the first iPhone. It was a device so stripped down that it didn’t include a feature (secure network access for business email) that could have increased its user base significantly. It launched anyway, was quickly dubbed the “Jesus Phone,” and has managed to sell a couple gajillion units since.

Gone forever were the days when “the most commercially successful” products were “the most feature-full” ones.

This evolution, which I’d pin as starting around 2007 (first iPhone), has continued expanding its base of believers to the present day. Now, in addition to the set of Apple devices, the default aesthetic for web consumer products has become “huge buttons / huge text / nothing that requires reading.”

In the context of the web, I think that this growing obsession with simple UI is usually a great thing. Like it or not, our attention is fragmented and life’s too short to read the manual for a product that I simply want to entertain me (see: Twitter, Instagram).

The problem is when the momentum for this trend** pushes it out to use cases where it makes no sense. Sometimes, a detail-rich interface is required to get the job done efficiently. In the case where an app is used by novice and sophisticated*** users, a balanced curve of “level of detail shown to user” vs “user expertise level” might look something like this

[Chart: balance_complexity1 (level of detail shown to user vs. user expertise level)]

That is, this balanced approach would dictate that novice users were exposed only to the most essential 10-20% of all UI complexity. The UI should appear very basic to them. As the user’s needs become more sophisticated, the UI reveals contextual choices and/or configuration menus that accommodate their needs as power users of the product. Novice users are happy because they don’t see the complex pieces. Sophisticated users are happy because they can use it with maximum productivity so long as they’re willing to read a handful of configuration menus.

Products rarely end up this balanced. Windows XP threw the user in the deep end, both in terms of the learning complexity and the vast sea of choices/links presented to even the novice user. OS X freed users from this soup of links and options, though before they got smart about context sensitivity, it often came at the expense of more clicks.

Ubuntu, pre-Unity, was arguably even worse than XP to the poor novice:

[Chart: balance-pre-unity (level of detail shown to user vs. user expertise level, pre-Unity Ubuntu)]

No oversized buttons or contextual UI reveal here. The only reason the project thrived was that the Ubuntu audience is made up largely of users who have advanced expectations for their OS. Many are programmers. They have to juggle IDEs, web browsers, web browser tools, and a smattering of terminal shells. Usually across multiple high resolution monitors, over multiple workspaces. To them, if complexity is the price that must be paid for configurability, then it shall be paid****.

This isn’t the sort of thing a novice will understand, let alone feel comfortable with, but the software did meet the needs of its sophisticated user base.

Then came tablets, the momentum of simplicity, and Ubuntu’s loving ode to it all: Unity.

[Chart: balance-post-unity (level of detail shown to user vs. user expertise level, post-Unity Ubuntu)]

Because this blog needs to get finished before tomorrow morning, I am forced to gloss over a detailed analysis of the functionality lost between pre- and post-Unity versions of Ubuntu. A couple salient examples include: well-integrated dual monitor support, multiple/configurable taskbar support, desktop customization, and ability to keep program menus within each program. For the quantitative-minded amongst you, the compared market share of Ubuntu vs. Mint makes the point more compellingly than my mini-list.

If it’s not already obvious, I love the trend toward simplicity. It was one of the main points of opportunity I saw in starting Bonanza back in 2008 — we sought to build a version of eBay that would be usable to busy people and non-experts. Simplicity continues to be something that I push for as often as anyone on my team, and I think it continues to be a big difference between our platforms.

But I don’t believe that “simplicity” should be the same thing as “dumbed down,” and I wonder if Unity’s pared down featureset could be a result of the Ubuntu designers mistakenly conflating “simple” and “feature-sparse”?

tl;dr

“Simple” and “effective” are closely related for novice users and for simple products. But they can be inversely related when “simple” gets in the way of “configurability,” which begets effectiveness for power users. In the case of Ubuntu, the users are largely geeks who use complex features to maximize productivity. Give the pared-down version of Ubuntu to novice users, but don’t let it rob the majority of your users of the functionality they need.*****

Update: Finally inspired to post this to HN after reading Linus’ comments about Gnome3 being a detriment to usability. Given that Gnome3 has traveled a very similar path to Unity in terms of degrading the user experience (for sophisticated users) with its newest release, I am hoping that perhaps a sympathetic designer of Unity or Gnome3 might find this.

Footnotes

* Though as a computer repair guy, I often saw the concept take far longer to sink in. And don’t even get me started on trying to teach my grandparents exactly what a file system was and “what it did”

** Regarding use of “trend” to label the simplicity movement: I mean only that it is influencing all corners of design (web, native apps, mobile, and beyond) — not that it is ephemeral or irrational.

*** “Sophisticated” here means “more advanced,” or “more demanding,” not so much the “better looking” or “more expensive” connotations of the word.

**** Of course, something doesn’t need to be complex to be configurable. Progressively revealed / contextual UIs can often deliver much of the best from both worlds. But it’s also easy to implement rather intricate revealing schemes incorrectly and end up worse off than if you had simply built a cluttered but static interface.

***** What makes it doubly insulting is that until Ocelot, we could get the functionality we needed by choosing the “Classic Ubuntu” login. What explanation is there to chop a feature that’s already been built…and provided the main lifeline to advanced users after Unity’s release?

Fix it: Ubuntu CTRL-SHIFT-V Won’t Paste Into Terminal

Ugh, just spent an hour traveling from forum to forum trying to figure out why I couldn’t CTRL-C in Rubymine and CTRL-SHIFT-V into terminal. As many forum posters were eager to point out, it is possible to use CTRL-SHIFT-INSERT or middle click to paste into terminal, just not CTRL-SHIFT-V. Unfortunately, those workarounds were not OK since my insert key is in the arctic circle of my keyboard, and I don’t want to touch the mouse.

Luckily, the folks at Rubymine helped me figure out the answer where Google couldn’t.

The problem is that when running Rubymine via OpenJDK, the clipboard used is not compatible with the clipboard used in terminal. The solution was to run Rubymine with Oracle Java instead. In Rubymine, this is a matter of first installing Oracle Java (I’ll let you Google that one; you have to add a repository) and then adding the following lines to the beginning of your Rubymine startup script:

export JAVA_HOME=/usr/lib/jvm/java-6-sun
export JDK_HOME=$JAVA_HOME
export RUBYMINE_JDK=$JAVA_HOME

After that you should be golden. In my hour of Googling I read that many other IDEs (Netbeans in particular) seem to be subject to the same problem with CTRL-SHIFT-V not working. I’d reckon that if these users were to change their application to use Oracle Java it would probably resolve the problem in other IDEs as well.

Linux Mint Firefox & Chrome :: Remove Search Branding

I’m all for Mint Linux making some bucks via their Chrome and Firefox searchers, but not if it comes at the expense of basic usability. <quickie rant> If I were the Mint maintainers, I’d take a long look at whether it was desirable (let alone essential) that they hijack my CTRL-K functionality and replace standard Google results with their poorly formatted, functionality-impaired substitute.</quickie rant>

Anyhow, if you are here, you’re probably trying to figure out how to remove the Mint branded search from Firefox and/or Chrome. And I’m here to tell you how.

Remove Search Branding from Firefox

  1. Click on the Google search icon in your title bar
  2. Click “Manage Search Engines”
  3. Click the link to “Get more search engines”
  4. Choose a Google, any Google, from Mozilla’s choices. I chose Google SSL, which worked nicely.
  5. After you install your Google SSL (or other Mozilla version of Google), click “Manage Search Engines” again, and move your new Google Search to the top of the list.
  6. Voila!

Remove Search Branding from Chrome

There are probably an assortment of ways to accomplish this. I chose to Google “Chrome deb package” which led me to Google’s official distributions of Chrome, which can be found here. After following Google’s instructions to install my Chrome package, all was well (though that meant that I was running “Chrome” rather than “Chromium.” Whatevskis.)

Other than the annoying search stuff, so far Mint Linux seems to be an easy-to-setup iteration on the developer utopia that Ubuntu was built as, before it decided to go the way of the mandatory Unity.

Ruby debugger list local variables

Short and sweet. You want to list local variables in ruby-debug? Try

info locals

You can also get ruby-debug to list your stack, instance variables, and much more. Type plain old “info” into the debugger to see a full list of what ruby-debug is willing to reveal to you.
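If it helps to see it in context, here’s a minimal sketch, assuming the ruby-debug gem and a made-up checkout method:

require 'ruby-debug'

def checkout(cart_prices)
  total = cart_prices.inject(0) { |sum, price| sum + price }
  debugger  # execution stops here and drops you into the (rdb) prompt
  # At the prompt:
  #   info locals   => lists cart_prices and total with their current values
  #   info stack    => shows the current call stack
  #   info          => full list of info subcommands
  total
end

checkout([5, 10, 15])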