Best Linux Git GUI/Browser

Since moving to Linux, I’ve also moved to Git (from SVN) and have found it to be a reliable friend that, as a technology, is a significant step up from SVN. But as a day-to-day productivity tool, Git made me feel the sting of its “hardcore h4x0r” roots: it lacks a GUI in the same league as TortoiseSVN.

But there is hope. And it comes from the unlikeliest of sources: my IDE, Rubymine.

Rubymine’s Git integration is superb. It supports hierarchical browsing of your current branch, in exactly the manner of Tortoise. It also offers:

  • A directory-based, graphical way to revert changed files or directories
  • The ability to see a history of changes to a file (and return to a specific older version, if desired), along with one-click access to a visual diff
  • The ability to mass merge files in different branches by batch selecting them and choosing a merge method (i.e., “Use branch A version” or “Use branch B version”; it also supports manual merging via a graphical merge tool)
  • One-click comparison of your current file to the current repository file

In a nutshell, nearly all of the efficiencies that TortoiseSVN provided as a graphical source control tool for Subversion, Rubymine provides for Git. There is one exception, which I have implored the creators of Rubymine to add: the ability to see the history for a directory (rather than a file) within your project. Knowing the crack team of Rubymine developers, that feature is probably on the way soon, but even before its arrival, they’ve still managed to build the best pound-for-pound Git graphical interface I’ve been able to uncover.

Git move commit to another branch

Rubymine’s stellar Git integration means that I seldom have to tinker with Git on the command line, but an exception to this is when I switch branches and forget to switch back before making my next commit. D’oh!

The answer is git cherry-pick.

The syntax is simple:

git cherry-pick [commit ID]

You can run “git log” to get the ID of your commit. You can also use a graphical tool like Giggle, which lets you see all commits to all branches.

If you had the misfortune of making your checkin while not on any branch, you can run “git reflog” to see all recent checkins, including those that aren’t on any branch, and merge your master branch with the unnamed commit that your checkin went to. See Stack Overflow for more details on what to do in this scenario.
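
To make this concrete, here’s a minimal sketch of the whole recovery, assuming the stray commit landed on master but belongs on a branch called my-feature (the branch names and the abc1234 commit ID are placeholders):

# Find the ID of the stray commit (use "git reflog" instead if it isn't on any branch)
git log
# Copy the commit onto the branch it should have gone to
git checkout my-feature
git cherry-pick abc1234
# Drop the stray commit from the branch it landed on, assuming it hasn't been pushed yet
git checkout master
git reset --hard HEAD~1

If the stray commit has already been pushed, use “git revert” on master instead of the reset.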

Javascript play sound (wav, MP3, etc) in one line

Alright, enough with the 10-page tutorials from 2006 describing detailed browser-specific implementation tricks to get sound playing. Google needs a refresh. In my experience in the 2010s, here’s all you need to do to make a simple sound play in your web page:

Step one: add an element to your page to hold the sound, e.g. an empty div:

<div id="sound_element"></div>

Step two: play a sound via that element by setting its contents to an embed tag that points at your sound file (adjust the src to your own file):

document.getElementById("sound_element").innerHTML =
  "<embed src='your_sound_file.wav' hidden='true' autostart='true' loop='false' />";

Or, if you’re using jQuery:

$('#sound_element').html(
  "<embed src='your_sound_file.wav' hidden='true' autostart='true' loop='false' />");

I’ve tried it in Firefox, Chrome, and IE and it works like a charm for me. I’d imagine that your user has to have some basic sound software installed on their computer, but at this point, I’d reckon 99% of users do.

Feel free to add to the comments if you find any embellishments necessary to get this working.

Savage Beast 2.3, a Rails Message Forum Plugin

Savage Beast 2.0 has been the de facto solution for those looking to add a message forum to their existing Rails site, but it was created more than a year ago, and had many aspects that tied it to Rails 2.0. Also, it relied on the Engines plugin, which is not the most lightweight plugin. Although Engines doesn’t seem to affect performance, it did rub some people the wrong way.

After a year’s worth of promises that an update was “coming soon,” it has finally arrived and is now available at Github.

Detailed instructions on getting it rolling with Rails 2.3 follow.

Installation

Currently, the following is necessary to use the Savage Beast plugin (a consolidated command-line sketch of the first four steps follows the list):

  1. The Savage Beast 2.3 plugin. Go to your application’s root directory and:
    script/plugin install git://github.com/wbharding/savage-beast.git
  2. Most of the stuff you need to run Beast…
    • RedCloth: gem install RedCloth. Make sure you add “config.gem 'RedCloth'” to your environment.rb so that it gets included.
    • A bunch of plugins (white_list, white_list_formatted_content, acts_as_list, gibberish, will_paginate). If you’re using Rails 2.2 or earlier, you’ll need the Engines plugin; if you’re on Rails 2.3, you don’t need Engines. The easiest way to install these en masse is to copy the contents of savage_beast/tested_plugins to your standard Rails plugin directory (/vendor/plugins). If you already have versions of these plugins, you can simply choose not to overwrite them.
  3. Go to your application’s root directory and run “rake savage_beast:bootstrap_db” to create the database tables used by Savage Beast. If it happens you already have tables in your project with the names Savage Beast wants to use, your tables won’t be overwritten (though obviously SB won’t work without its tables). To see the tables Savage Beast uses, look in lib/tasks/savage_beast.rake in your Savage Beast plugin folder.
  4. Next run “rake savage_beast:bootstrap_assets” to copy Savage Beast stylesheets and images to savage_beast asset subdirectories within your public directory.
  5. Implement in your User model the four methods in plugins/savage_beast/lib/savage_beast/user_init that are marked with “# implement in your user model”.
  6. Add the line “include SavageBeast::UserInit” to your User model. Location shouldn’t matter unless you intend to override it.
  7. Add the line “include SavageBeast::ApplicationHelper” to ApplicationHelper within your application_helper.rb file.
  8. Implement versions of the methods in SavageBeast::AuthenticationSystem (located in /plugins/savage_beast/lib) in your application controller if they aren’t already there (note: technically, I believe only “login_required” and “current_user” are necessary; the others give you more functionality). Helpful commenter Adam says that if you have the “helper :all” line in your application controller, be sure to add the “SavageBeast::AuthenticationSystem” line after that.
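
For reference, here’s what steps 1 through 4 look like as a consolidated command-line sketch. The paths assume the plugin checks out to vendor/plugins/savage-beast; adjust them if your copy lands under a different directory name.

# Run from your Rails 2.3 application root
script/plugin install git://github.com/wbharding/savage-beast.git
gem install RedCloth                 # then add config.gem 'RedCloth' to environment.rb
# Copy the bundled plugins into vendor/plugins (this overwrites same-named plugins you already have)
cp -r vendor/plugins/savage-beast/tested_plugins/* vendor/plugins/
rake savage_beast:bootstrap_db       # creates the Savage Beast database tables
rake savage_beast:bootstrap_assets   # copies stylesheets and images into public/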

If you’re using Rails 2.0-2.2, and thus using the Engines plugin, you’ll need a couple extra steps:

  1. Add this line to the top of your environment.rb, right after the require of boot: require File.join(File.dirname(__FILE__), '../vendor/plugins/engines/boot')
  2. Move the routes.rb file from the “savage-beast/config” directory to the root (“savage-beast”) directory of the plugin. Then add the line “map.from_plugin :savage_beast” to your routes.rb. Location shouldn’t matter unless you intend to override it.
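
Here’s a quick sketch of that second step, again assuming the plugin lives at vendor/plugins/savage-beast:

# Move the plugin's routes file to the plugin root, where map.from_plugin expects it
mv vendor/plugins/savage-beast/config/routes.rb vendor/plugins/savage-beast/routes.rb
# Then add "map.from_plugin :savage_beast" to your application's config/routes.rb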

And off you go! When you visit your_site/forums something should happen. I’ve been creating new forums by visiting /forums/new. There’s probably a hidden admin view somewhere.

Implementing Your Own Views and Controllers

Just create a new file in your /controllers or /views directory with the same name as the file you want to override in Savage Beast. If you only want to override a particular method in a controller, you can do that piecemeal by leaving your XController empty except for the method you want to override (note: I know this piecemeal method overriding works with the Engines plugin installed, but I haven’t tested it without).

If you’re integrating this into an existing site, I’d recommend you start by creating a forums layout page (/app/views/layouts/forums.html.erb). This will give you a taste of how easy it is to selectively override files from the plugin.

Demo

You can check out a (slightly-but-not-too-modified) version of Savage Beast online at Bonanzle. The differences between our version and the checked-in version are 1) the addition of topic tagging (users can tag topics to get them removed, etc.), 2) the recent post list shows posts in unique topics, rather than showing posts from the same topic repeatedly (there’s another post on here about the SQL I used to do that), and 3) skinning. None of those changes feel intrinsic to what SB is “supposed to do,” which is why they aren’t checked in.

Conclusion

Comments are most welcome. I’ll be checking in changes to the project as I find bugs and improvements in using it, but this is admittedly something I don’t have a lot of spare time to closely follow (see my other entries on the wonders of entrepreneurship). Hopefully others can contribute patches as they find time. If you like the plugin, feel free to stop by the Agile Web Development plugin directory and give it a rating so that others can find it in the future.

Rails Slave Database Plugin Comparison & Review

Introduction

Based on the skimpy number of Google results I get when I look for queries relating to Rails slave databases (and/or the best Rails slave database plugin), I surmise that not many Rails apps grow to the point of needing slave databases. But we have. So I’ve been evaluating the various choices intermittently over the last week, and have arrived at the following understanding of the current slave DB ecosystem:

Masochism

Credibility: Was the first viable Rails slave database plugin, and it used to rule the roost in Google search results. The first result for “rails slave database” still points to a Masochism-based approach.

Pros: Its once-high usage means that it is the best documented of the Rails slave plugins. Seems pretty straightforward to set up initially.

Cons: The author himself has admitted (in comments) that the project has fallen into a state of disrepair, and apparently it doesn’t play nice with Rails 2.2 and higher. The Github page lists multiple monkey patches necessary to get it working. It also appears to work with only one slave DB.

master_slave_adapter

Credibility: It’s currently the most watched slave-plugin-related project I can find on Github (with about 90 followers). It also got mentioned on Ruby Inside a couple months ago, and has been updated in the last six months.

Pros: Doesn’t use as much monkey patching to reach its goals, and is therefore theoretically more stable than other solutions as time passes.

Cons: Appears to handle a connection to only one slave DB. How many sites grow to the point of needing a slave DB, but then expect to stop growing such that they won’t need multiple slave DBs in the future? Not us. There’s also less support here than in the other choices for limited use of the slave DB: this one assumes that you’ll want to use the slave for all SELECTs in the entire app, unless you’ve specifically wrapped the code in a block that tells it to use the master.

Db Charmer

Credibility: Used in production by Scribd.com, which has about 4 million uniques. Development is ongoing. Builds on acts_as_readonlyable, which has been around quite a while.

Pros: Seems to strike a nice balance between the multiple-database capabilities of Seamless Database Pool (reviewed below) and the lightweight implementation of master_slave_adapter. Allows one or more slaves to be declared in a given model, or a model to use a different database entirely (aka DB sharding). Doesn’t require any proprietary database.yml changes. Didn’t immediately break anything when I installed it.

Cons: In my first hour of usage, it didn’t work. It seems to route most of its functionality through a method called #switch_connection_to, and that method doesn’t do anything (not even raise an error) when I try to call it; it just uses our existing production database rather than a slave. The documentation for this plugin currently borders on non-existent, although that is not surprising given that the plugin was only released a couple months ago. I emailed the plugin’s author a week ago to try to get some more details about it and never heard back.

Seamless Database Pool

Credibility: Highest-rated DB plugin in the Agile Web Development plugin directory. Has been updated in the last six months.

Pros: More advertised functionality than any other slave plugin, including failover (if one of your slaves stops working, this plugin will try to use other slaves or your master). Documentation is comparatively good amongst the slave DB choices, with rdoc available. Supports multiple slave databases, even allowing weighting of the DBs. And with the exception of Thinking Sphinx, it has “just worked” since I dropped it in.

Cons: I tried to index Thinking Sphinx and ran into difficulty, since this plugin redefines the connection adapter used in database.yml.* The changes needed to database.yml (which are quite proprietary) make me suspicious that this may also conflict with New Relic (which detects the DB adapter in a similar manner to TS). It would be nice if it provided a way to specify the database on a per-model basis, like Db Charmer does. It would also inspire more confidence if it had a Github project, so I could gauge the number of people using it.

Conclusion

Unfortunately, working with multiple slave databases in Rails seems to be one of the “wild west” areas of development. It’s not uninhabited, but there is no go-to solution that seems ready to drop in and work with Rails 2.2 and above. For those running Rails 2.2+ and looking to use multiple slaves, Db Charmer and Seamless Database Pool are the two clear frontrunners. I like the simpler, model-driven style and lack of database.yml weirdness of Db Charmer, but I really like the extra functionality of SDP. At this point, our choice will probably boil down to which one gives us the least hassle to get working, and that appears to be SDP, which worked immediately except for Thinking Sphinx.

I’ll be sure to post updates as I get more familiar with these plugins. Especially if it looks like there is any intelligent life out there besides me that is attempting to get this working.

Update 10/13: The more I use SDP, the more I’m getting to like it. Though I was initially drawn to the Db Charmer model-based approach to databases, I now think that the SDP action-based approach might make more sense. Rationale: most of the time when we’re rendering a page, we’ll be using data from models that are deeply connected, e.g., a user has user_settings and extended_user_info models associated with it. We could end up in hot water if the user model used a slave while user_settings used the master and extended_user_info used a different slave, as would be possible with a model-based slave approach. SDP abstracts this away by ensuring that every SELECT statement in the action will automatically use the same slave database from within your slave pool.

Also, though I didn’t notice it documented at first, SDP is smart enough to know that even if you marked an action to read from the slave pool, if you happen to call an INSERT/UPDATE/DELETE within the action, it will still use the master.

* Thinking Sphinx will still start/stop with SDP; it just won’t index. Luckily for us, we are already indexing our TS files on a separate machine, so I’ll just set up the database.yml on the TS building machine to not use SDP, which ought to solve the problem for us. If you know of a way to get TS to index with SDP installed, please do post to the comments below.

jQuery XMLHttpRequest Object Documentation

You’d think that one of the most commonly used objects in jQuery would be well-documented and readily Google-able. But if it is, I’m searching for the wrong stuff. Below I’ve posted some screen captures of what lurks within XMLHttpRequest. I generated these by running “console.log(request)” on an XMLHttpRequest object named “request”.

Top level accessors and functions (click for full size):

[Screenshot: top-level XMLHttpRequest accessors and functions]

Most of the juicy stuff is inside .channel (again, click for full size):

[Screenshot: the contents of XMLHttpRequest.channel]

Unfortunately, it seems that Firefox or jQuery considers it a breach of security to actually access any of this juicy info stored in .channel.

Buying Dell servers? Consider the hidden costs.

The time is currently 5:30am. Normally, this would be the sort of time that I would enjoy sleeping. However, I have instead spent this evening, or morning, or whatever-you-call-it, working with our hosting team to fix a server that refused to start after we installed a replacement part given to us by Dell. We finally got the server running again about an hour ago, and I’m now waiting for our slave database (corrupted during the crash) to re-sync. If I’m lucky I’ll hit the sack before sunrise.

While it’s always difficult to conclusively assign blame with a hardware problem, I think it is pretty safe to blame this predicament on a replacement Dell part we installed about two days ago. Prior to installing the replacement, this server would crash every 2-4 weeks, stating that its disks had become detached. We had an identical server, with identical software, that has yet to crash since we purchased it, so a hardware failure was the most reasonable explanation. Google search results on our error corroborated this. So, hesitantly, I picked up the phone and dialed Dell support.

The worst-case scenario would be that they spend hours arguing with me and making me go through rote debugging tasks, despite the numerous facts I’d accumulated that all pointed squarely at the RAID controller. The best-case scenario would be that they looked at my purchasing history of almost $20k in hardware over the last year, noted that this is the first time I have ever asked for a replacement part, and gave me the benefit of the doubt just this once.

Dial Dell. Quick arrival at support guy. Explain situation to support guy. Support guy starts reading debugging script. D’oh! Worst-case scenario begun.

As any business owner/IT manager can tell you, there is a very tangible time/cost equation that can be applied to any hardware debugging scenario. The question that one has to ask oneself, when entering into a support debugging script they know is unnecessary, is whether the 2-3 hours it will take to complete the tasks, multiplied by the uncertainty that the support person will agree a replacement is necessary, is less than the cost of just re-buying the part. When my hard drive failed on my Dell desktop, the answer quickly became “no” after I spent an hour on the phone with the tech over a hard drive that was probably worth $100 (I ordered the part off Amazon and my computer has worked fine since). It was my cautious hope that my buying history might earn me a more dignified treatment this time around. But after an hour on the phone with the tech, it quickly became clear that I was yet again headed down a multiple-hours debugging path for a part that would cost only $200-$300.

Then, a ray of light. I learned of the Dell FastTrack program, which, after a test “certifies” that a person isn’t a dunce, allows them to order their own parts without the support script. A great solution to a hard problem. I quickly signed up for the program. Though it took me probably about 5 hours to complete, I justified the time expense as a pre-payment on a lifetime of savings in support scripts.

Shortly thereafter, I became certified, ordered my replacement part, and it arrived only two days later. Great turnaround!

Then, today happened. Here is my rough table of the different paths I could have taken to get our server problem fixed:

  • Buy a new RAID card. Initial cost: $250. Future cost: none, because new parts work. Total cost: $250.
  • Persuade a Dell support person that you deserve a replacement part. Initial cost: 2-3 hours of time. Future cost: 5-10 hours of IT debugging time at $100-$200/hour (after the refurbished part fails). Total cost: 2-3 hours of life + $500-$2000.
  • Get Dell certified and order the replacement part yourself. Initial cost: 5 hours of time. Future cost: 5-10 hours of IT debugging time at $100-$200/hour (after the refurbished part fails). Total cost: 5 hours of life + $500-$2000.

The most maddening part of this story? That it makes perfect economic sense for Dell. Other than customer fury (which generally has no tangible cost), what does it matter to them if the replacement part doesn’t work? Why not just ship every returned part out to another customer, just to be sure that it is “really” defective? They kill two birds with one stone: they don’t have to spend money on a new replacement part, or in the worst-case scenario, they get a free test of their hardware from the unwitting consumer. Heck, maybe they send the replacement part, which fails, and the consumer gets so frustrated they start buying new parts instead of bothering their support team. They save time and make money.

Unfortunately, today I was the unwitting consumer. Rather than spending the $250 to buy a new part, I believed that Dell would send me a replacement that worked. Instead, it failed catastrophically two days after we installed it, and I was out 5 hours of certification time, plus $500-$2000 in IT time for fighting the problem. Not to even factor in the costs of site downtime, 10 hours of stress, and all the energy put into fixing the stuff that broke when the site crashed.

If the best part of the Internet is that justice can be served, I would like nothing more than to see Dell be served justice by consumers that are tired of refurbished parts, support personnel bent on denying necessary parts, and a general lack of benefit of doubt to the customer. 5:30am is not a time where I belong.

Can’t open file in Rubymine?

Just wasted an hour fighting a bug in Rubymine 1.1.1 that wouldn’t let me open files. Every once in a while, it just decides that it doesn’t want to open files of a certain type (in my case, all .html.erb and .rhtml files), regardless of how you try to open them (click in the file browser, choose File -> Open, etc.).

And now, ladies and gentlemen, without further ado, the fix:

File -> Invalidate caches
Close and re-open Rubymine

Sigh.

Ubuntu/Nautilus Script to Upload to Application Servers

Here’s a fun one. If you’re running Nautilus (which you probably are, if you’re running vanilla Ubuntu), this will allow you to copy one or more assets to one or more app servers with a right click.

First, create the script file:

vi ~/.gnome2/nautilus-scripts/"Upload to app servers"

Then paste:


#!/bin/bash
# Strip the file:// prefix to get the directory Nautilus is currently browsing
file_count=0
dir="`echo $NAUTILUS_SCRIPT_CURRENT_URI | sed 's/file:\/\///'`"
# Each selected file or directory name is passed to the script as an argument
for arg
do
  full_file_or_dir="$dir/$arg"
  # Work out the path relative to the local root so it lands in the matching remote spot
  relative_file_or_dir="`echo "$dir/$arg" | sed 's/\/path\/to\/local\/root\/directory\///'`"
  # -r so selected directories are copied too
  rsync -r "$full_file_or_dir" loginuser@appserver.com:/path/to/remote/root/directory/"$relative_file_or_dir"
  # Add rsync lines for other app servers here if you want...
  file_count=$((file_count+1))
done
zenity --info --text "Uploaded $file_count files to app servers"
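
One step that’s easy to forget: Nautilus only lists scripts that are marked executable, so make the file executable after saving it:

chmod +x ~/.gnome2/nautilus-scripts/"Upload to app servers"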

Once the script is saved and executable, right-clicking a file in Nautilus will show a “Scripts” menu option, which, when expanded, will have an option called “Upload to app servers”.

Upon clicking that script, the file will be copied to the corresponding location under /path/to/remote/root/directory on your appserver.com server, based on its path relative to /path/to/local/root/directory. You can select multiple files and upload them all at once. After uploading, a zenity message will pop up telling you how many files were successfully copied (this requires the zenity package: apt-get install zenity).

So for example, if you right clicked the file /path/to/local/root/directory/myfile.txt, and picked “Upload to app servers,” the file would automagically be copied to appserver.com as /path/to/remote/root/directory/myfile.txt.

This script has proved very helpful for uploading post-deploy fixes to multiple app servers without redeploying the whole lot of them.