Savage Beast 2.3, a Rails Message Forum Plugin

Savage Beast 2.0 has been the de facto solution for those looking to add a message forum to their existing Rails site, but it was created more than a year ago, and many aspects of it tied it to Rails 2.0. It also relied on the Engines plugin, which is not the most lightweight of dependencies. Although Engines doesn’t seem to affect performance, it did rub some people the wrong way.

After a year’s worth of promises that an update was “coming soon,” the update has finally arrived and is now available on GitHub.

Detailed instructions on getting it rolling with Rails 2.3 follow.

Installation

Currently, the following is necessary to use the Savage Beast plugin:

  1. The Savage Beast 2.3 plugin. Go to your application’s root directory and:
    script/plugin install git://github.com/wbharding/savage-beast.git
  2. Most of the stuff you need to run Beast…
    • RedCloth: gem install RedCloth. Make sure you add “config.gem 'RedCloth'” to your environment.rb so that it gets included.
    • A bunch of plugins (white_list, white_list_formatted_content, acts_as_list, gibberish, will_paginate). If you’re using Rails 2.2 or earlier, you’ll need the Engines plugin; if you’re on Rails 2.3, you don’t. The easiest way to install these en masse is to copy the contents of savage_beast/tested_plugins to your standard Rails plugin directory (/vendor/plugins). If you already have versions of these plugins, you can simply choose not to overwrite your versions.
  3. Go to your application’s root directory and run “rake savage_beast:bootstrap_db” to create the database tables used by Savage Beast. If you already happen to have tables in your project with the names Savage Beast wants to use, your tables won’t be overwritten (though obviously SB won’t work without its tables). To see the tables Savage Beast uses, look in lib/tasks/savage_beast.rake in your Savage Beast plugin folder.
  4. Next run “rake savage_beast:bootstrap_assets” to copy Savage Beast stylesheets and images to savage_beast asset subdirectories within your public directory.
  5. Implement in your User model the four methods in plugins/savage_beast/lib/savage_beast/user_init that are marked with “#implement in your user model” (see the sketch after this list).
  6. Add the line “include SavageBeast::UserInit” to your User model. Location shouldn’t matter unless you intend to override it.
  7. Add the line “include SavageBeast::ApplicationHelper” to ApplicationHelper within your application_helper.rb file.
  8. Implement versions of the methods in SavageBeast::AuthenticationSystem (located in /plugins/savage_beast/lib) in your application controller if they aren’t already there (note: technically, I believe only “login_required” and “current_user” are necessary; the others give you more functionality). Helpful commenter Adam says that if you have the “helper :all” line in your application controller, be sure to add the “SavageBeast::AuthenticationSystem” line after it.
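For steps 5 and 6, here’s a minimal sketch of what the User model changes might look like. The method names below are hypothetical (I’m illustrating the shape, not the plugin’s actual contract); check savage_beast/lib/savage_beast/user_init.rb for the real four methods it asks for.

# app/models/user.rb -- a sketch only; method names are assumptions,
# consult savage_beast/lib/savage_beast/user_init.rb for the actual list
class User < ActiveRecord::Base
  include SavageBeast::UserInit

  # Methods marked "#implement in your user model" (names hypothetical):
  def display_name
    login # whatever field your site shows publicly
  end

  def admin?
    role == "admin"
  end

  def currently_online?
    last_seen_at && last_seen_at > 5.minutes.ago
  end

  def avatar_url
    nil # return an image URL if your site supports avatars
  end
end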

If you’re using Rails 2.0-2.2, and thus using the Engines plugin, you’ll need a couple extra steps:

  1. Add this line to the top of your environment.rb, right after the require of boot: require File.join(File.dirname(__FILE__), '../vendor/plugins/engines/boot')
  2. Move the routes.rb file from the “savage-beast/config” directory to the root (“savage-beast”) directory of the plugin. Then add the line “map.from_plugin :savage_beast” to your routes.rb, as sketched below. Location shouldn’t matter unless you intend to override it.
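Here’s a minimal sketch of the resulting Rails 2.x routes file (map.from_plugin comes from Engines; the rest is your existing draw block):

# config/routes.rb
ActionController::Routing::Routes.draw do |map|
  map.from_plugin :savage_beast # provided by the Engines plugin
  # ... your existing routes ...
end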

And off you go! When you visit your_site/forums something should happen. I’ve been creating new forums by visiting /forums/new. There’s probably a hidden admin view somewhere.

Implementing Your Own Views and Controllers

Just create a new file in your /controllers or /views directory with the same name as the file you want to override in Savage Beast. If you just want to override a particular method in a controller, you can do that piecemeal: leave your XController empty except for the method you want to override, as in the sketch below. (Note: I know this piecemeal method overriding works with the Engines plugin installed, but I haven’t tested it without.)
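For example, here’s a sketch of overriding a single action (assuming the plugin’s forums controller extends ApplicationController and show is the action you want to customize):

# app/controllers/forums_controller.rb -- overrides just this one action;
# every other action still comes from the plugin's controller
class ForumsController < ApplicationController
  def show
    @forum = Forum.find(params[:id])
    # ... your custom behavior here ...
  end
end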

If you’re integrating this into an existing site, I’d recommend you start by creating a forums layout page (/app/views/layouts/forums.html.erb). This will give you a taste of how easy it is to selectively override files from the plugin.

Demo

You can check out a (slightly-but-not-too-modified) version of Savage Beast online at Bonanzle. The differences between our version and the checked-in version are 1) addition of topic tagging (users can tag topics to get them removed, etc.), 2) the recent post list shows posts in unique topics, rather than showing posts from the same topic repeatedly (there’s another blog post on here about the SQL I used to do that), and 3) skinning. None of those changes feel intrinsic to what SB is “supposed to do,” which is why they aren’t checked in.

Conclusion

Comments are most welcome. I’ll be checking in changes to the project as I find bugs and improvements in using it, but this is admittedly something I don’t have a lot of spare time to closely follow (see my other entries on the wonders of entrepreneurship). Hopefully others can contribute patches as they find time. If you like the plugin, feel free to stop by Agile Web Development and give it a rating so that others can find it in the future.

Rails Slave Database Plugin Comparison & Review

Introduction

Based on the skimpy number of Google results I get for queries relating to Rails slave databases (and/or the best Rails slave database plugin), I surmise that not many Rails apps grow to the point of needing slave databases. But we have. So I’ve been evaluating the various choices intermittently over the last week, and have arrived at the following understanding of the current slave DB ecosystem:

Masochism

Credibility: Was the first viable Rails slave DB plugin, and used to rule the roost for Google search results. The first result for “rails slave database” still points to a Masochism-based approach.

Pros: Once-high usage means that it is the best documented of the Rails slave plugins. Seems pretty straightforward to set up initially.

Cons: The author himself has admitted (in comments) that the project has fallen into a bit of a state of disrepair, and apparently it doesn’t play nice with Rails 2.2 and higher. The GitHub page lists multiple monkey patches necessary to get it working. It also appears to work with only one slave DB.
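For reference, the setup is roughly this, per masochism’s README (hedged: verify against the version you install). Your regular database.yml entry becomes the read-only connection, and a separate master_database entry handles writes:

# config/environments/production.rb -- sketch per the masochism README;
# database.yml also needs a master_database entry for writes
config.after_initialize do
  ActiveReload::ConnectionProxy.setup!
end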

master_slave_adapter

Credibility: It’s currently the most watched slave-plugin-related project I can find on GitHub (with about 90 followers). Also got mentioned in Ruby Inside a couple months ago. Has been updated in the last six months.

Pros: Doesn’t use as much monkey patching to reach its goals, therefore theoretically more stable than other solutions as time passes.

Cons: Appears to handle a connection to only one slave DB. I’m not sure how many sites grow to the point of needing a slave DB, but then expect to stop growing such that they won’t need multiple slave DBs in the future? Not us. There’s also less support here than in the other choices for limited use of the slave DB: this one assumes that you’ll want to use the slave for all SELECTs in the entire app, unless you’ve specifically wrapped the code in a block that tells it to use the master (see the sketch below).
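That master-forcing block looks something like this, if I’m reading the docs right (with_master is the method name I’ve seen; treat it as an assumption and check the plugin’s README):

# Force reads to the master in an app that defaults SELECTs to the slave
ActiveRecord::Base.with_master do
  @user = User.find(params[:id]) # reads from master, not the slave
end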

Db Charmer

Credibility: Used in production by Scribd.com, which has about 4m uniques. Development is ongoing. Builds on acts_as_readonlyable, which has been around quite a while.

Pros: Seems to strike a nice balance between the multiple-database capabilities of SDP and the lightweight implementation of MSA. Allows one or more slaves to be declared in a given model, or for a model to use a different database entirely (aka db sharding). Doesn’t require any proprietary database.yml changes. Didn’t immediately break anything when I installed it.

Cons: In my first hour of usage, it doesn’t work. It seems to route most of its functionality through a method called #switch_connection_to, and that method doesn’t do anything (including raise an error) when I try to call it; it just uses our existing production database rather than a slave (see the sketch below for what I was attempting). The documentation for this plugin currently borders on “non-existent,” although that’s not surprising given that the plugin was only released a couple months ago. I emailed the plugin’s author a week ago to try to get some more details about it and never heard back.
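For what it’s worth, the declarations I was attempting look like this, based on my reading of the DbCharmer README (:slave01 is whatever connection name you define in database.yml):

# app/models/user.rb -- sketch per the DbCharmer README; verify against
# the version you install
class User < ActiveRecord::Base
  db_magic :slave => :slave01 # route this model's reads to the slave
end

# And the call that silently did nothing for me:
User.switch_connection_to(:slave01)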

Seamless Database Pool

Credibility: Highest-rated DB plugin on the Agile Web Development plugin directory. Has been updated in the last six months.

Pros: More advertised functionality than any other slave plugin, including failover (if one of your slaves stops working, this plugin will try to use other slaves or your master). Documentation is comparatively good amongst the slave DB choices, with rdoc available. Supports multiple slave databases, even allowing weighting of the DBs. And with the exception of Thinking Sphinx, it has “just worked” since I dropped it in.

Cons: Tried to index Thinking Sphinx and ran into difficulty, since this plugin redefines the connection adapter used in database.yml*. The changes needed to database.yml (which are quite proprietary) make me suspicious that this may also conflict with New Relic (which detects the DB plugin in a similar manner to TS). It would be nice if it provided a way to specify the database on a per-model basis, like Db Charmer. It would also inspire more confidence if it had a GitHub project, to gauge the number of people using it.

Conclusion

Unfortunately, working with multiple slave databases in Rails seems to be one of the “wild west” areas of development. It’s not uninhabited, but there is no go-to solution that seems ready to drop in and work with Rails 2.2 and above. For those running Rails 2.2+ and looking to use multiple slaves, Db Charmer and Seamless Database Pool are the two clear frontrunners. I like the simpler, model-driven style and the lack of database.yml weirdness in Db Charmer. But I really like the extra functionality of SDP. At this point, our choice will probably boil down to which one gives us the least hassle to get working, and that appears to be SDP, which worked immediately except for Thinking Sphinx.

I’ll be sure to post updates as I get more familiar with these plugins. Especially if it looks like there is any intelligent life out there besides me that is attempting to get this working.

Update 10/13: The more I use SDP, the more I’m getting to like it. Though I was initially drawn to the Db Charmer model-based approach to databases, I now think that the SDP action-based approach might make more sense. Rationale: most of the time when we’re rendering a page, we’ll be using data from models that are deeply connected, e.g., a user has user_settings and extended_user_info models associated with it. We could end up in hot water if the user model used a slave while user_settings used the master and extended_user_info used a different slave, as would be possible with a model-based slave approach. SDP abstracts this away by ensuring that every SELECT statement in the action will automatically use the same slave database from within your slave pool.

Also, though I didn’t notice it documented at first, SDP is smart enough to know that even if you’ve marked an action to read from the slave pool, any INSERT/UPDATE/DELETE you happen to call within the action will still use the master (see the sketch below).
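The action-based hookup looks like this, as I understand SDP’s README (the module and method names below are from my reading of its docs; verify against the version you install):

# app/controllers/application_controller.rb -- sketch per SDP's README;
# SELECTs in these actions draw from the slave pool, writes still hit master
class ApplicationController < ActionController::Base
  include SeamlessDatabasePool::ControllerFilter
  use_database_pool :all => :persistent
end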

* Thinking Sphinx will still start/stop with SDP; it just won’t index. Luckily for us, we already index our TS files on a separate machine, so I’ll just set up the database.yml on the TS building machine to not use SDP, which ought to solve the problem for us. If you know of a way to get TS to index with SDP installed, please do post to the comments below.

jQuery XMLHttpRequest Object Documentation

You’d think that one of the most commonly used objects in jQuery would be well-documented and readily Google-able. But if it is, I’m searching for the wrong stuff. Below I’ve posted some screen captures of what lurks within XMLHttpRequest. I generated these by running “console.log(request)” on an XMLHttpRequest object named “request”.

Top level accessors and functions (click for full size):

[Screenshot: top-level accessors and functions of the XMLHttpRequest object]

Most of the juicy stuff is inside .channel (again, click for full size):

[Screenshot: the contents of XMLHttpRequest.channel]

Unfortunately, it seems that Firefox or jQuery considers it a breach of security to actually access any of this juicy info stored in .channel.

Buying Dell servers? Consider the hidden costs.

The time is currently 5:30am. Normally, this would be the sort of time that I would enjoy sleeping. However, I have instead spent this evening, or morning, or whatever-you-call-it, working with our hosting team to fix a server that refused to start after installing a replacement part given to us by Dell. We finally got the server running again about an hour ago, and I’m now waiting for our slave database (corrupted during the crash) to re-sync. If I’m lucky I’ll hit the sack before sunrise.

While it’s always difficult to conclusively assign blame with a hardware problem, I think it is pretty safe to blame this predicament on a replacement Dell part we installed about two days ago. Prior to installing the replacement, this server would crash every 2-4 weeks, stating that its disks had become detached. We had an identical server, with identical software, that has yet to crash since we purchased it, so a hardware failure was the most reasonable explanation. Google search results on our error corroborated this. So, hesitantly, I picked up the phone and dialed Dell support.

The worst-case scenario would be that they’d spend hours arguing with me and making me go through rote debugging tasks despite the numerous facts I’d accumulated that all pointed squarely at the RAID controller. The best-case scenario was that they’d look at my purchasing history of almost $20k in hardware over the last year, note that this was the first time I had ever asked for a replacement part, and give me the benefit of the doubt just this once.

Dial Dell. Quick arrival at support guy. Explain situation to support guy. Support guy starts reading debugging script. D’oh! Worst-case scenario begun.

As any business owner/IT manager can tell you, there is a very tangible time/cost equation that can be applied to any hardware debugging scenario. The question that one has to ask oneself, when entering into a support debugging script they know is unnecessary, is whether the 2-3 hours it will take to complete the tasks, multiplied by the uncertainty that the support person will agree a replacement is necessary, is less than the cost of just re-buying the part. When my hard drive failed on my Dell desktop, the answer quickly became “no” after I spent an hour on the phone with the tech over a hard drive that was probably worth $100 (I ordered the part off Amazon and my computer has worked fine since). It was my cautious hope that my buying history might earn me a more dignified treatment this time around. But after an hour on the phone with the tech, it quickly became clear that I was yet again headed down a multiple-hours debugging path for a part that would cost only $200-$300.

Then, a ray of light. I learned of the Dell FastTrack program, which, after a test “certifies” that a person isn’t a dunce, allows them to order their own parts without the support script. A great solution to a hard problem. I quickly signed up for the program. Though it took me about 5 hours to complete, I justified the time expense as a pre-payment on a lifetime of savings in support scripts.

Shortly thereafter, I became certified, ordered my replacement part, and it arrived only two days later. Great turnaround!

Then, today happened. Here is my rough table of the different paths I could have taken to get our server problem fixed:

Option | Initial cost | Future cost | Total cost
Buy a new RAID card | $250 | None. Because new parts work. | $250
Persuade Dell support person that you deserve a replacement part | 2-3 hours of time | 5-10 hours of IT debugging time * $100-$200/hour (after the refurbished part fails) | 2-3 hours of life + $500-$2000
Get Dell certified, order replacement part | 5 hours of time | 5-10 hours of IT debugging time * $100-$200/hour (after the refurbished part fails) | 5 hours of life + $500-$2000

The most maddening part of this story? That it makes perfect economic sense for Dell. Other than customer fury (which generally has no tangible cost), what does it matter to them if the replacement part doesn’t work? Why not just ship every returned part out to another customer, just to be sure that it is “really” defective? They kill two birds with one stone: they don’t have to spend money on a new replacement part, or in the worst-case scenario, they get a free test of their hardware from the unwitting consumer. Heck, maybe they send the replacement part, which fails, and the consumer gets so frustrated they start buying new parts instead of bothering their support team. They save time and make money.

Unfortunately, today I was the unwitting consumer. Rather than spending the $250 to buy a new part, I believed that Dell would send me a replacement that worked. Instead, it failed catastrophically two days after we installed it, and I was out 5 hours of certification time, plus $500-$2000 in IT time for fighting the problem. Not to even factor in the costs of site downtime, 10 hours of stress, and all the energy put into fixing the stuff that broke when the site crashed.

If the best part of the Internet is that justice can be served, I would like nothing more than to see Dell served justice by consumers who are tired of refurbished parts, support personnel bent on denying necessary parts, and a general lack of benefit of the doubt given to the customer. 5:30am is not a time where I belong.

Can’t open file in Rubymine?

Just wasted an hour fighting a bug in Rubymine 1.1.1 that wouldn’t let me open files. Every once in a while, it just decides that it doesn’t want to open files of a certain type (in my case, all .html.erb and .rhtml files), regardless of how you try to open them (click in file browser, choose File -> Open, etc.).

And now, ladies and gentlemen, without further ado, the fix:

File -> Invalidate caches
Close and re-open Rubymine

Sigh.

Ubuntu/Nautilus Script to Upload to Application Servers

Here’s a fun one. If you’re running Nautilus (which you probably are, if you’re running vanilla Ubuntu), this will allow you to copy one or more assets to one or more app servers with a right click.

First, create the script file:

vi ~/.gnome2/nautilus-scripts/"Upload to app servers"

Then paste:


#!/bin/bash
# Strip the file:// prefix to get the directory Nautilus launched us from
file_count=0
dir="`echo "$NAUTILUS_SCRIPT_CURRENT_URI" | sed 's/file:\/\///'`"
for arg # each selected file arrives as a positional argument
do
full_file_or_dir="$dir/$arg"
# Compute the path relative to the local root so it maps to the same spot remotely
relative_file_or_dir="`echo "$full_file_or_dir" | sed 's/\/path\/to\/local\/root\/directory\///'`"
rsync "$full_file_or_dir" "loginuser@appserver.com:/path/to/remote/root/directory/$relative_file_or_dir"
# Add other app servers here if you want...
file_count=$((file_count+1))
done
zenity --info --text "Uploaded $file_count files to app servers"

After saving the above script and making it executable (chmod +x ~/.gnome2/nautilus-scripts/"Upload to app servers"), when you right-click a file in Nautilus you’ll see a “Scripts” menu option which, when expanded, has an option called “Upload to app servers”.

Upon clicking that script, the relative path of the file in your /path/to/local/root/directory will be copied to the corresponding location in /path/to/remote/root/directory on your appserver.com server. You can select multiple files and upload them all at once. After uploading, a zenity message will pop up telling you how many files were copied (requires the zenity package: apt-get install zenity).

So, for example, if you right-clicked the file /path/to/local/root/directory/myfile.txt and picked “Upload to app servers,” the file would automagically be copied to appserver.com as /path/to/remote/root/directory/myfile.txt.

This script has proved very helpful for uploading post-deploy fixes to multiple app servers without redeploying the whole lot of them.

Bonanzle: A Year One Retrospective, Part I

I just realized that Bonanzle’s one year anniversary (since beta launch) passed by a month ago without me taking a minute to chronicle the madness of it all. But I’ve never been much of one for hazy-eyed generalities (e.g., “starting a business is hard work,” “business plans are useless,” etc.), which is most likely what this blog would become if written in free form.

I much prefer hard facts and examples, the sort of thing that gets drawn out through the efforts of a skilled interviewer. Unfortunately, since Brian Williams isn’t returning my calls, we’re going to have to settle for a purely fictional interviewer, who I will name Patsy Brown, on account of the fact that she’s a brownnosing patsy who tosses me softballs to make me sound as smart as possible. Thanks a bunch, Patsy. Mind if I call you Pat? Oh, you’d love it if I called you that? Well, great! Let’s get started.

Pat: So, Bill, it has been an incredible year one for Bonanzle. It’s really something how, with no investment money, this creation has garnered more traction than sites with far more money and far more experience. Even more improbably, it’s been done with a team of two full timers and an ever-changing legion of contractors when comparable sites field teams of 10+ employees. How has this first year compared to your expectations?

Bill: Well thank you, Pat. That’s really nice of you to say. It has been pretty wild, hasn’t it? To be honest, there were very few moments in the early evolution of Bonanzle where I thought I knew what would happen next in terms of traffic, sales, or revenues (early interview tangent: I firmly believe the founders of Twitter had no idea what they were building at first either. I still remember in 2007 when they were advertising it as a means to let your friends know what you were doing on a minute-by-minute basis, which was a pretty dumb premise for a business, if you asked me or the other 3 people that were using it at the time. I am already dreading the inevitable declarations of genius that revisionist historians will bestow upon those founders in the months to come. Anyway, we now return to your regularly scheduled interview. And yes, I have no business interrupting this interview before the first paragraph has been finished). I mean, we spent more than a year building Bonanzle as a site that would compete with Craigslist, and it wasn’t until a couple months after we launched (in beta) that we realized that there was simply no market for a better Craigslist.

Once we figured that out and re-geared ourselves as a utopia for small sellers, the first few months were pretty unreal — growing 10-15% larger with every passing week. That was incredibly tough to manage, because at the time we were increasing the load on our servers by a factor of about 2x-3x monthly, and I was still only learning how to program web sites. If I had set expectations, those early months would have certainly blown them out of the water.

Pat: Can you give us a story to get a sense of just how hectic those early months were?

Bill: Sure, Pat. One memorable Bonanzle moment for me was a week back in October, when I was housesitting for some friends. This was at a time when our traffic was starting to push into the hundreds of thousands of unique users, and our servers were in what I think could best be termed “perpetual meltdown mode.” I remember one particular night when I was up until 4 AM, fiendishly working on some improvements to our search so that it wouldn’t crash. The Olympics were on TV in the room, and I felt like I had an intimate bond with the athletes — I mean, it takes a certain type of insanity to work out for thousands of hours to become the best athlete in the world, and it takes a similar type of insanity to lock oneself up in a room for 12-14 hours per day and try to scale a site up to hundreds of thousands of visitors with no prior experience. Generally, a team of many experienced programmers is going to be required for that amount of traffic, but, being on the brink of going broke, that wasn’t an option. So I pried my eyelids open until I finished the search upgrades, and wearily made my way back to bed to get up early and repeat.

It turned out that “getting up early” in this case meant about three hours after I went to sleep, when I received a then-common automated phone call from our server monitoring center that Bonanzle was down. I dragged myself out of bed, slogged down to the hot computer room, and spent another couple hours figuring out what had gone wrong. When it was fixed, I turned the Olympics back on and basked in our shared absurdity at 8 AM that morning.

Pat: What were some of the key lessons you learned during those months?

Bill: Well, other than technical lessons, the most salient lesson was that, when you find a way to solve a legitimate pain, amazing things can happen. In our case, by building a marketplace as easy as Craigslist with a featureset that rivaled eBay’s, we had what seemed like every seller online telling us how relieved & empowered they felt to have discovered us. Then they told their friends via blogs and forums. It was heady stuff. Our item count rocketed in a way that we were told had never been seen amongst online marketplaces not named “eBay,” as we shot from 0 listings to one million within our first few months.

Pat: But with great success comes great responsibility, right? Tell me about how you dealt with managing the community you suddenly found at your (virtual) doorstep.

Bill: That was a real challenge, but something that was really important to us to get right. I have frequently said that, being a Northwest company, one of my foremost goals is for us to live up to the legacy of the great customer-centric companies that have come from this region, like Costco, Amazon, and Nordstrom. Customer service is a company’s best opportunity to get to know its users and develop meaningful trust. As we started to appreciate the amount of time and effort required to keep thousands of users satisfied, we knew it was going to become a full-time job; that’s when Mark was anointed the full-time Community Overseer.

Pat: Tell me about your relationship with Mark and what he has been to Bonanzle.

Bill: The pairing of Mark and me is the sort of thing you read about in “How to Build a Business” books. Our personalities are in many ways diametrically opposed, but in a perfectly compatible way for a business that requires multiple unrelated skillsets. Mark is patient, I am impatient. Mark is happy dealing with people for hours on end, I am happy dealing with computers for hours on end. Mark is content to be on an amazing ride, I am never, ever satisfied with what we have and am constantly looking forward.

Fortunately for us, there are also a few qualities that we have in common. We are both OK with constant chaos around us, which is assured in any startup (though I’d say that goes doubly for any community-driven startup). We both enjoy what we do, so we don’t mind 10-12 hour days 6 days per week (right, Mark? :)). And I think we are both generally pretty good at putting ourselves in other people’s shoes, painful though that sometimes can be when we can’t get something exactly the way we want it due to resource constraints.

Pat: I hear “community” as a common theme amongst your answers. Tell me about the makeup of the Bonanzle community and what their role has been in the building of Bonanzle.

Bill: Well, I think it’s pretty obvious within a click or two of visiting the site that Bonanzle is the community. Almost all of the key features that differentiate us from other marketplaces revolve around letting the many talents of our community shine through. It starts from the home page, which is made up of catchy groups of items curated by our numerous users with an eye for art/uniformity. Real-time interaction, another Bonanzle cornerstone, relies on our sellers’ communication talents. And the traffic growth of the site has been largely driven by the efforts of our sellers to find innovative ways to get people engaged with Bonanzle: from writing to editors about us (it was actually the efforts of a single user that got us a feature story in BusinessWeek), to organizing numerous on-site sales (Christmas in July, etc.) that drive buyers from far and wide.

I think that, from a management standpoint, it’s our responsibility to strive to keep out of the way of our sellers. In so doing, the embarrassment of riches we have in member talent can continue to build Bonanzle in ways that we’d have never even considered.

Pat: If you’re just joining us, I’m talking with Bill Harding, Founder of Bonanzle.com, about the experience of his first year of running Bonanzle. Please stay tuned — when we return, I’m going to talk to Bill about what he sees in today’s Bonanzle, and what he predicts for the future of Bonanzle. With any luck, I’ll even get him to answer the eternal question of which is tastier between pizza, nachos, and pie. But for now, a word from our sponsor:


Determine location of ruby gems

Not hard:

gem environment

You can even get free bonus information about where ruby and rubygems are… but only if you order now. For a limited time, we’ll even throw in the command that got us this gem of an answer: “gem help commands”

Copy aptitude packages between Linux (Ubuntu) Computers/Systems/Installations

Surprising that this isn’t better Google-documented. Here’s how I do it.

Make a list of your existing packages in a text file (run this from the machine with the packages already installed):

sudo dpkg --get-selections | awk '$2 == "install" {print $1}' > installedpackages

Copy the file to your new system. Then run:

cat installedpackages | xargs sudo aptitude install -y

It’ll run through each line of your package list and run “sudo aptitude install -y” on it.