Oh Woe are We: Desecrating Rails for a Brighter Tomorrow

Rails programmer, how long do you think these glory days will last?

I first started thinking about this while attending Railsconf and talking to programmers who were creating Rails projects within gorillas such as AOL, LinkedIn, Monster, and of course Yellowpages (my apologies to AOL, LinkedIn, and Monster if any of those projects were supposed to be secret). After watching DHH’s keynote, it sank in further. Now I hear that EngineYard just got $15m more funding, and the picture is crystal clear: the hobgoblins of years past that have kept Rails away from corporate use are melting away faster than the polar ice caps.

This is a bad thing for those of us who like the competitive advantage Rails affords. With the advantage of RoR (and two tablespoons of Firebug), right now is one of those rare times in which a motivated programmer or two can compete technologically with legacy web companies whose payrolls are 100x greater.

If Bonanzle could dig up an ounce of business savvy (still TBD) to go with our technology, I like to think we could be one of the seminal examples of this. Look down the list of our technological advantages vs the gorillas (real time offer making, integration with all major IM clients, image cropper, price guessers, pretty urls, 0-page-load item creation, real time messaging built into site, item importers for eBay/Craigslist, etc. etc.), and it’s clear that we’re living in an age where the possibilities for the underdog are as rife as ever.

But how long will that be the case? With a test-driven development environment like Rails, where metaprogramming and a solid MVC architecture can cut 125,000 lines of code to 20,000 lines (like Yellowpages did in moving to Rails), suddenly gorillas are nimbler, and order is restored to the world. Eventually, the gorillas might just figure this out. The horror.

In light of these facts, I have taken to pumping up the FUD on Rails whenever possible. I like my unfair Rails advantage, and would miss it if it were gone. In the interests of helping the reader perpetuate Rails FUD amongst their own social circle, I have created the following collection of sound bites to help you desecrate Rails:

  1. Rails Doesn’t Scale. C’mon, you know it’s true. I even preached this one myself. Tell them that “Ruby is the slowest programming language ever.” I mean, it’s slower than an excellent language like Python by a factor of 3x. It’s slower than C by about 70x! I can only imagine how much slower it is than x86 assembly. Visualize how screaming fast your assembly-driven site could be. If your listener happens to mention cloud computing, I recommend you retort with the thousands of dollars they’d have to spend on a server cloud for a heavily trafficked site. If they point out that those costs are a fraction of the development costs they’d incur otherwise, move on to point number two.
  2. No Big Sites Use Rails. Yes… yes… yes! In Rails’ history, excluding a couple one shot wonders like YellowPages and Twitter (don’t forget: Twitter is rumored to be leaving Rails), no “big” web site is running on Rails. Point out that both Facebook and Flickr run on PHP. Or better still, perhaps your listener would like to literally be the next MySpace and build on .NET? Hopefully they do not retort that almost none of the biggest sites were built when Rails existed, or that most new sites are built on RoR, or that any kind of bad Rails juju they encountered could be monkey patched since it’s all open source. It should suffice to say that if no big site has been built on Rails by now, no big site ever will be. Still not dissuaded?
  3. Google uses Python. Yes, they also use Ruby. But they use Python more. This one ought to work well when convincing a take-over-the-world CEO.
  4. Other Languages have More Libraries. This works sort of like number two… point out how many libraries PHP already has. Or .Net. Or Java, or C++. You can manipulate the hell out of an image with any of these choices. Or if you decided at the last minute that you wanted to make your web site into a PS3 game, you’d be glad you chose C++ and its PS3 libraries. Only the savviest enterprise-y tech guy will be able to discern between quantity and quality: think how many underdeveloped, crap libraries they’ve become accustomed to while dealing with Java.
  5. Open Source Software is Unsupported Software or Ruby on Rails Doesn’t Deploy on Windows or URLs Don’t Matter. Alright, so they still weren’t convinced after four points? Terrific. You’re going to need to fire some scattershot. These arguments should close the case for the last few Windows types, and for others, hopefully they tilt things in favor of Anything But RoR. But for the last few hard-to-budge listeners, you’ll need to pull out your ace in the hole…
  6. The Creator is an Ass. Have you ever listened to DHH talk? What an elitist snob. Thinks he is God’s gift to programming. The way he repeatedly cut off this nice Python fellow was disrespectful. Any framework created by this guy must be an elitist, disrespectful framework to use.

And that should about do it… double bonus points for anyone who can convince eBay to give ASP .NET a whirl!

(Note to busy/dense readers: #1-6 employ heavy use of irony)

Starling Update from the Horse’s Mouth

I was able to chat with Blaine at Railsconf for a while about the status of Starling. Super nice fellow, and he gave very illuminating answers to my main questions about Starling. Here’s what I learned:

get() removes entries from the queue by design. Thus, Starling isn’t ideally suited for tasks where one would query the task for status. However, Blaine mentioned a clever trick he’s used in a few cases to be able to figure out when a Starling task is done: after adding a Starling task to the queue, the caller can check the queue for a given key that the Starling task will add to the queue when it’s done. For example, say you enqueue a task to grab a user’s contacts from Gmail, and your rake task (or however you’re processing the message) will set a key called “user100_contacts_grabbed”. You can then have your caller periodically look for whether that key is in the queue. Once it is, you know the job is done.
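Here’s a minimal sketch of that pattern, using the same memcache-client setup shown later in this post (the queue names and user id here are made up for illustration, not anything Starling requires):

require 'memcache'
starling = MemCache.new('192.168.1.1:22122')

# Caller: enqueue the work, then poll for the completion key.
starling.set('grab_contacts', 100)                         # e.g., "grab contacts for user 100"
sleep(5) until starling.get('user100_contacts_grabbed')    # nil until the worker sets it

# Worker (your rake task, or however you process the message):
user_id = starling.get('grab_contacts')                    # pops the job off the queue
# ... grab the user's Gmail contacts here ...
starling.set("user#{user_id}_contacts_grabbed", true)      # signals the caller that the job is done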

Also “by design” is that queue entries are not immediately deleted from disk, even after they’ve been removed from the queue via get(). The idea here is that to actually find the key you got and remove it from disk would be expensive. The way Starling is set up, those entries on disk are just log files that get appended to whenever you get or set from a queue. If Starling gets shut down for whatever reason, upon restarting, it will read through the entire log file to determine which keys have yet to be processed, and will start with those keys still in your queue. Pretty clever, and very fast. Also, the log files are automatically trimmed after they reach a certain size, so you don’t have to worry about them growing to infinity (I don’t remember what he said that size was, but it seemed fine).

Blaine mentioned that a new release of Starling would probably be out soon, and despite the relative lack of noise about it lately, it will remain an active product. Documentation for it remains an important goal. This is all promising news for a product with such great potential.

What else did he say… that Starling is all about speed, and the average get/set is something like a millisecond or less… that if your task fails and you want to try it again, you need to remember to requeue it… I feel like I might have learned some other things as well; I’ll update this post later if/when I remember them. As always, feel free to post any specific questions here.
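On the requeue point, here’s a minimal sketch of what a processing loop that requeues on failure might look like (the queue name and process_job method are made up, and it assumes get() returns nil when the queue is empty):

require 'memcache'
starling = MemCache.new('192.168.1.1:22122')

loop do
  job = starling.get('my_queue')      # pops the next job, or nil if the queue is empty
  if job.nil?
    sleep(1)                          # nothing to do yet
    next
  end

  begin
    process_job(job)                  # your own job-handling code
  rescue => e
    # get() already removed the job from the queue, so put it back to retry later
    starling.set('my_queue', job)
  end
end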

Guide to Setup Rails with MySQL SSL

UPDATE 5/24/08: There was a recent APB security bulletin for all those running Debian-based OSes (including Debian) with OpenSSL 0.9.8c (released 2006) onward. You can read about it here. Long story short: if you’re running a flavor of Debian, you should run “sudo apt-get update” and “sudo apt-get install openssl” before you start these instructions, to ensure that you’re using the patched version of SSL. We now resume our regularly scheduled programming…

If you have a database in one place and some Rails stuff in another place (be it your Rails app, or an asynchronous module that interacts with the DB), and if you’re running on a hosted server (i.e., you can’t just set up a hardware firewall for your entire server network), chances are you have thought, or should be thinking, about setting up your database to accept SSL connections. This ensures that malicious third parties can’t read the network packets being transmitted between your remote server and your database. Here are some of the sites and notes that I used to get us set up to do this:

For detailed but not too-detailed instructions on generating the SSL keys with MySQL, the MySQL documentation on SSL is great. This documentation describes not only how to generate your SSL keys, but also how to tell your DB who to accept connections from, and whether SSL is required when interacting with those remote IPs.

Important Note 1: to get the shell script on the MySQL page working, you’ll need to change the directory referenced in the script so that it points at your openssl.cnf file, which lives at /etc/ssl/openssl.cnf (if you’re running Gutsy).

Important Note 2: The “common name” field in your client and server keys must be different, or your key generation will fail. You’ll know you got it wrong if you run the shell script and it doesn’t ask you if you want to sign the certificate.

A couple other points not specifically called out in the MySQL documentation:

1) To tell MySQL to load with your server certificates, “sudo vi /etc/mysql/my.cnf”. There are a couple lines near the bottom of the file (under the mysqld section) that you can uncomment and change to point at the location where your certificates reside.
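For reference, the relevant my.cnf lines end up looking something like this once uncommented (the /etc/mysql/certs path here is just an example; point these wherever you put your server certificates):

[mysqld]
ssl-ca=/etc/mysql/certs/ca-cert.pem
ssl-cert=/etc/mysql/certs/server-cert.pem
ssl-key=/etc/mysql/certs/server-key.pem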

2) To stop and restart your MySQL server on Ubuntu (necessary so the my.cnf file is reloaded), run “/etc/init.d/mysql stop” and then “/etc/init.d/mysql start”. Yes, it won’t work unless you include the full path (at least, it didn’t for me, nor for the other folks whose results I found on Google).

After you think your SSL certificates are legit, you can double check your work by following the instructions here. The link has the commands for verifying that your server and client certificates are setup correctly.

Next, once you have the server certificates set up, have granted access to your remote box (as specified in the MySQL instructions), and have copied the client certificates to your remote box, I recommend testing your MySQL connection from the command line on the remote box to verify that everything is kosher. Something like this should do the trick:

mysql -u [username] -p[password] -h [mysql box address] --ssl-capath=[path to client certificates] --ssl-ca=[path to ca-cert]/ca-cert.pem --ssl-cert=[path to client cert]/client-cert.pem --ssl-key=[path to client-key.pem]/client-key.pem

This assumes that you’ve retained the default filenames (ca-cert.pem, client-cert.pem, client-key.pem) mentioned in the MySQL documentation.

If all is well, you should get connected to your MySQL server. (Once connected, running “SHOW STATUS LIKE 'Ssl_cipher';” is a quick way to confirm the connection is actually encrypted: the Value column should show a cipher rather than being blank.)

Next up is to set up your database.yml so that it can do this stuff for you. I was somewhat surprised to find that database.yml actually already has (almost completely undocumented) options for SSL security. Do a find-all in the Rails source for “:sslkey” and you’ll find all of the options that you can pass to database.yml. Make the database.yml options point to the correct addresses/filenames, and you have yourself a more secure connection between your remote boxes and your DB.

If you don’t feel like doing the find all, here are the relevant SSL options to put in your database.yml:

sslkey: /path/to/client-key.pem
sslcert: /path/to/client-cert.pem
sslca: /path/to/ca-cert.pem
sslcapath: /path/to/certificates
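Put together, a production entry in database.yml might look something like the following (the adapter, host, and credentials are placeholders for your own values; the ssl* keys are the only new part):

production:
  adapter: mysql
  database: your_app_production
  host: your.db.server.address
  username: your_db_user
  password: your_db_password
  sslkey: /path/to/client-key.pem
  sslcert: /path/to/client-cert.pem
  sslca: /path/to/ca-cert.pem
  sslcapath: /path/to/certificates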

Rails Starling Setup, Options, and Usage Documentation

After spending too many frustrating hours and days fighting BackgrounDRb to connect to multiple servers, I decided to try option #2, the Starling/Workling combo, as of yesterday evening. So far, I’ve been really pleased with most of the setup. I had it working preliminarily on my local machine in about an hour (compare that to about two full days for one of our developers to get BackgrounDRb set up). However, if there is one thing that’s made Starling difficult so far, it’s the lack of useful results I get when I Google anything I can think to Google about Starling documentation. Thus, I am going to try to keep updating this blog as I get Starling tweaked and working, with all of the information I can gather, to give future Starlingers an easier time of it.

Installation

On your Starling server(s), run “sudo gem install starling” to install it.

On your clients, you talk to Starling via the memcached protocol, so you’ll need the Ruby memcached client installed. “sudo gem install memcache-client” should do the trick.

Basic Usage Example

I found this morsel in the readme.txt for the Starling gem:

# Start the Starling server as a daemonized process:
# Note by Bill -- I believe that starling will only run as root currently
sudo starling -h 192.168.1.1 -d

# Put messages onto a queue:
require 'memcache'
starling = MemCache.new('192.168.1.1:22122')
starling.set('my_queue', 12345)

# Get messages from the queue:
require 'memcache'
starling = MemCache.new('192.168.1.1:22122')
loop { puts starling.get('my_queue') }

The first line starts a starling that listens on 192.168.1.1, i.e., on your local network. The next lines actually connect to that Starling and test it. They can be run from within irb to verify that your Starling setup is working as expected.

Other options

In general, you can look at the memcache-client methods to figure out what Starling is capable of (well documented here). The key difference between Starling and Memcached, from my testing, is that Starling’s get() returns the value and removes it from the queue. Memcached’s get() just returns the value, which stays in the cache until you call delete().
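A quick illustration of that difference, using the same connection setup as the examples above (the queue name is arbitrary):

starling = MemCache.new('192.168.1.1:22122')
starling.set('my_queue', 'job 1')
starling.get('my_queue')   # => "job 1", and the entry is removed from the queue
starling.get('my_queue')   # => nil, since the first get() already popped it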

Otherwise, Starling is pretty similar to Memcached. For example (from Starling tests):

starling = MemCache.new('192.168.1.1:22122')
now = Time.now.to_i
starling.set('test_set_with_expiry', 5 + 2, now + 2) # this entry expires two seconds from now
starling.set('test_set_with_expiry', 5)              # this entry has no expiry
sleep(now + 2 - Time.now.to_f)
starling.get('test_set_with_expiry') # returns 5 -- the expired entry is skipped

Like memcached, if you want to get stats on your Starling, such as number of items in the queue, bytes used, starling version, log size, cache hits/misses, total connections, stuff that’s currently cached, and more, you can run stats() on your starling connection object, like so:

starling = MemCache.new('192.168.1.1:22122')
starling.stats # returns hash of statistics on the starling connection

Bugs and Caveats

When you’re setting up Starling on a remote server, you’ll want to remember to change the address you bind to when you start Starling, e.g., “sudo starling -h my_ip_address -d”. If you’re not root, you’ll probably need the sudo, since Starling binds to network ports.

A couple more apparent bugs I’ve experienced with Starling (running gem version 0.9.3 on Gutsy):

  • Even if I specify a log file on the command line, no log file is generated
  • Though when you get() a message it is removed from the queue, it still exists in the /var/spool/starling directory, from what I can tell, indefinitely – Blaine explains this in my Starling update

If anyone who has been a longer-time Starling user has seen and gotten past these issues, I’d sure like to hear them.

Update

I was able to talk with Blaine at Railsconf about some of my specific questions about Starling, and about the future of Starling in general. Read about it here.

Ultimate Guide to Setup SSL on Rails and Apache 2 (with Ubuntu seasoning)

UPDATE 5/24/08: There was a recent APB security bulletin for all those running Debian-based OSes (including Debian) with OpenSSL 0.9.8c (released 2006) onward. You can read about it here. Long story short: if you’re running a flavor of Debian, you should run “sudo apt-get update” and “sudo apt-get install openssl” before you start these instructions, to ensure that you’re using the patched version of SSL. And now back to your regularly scheduled programming…

For something that a million billion web sites have to do, the state of documentation on how to get your Rails app running with SSL in a standard Apache configuration is… how shall we say it… ass. Even the previously lauded “Advanced Rails Recipes” only scratches the surface of what it really takes to get a Rails app running with SSL. This post will aim to change that with a (relatively) comprehensive, front-to-back guide to what I learned in setting up our app with SSL over the last couple days. Maybe it isn’t really the “ultimate” guide, per se, but let’s say we’re speaking relatively here.

Step 1: Understand as little as possible about what SSL is and how it works.

SSL is the technology that’s working when your browser visits an https:// address. It is typically used by production websites for logging in, updating user data, and of course, submitting top secret data. The point of SSL is to encrypt sensitive user information so that it isn’t sent across the web in plain text form, where it could be read and used maliciously by evil pimpled pirates.

To setup an SSL connection, you need to create a key and certificate. You then need to get that certificate endorsed by a “certificate authority” (henceforth, a CArtel… more on the name in a minute). For testing purposes, that CArtel can be your website, but if you endorse your own certificate, visiting users will be asked if they want to trust your site as a certificate authority before proceeding to your https:// pages. So real production sites rely on the syndicate CArtels. The biggest CArtels include Verisign, Geotrust, Thawte, and GoDaddy. Generally speaking, the bigger the CArtel, the more they charge you for the same service. Like the Corleone family. In the process of setting up your SSL, you will quickly learn to despise all of the above CArtels for a variety of reasons, most obviously because they charge ridiculous amounts ($20-$500 yearly for the most basic certificate, $1500+ for an EV one) for a trivial service, and despite that exorbitant cost, all three that I tried were broken to varying degrees and a pain in the ass.

Where was I? Oh yes, SSL. An ultimate guide to it.

Step 2: Generating your SSL key and Certificate Signing Request

Before you can enjoy engaging with your CArtel of choice, you must generate an SSL key and a Certificate Signing Request (CSR). The SSL key is used 1) to generate your CSR and 2) by Apache, to pair with the certificate given to you by your CArtel. The CSR itself is just a plain text file containing a big encoded string that you’ll submit to the CArtel when you’re ready.

The process of creating the key and subsequent CSR varies per operating system. My OS is Gutsy Ubuntu, so if you’re running that (or later versions), you’re in luck, cuz I’ve got some explicit instructions ready for you without leaving the comfort of this blog. If you’re not, these instructions may still work with other versions of Linux, but I don’t know what you do in Windows (seriously, Windows? We’re talking about a Rails stack, aren’t we?).

1. Generate a key
In any directory, run “sudo openssl genrsa -des3 -out your_website_name.key 1024” or “sudo openssl genrsa -out your_website_name.key 1024”

The difference between the first and second command: the first one generates your key in such a way that it will require a password to be entered every time it is read (= every time Apache starts). It is possible to run a program to output this passphrase (Google “SSLPassPhraseDialog”), so you can still start Apache automatically on system reboot, but various websites I read suggested that using one of these programs is generally no more secure than just generating a key without a passphrase (which is the second option). Which you choose depends on whether you can have someone around your server whenever it needs rebooting, and how paranoid you are that someone will be able to log onto your box and steal your key (meaning that their porn site would now be indistinguishable from your web site, at least from the standpoint of the CArtel).

If you’re running a version of Ubuntu that doesn’t already have SSL installed, you should just be able to install it with “sudo apt-get install openssl”

2. Create a CSR
In the directory that you created your key, run: openssl req -new -key your_website_name.key -out your_website_name.csr

And that’s that.

Step 3: Pick a CArtel, any CArtel! They got stuff you need, and they know it.

Here’s where you get to decide if you want to pay $500 for a mediocre service (Verisign), $20 for a pitiful one (GoDaddy), or something in between for a broken one (GeoTrust, $250). I tried all three of those. My brief summary is thus: Verisign is the biggest name in CArtels, and they charge like it. You can get the exact same certificate VeriSign charges $500 for at GoDaddy for $20. I started setting up a trial certificate at Verisign, but gave up when I realized that they were going to remind every user on my site that our certificate was a trial one until we forked over the $500. I suppose that maybe $500 is worth it for some sites, since you get to put their flashy Verisign logo at the bottom of your SSL pages, but I couldn’t bring myself to subsidize the pork. It feels too much like the Title Insurance company that charges me $1000 when I buy a house for them to run a query in their database.

I actually signed up for a GoDaddy certificate. Their interface is esoteric, clearly designed by an unfed engineer who needed to get home in time for his WoW mission. But, after signing up, then waiting about a day while they verified something-or-other, the GoDaddy certificate worked. It did require the small extra step of downloading an “intermediate certificate” in addition to my signed certificate, but for the price, it was alright.

GeoTrust simply annoys me. I’ve gone through their signup process twice, and been rejected by their phone verification system as many times. Luckily, that verification system seems to be “just for fun,” because, despite failing that mandatory step, they still mailed me a certificate. Their UI is about as awful as GoDaddy’s, and their site appears to be unmanned (clicking on the “login with certificate” button gives a fatal certificate error). But they work, and their logo is fairly well recognized.

I didn’t spend too much time on Thawte or Comodo, but I can’t imagine they’d be much worse than the above.

The ultimate reward for powering through the CArtel process is that they mail you a long string (your signed certificate) that you paste into a .crt file. In our case, I pasted the certificate given to me by our CArtel into a file I created and named “bonanzle.crt”.

Step 4: Copy the key and CSR files + setup your Apache config file (aka: the hard part)

In Gutsy with Apache2, the recommendation given by this helpful page is to copy your files as follows:

sudo cp your_server_name.crt /etc/ssl/certs

sudo cp your_server_name.key /etc/ssl/private

where your_server_name.crt is the file you created in the last step, and your_server_name.key is the file you created in step 2.1.

Next, you’ll need to make sure you have the Apache “ssl” mod enabled (for SSL, duh) and the “headers” mod enabled (for Rails to recognize the SSL). In Apache2, this is accomplished by running “sudo a2enmod ssl” and “sudo a2enmod headers”. I did this from the Apache directory (/etc/apache2), but I’m not sure if that mattered.

Finally, you set up your Apache config file. I think this could probably be put equally well in either /etc/apache2/apache2.conf, /etc/apache2/httpd.conf, or /etc/apache2/sites-available/default. All of those files are read on Apache startup. I chose the last of those, because it was recommended by the Ubuntu Gutsy SSL setup page (probably just for separation’s sake). Here is what I added to my /etc/apache2/sites-available/default file:

<VirtualHost *:443>

# SSL requests should proxy just like normal ones... this is the same
# code I use in my "VirtualHost *:80" block to forward http requests
# to my mongrel cluster... If you have different proxying code,
# you'd paste that here.

RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
RewriteCond %{SCRIPT_FILENAME} !maintenance.html
RewriteRule ^.*$ /system/maintenance.html [L]
RewriteRule ^/$ /index.html [QSA]
RewriteRule ^([^.]+)$ $1.html [QSA]


RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
RewriteRule ^/(.*)$ balancer://railsapp%{REQUEST_URI} [P,QSA,L]

# The actual SSL stuff: make sure the engine is on, enable some options that the Gutsy page said I should,
# and tell Apache where my key and certificate files are
SSLEngine on
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
SSLCertificateFile /etc/ssl/certs/bonanzle.crt
SSLCertificateKeyFile /etc/ssl/private/bonanzle.key

# Used by Rails. Mentioned in all the Rails SSL tutorials.
RequestHeader set X_FORWARDED_PROTO "https"
</VirtualHost>

Depending on your CArtel, you may also need to ensure that your ServerName and ServerAlias match the “common name” field that you specified. I know I had to do this. This was accomplished by adding the following lines inside my <VirtualHost *:80> block (I did it at the top, but that almost certainly doesn’t matter):

ServerName www.bonanzle.com
ServerAlias www.bonanzle.com

After that, save the file, reload Apache (e.g., “sudo /etc/init.d/apache2 reload”), and cross your fingers. If your Apache restarts successfully, you have completed the most difficult part of the process! If not, visit /var/log/apache2/error.log and get to Googling.

Step 5: Setup Rails with SSL (aka the easy part)

Here’s the part you can find in many existing Rails tutorials that copy/paste the README in the ssl_requirement plugin. Firstly, you install the ssl_requirement plugin (“ruby script/plugin install ssl_requirement”). Then, add near the top of your application controller the line “include SslRequirement”. Then add the following block to your application controller, so your development server will continue working:

def ssl_required?
  return false if local_request? || RAILS_ENV == 'test'
  super
end

Then, any action that should go through SSL just needs to have a line added in its controller, “ssl_required :action_name”. You can also do “ssl_allowed :action_name” if the action can be used either way.
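In practice, a controller ends up looking something like this (the controller and action names are just for illustration):

class AccountsController < ApplicationController
  ssl_required :new, :create   # these actions must be accessed over https
  ssl_allowed  :show           # this one works over either http or https

  def new
    # renders the signup form over SSL
  end
end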

You should probably also know…
Most CArtels will only validate one domain (validating two domains would mean an extra query for their database) for your X hundred dollars, so you need to make sure that all actions that will use SSL can only go through one address. Specifically, you need to make sure that any possible subdomains will map to one address if you don’t want to buy multiple certificates. You’re probably already doing this if you have a production site, because if you allowed stunts like “bonanzle.com” and “www.bonanzle.com”, then users’ sessions could also get messed up. However, if you haven’t yet set up a redirect from subdomainless URLs to www.* URLs, here’s the code I used to do it:

# Redirect all bonanzle.com/ requests to www.bonanzle.com
Options +FollowSymLinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^bonanzle\.com
RewriteRule ^(.*)$ http://www.bonanzle.com$1 [R=301,L]

Conclusion
Hopefully this ultimate guide will save you minutes or hours relative to Googling a bajillion sites to put all this data together. Leave a comment if you’ve gone through this and found it to work or not work. I’d also be curious if anyone else has experienced a CArtel that they would recommend (e.g., one where a UI-minded person or SSL novice could go through the signup process and not gasp in horror). Thanks for any feedback.

Quick Book Plug: Advanced Rails Recipes

By far the best book I’ve gotten to help with all the not-well-documented aspects of getting a site deployed. Beginner Rails help articles are a dime a dozen, but this book is chock full of example-laden recipes for deployment. You are a fool if you deploy Rails apps and you don’t own this book.

Rails Internet Explorer Integration Guide

After about nine months of blissful Firefox-only development, Bonanzle finally started down the long road to Internet Explorer compatibility a couple months ago. Though I’ve met few web developers who like the process of supporting IE, browser statistics show that more than half of users are still on some version of it (IE 6 has about 30%, IE 7 around 25%), so it’s something we have to deal with. Having just about wrapped up our backporting, I thought I’d share a few observations and tips on the process.

First of all, for background, I had never really touched web development of any sort until about 9 months ago. At the beginning of the backport, we had scarcely even opened IE to see how our site would fare in it. As you might guess, the answer was “not well.” Because Bonanzle is rife with rich Javascript, and we use CSS-based layouts, few of our 30-ish pages were IE-compatible at the start of our backport. Many of our most substantial pages could not even render in IE without spewing 10+ JS errors.

But with a couple tools and rules, the process of moving toward IE compatibility ended up becoming relatively straightforward, and even our most complex and nuanced functionality has now been coerced into IE compliance.

The most important lesson I would impart to aspiring web application developers: use a Javascript library. Like all young Rails sites, we started with Prototype. Once we were ready to take the training wheels off, we started using jQuery. Our backport revealed that only about half of our handwritten JS code worked in IE without modification. Some, but not all, of our Prototype worked. And almost all the jQuery did.

There are plenty of pages already out on the web dedicated to the comparison of jQuery to Prototype, but suffice to say for our purposes, my relationship with Javascript was an antagonistic one until I met jQuery. Now, I almost look forward to writing JS. Being able to batch select elements using pure CSS selectors, not needing to check for nulls when accessing selectors, and being able to concisely make complex behaviors happen with concatenated method calls are all big reasons. The rich plugin architecture is an even bigger reason. There seemed to be no task too large or small for a cross-browser jQuery plugin. Some of our heavily utilized plugins included the drag and drop ui-*.js, the jqModal plugin, and the jquery delegate plugin. We also worked with a contractor who custom-wrote some jQuery plugins for us that, like all jQuery I’ve encountered, “just worked” in IE (well, after they worked in Firefox, but that’s easy to make happen with Firebug).

If you do choose to go the jQuery route in writing your site, do yourself a favor and look into the JS QueueSpring plugin — it’s discussed in the previous blog, and is ideal for binding jQuery behaviors with your DOM elements in a clean and fast-loading way.

As far as CSS goes, there is no easy way to sum up the means by which to write CSS that is IE6/7 compatible. I think that for all but the most experienced web developers, it is an iterative process. What I can recommend are some tools to speed up your iterations. First of all, if you’re on Windows, you’ll need to be able to install the version of IE you don’t have (6 or 7). This is most easily done by downloading the IE virtual machines Microsoft provides on their site. These provide an out-of-the-box solution for running IE6 and IE7 side by side on your machine (not otherwise possible).

They also come bundled with some of the best tools available for figuring out what you’re looking at in IE: the Web Developer toolbar and the Script Debugger. The former is basically a wussy version of Firebug that allows you to mouse over elements and see their properties, but not modify those properties dynamically, the way Firebug allows. The latter is a fairly lame way to see what’s going on in your JS when IE encounters errors. For both IE6 and IE7, you’ll need to ensure that you allow Script Debugging, which is under Tools -> Internet Options -> Advanced.

If you’ve got a big project on your hands, you’ll probably find the Script Debugger to be too barren… from what I’ve seen, it doesn’t allow you to set breakpoints in an arbitrary file, it has no watch window, and you can’t edit code from within it. A better choice is to install Visual Studio and use that as your JS debugger. If you don’t have it, you can download a free, “text only” version of Visual Studio with Ruby in Steel. You can then uninstall the RiS if it’s not your cup of tea (though it should be), leaving Visual Studio installed.

That’s the basic framework of what we’ve used to get our site from IE crashfest to lovable, huggable puppy dog. Hopefully this will start the rest of you intrepid cross-browser souls off on your journey as well. May you be strong, and repeat after me… “only 12-20 months until IE6 is obsolete.”

Rails Plugin: Javascript Unqueue Spring, all Javascript output at page bottom

I’m releasing my second plugin, a simple library that lets you declare Javascript functions or executable code from anywhere within a template or partial file and have it all output together in a lump at the bottom of your page.

Why?

It offers the following advantages over simply using javascript_tag:

  • Most (all?) browsers pause rendering when they encounter inline JS, so having JS (especially complicated JS) that is scattered throughout your code means your page won’t be visible until said code runs. That’s no fun.
  • Race conditions. If you declare code inside a javascript_tag that refers to a DOM element that isn’t yet instantiated, you’re in for some trouble. Using Javascript queuing, your JS code is assured not to run until the document is ready (i.e., all page elements have been loaded)
  • Easier to find JS code in Firebug if it’s all in one place at the end of the page
  • Best practices. It’s commonly accepted that inline JS is bad bad mojo. Ask Google if you don’t believe me.

Usage

Pretty damn simple.

<% queue_js do
  <<-JAVASCRIPT_CODE
    // Run an alert to show we exist:
    alert("I go at the bottom of the page and run automatically!");

    // Do some other senseless thing
    $("a").click(function() { alert('An anchor was clicked!'); });
  JAVASCRIPT_CODE
end %>

Using this plugin, this would be output by default (i.e., without changing JS to run using Prototype) as follows:

<body>
... (all the HTML rendered during your render pass) ...
</body>
<script>
//<![CDATA[
jQuery(function(){
alert("I go at the bottom of the page and run automatically!");
$("a").click(function() { alert('An anchor was clicked!'); });
});
//]]>
</script>

You can open and reopen (i.e., call the queue_js method) as often as you like during your render call. All JS added during all calls will be queued until the render pass finishes.

If you change to the Prototype option, the JS will be wrapped in an equivalent wrapper that Prototype uses to run JS after the DOM has been fully loaded. For specifics, see js_queue_spring.rb in the source code (hint: it uses Event.observe(window, ‘load’)).

Options

If you want to just add code that isn’t going to be automatically run, you can use the :flat parameter like so:

<% queue_js(:flat) do
  <<-JAVASCRIPT_CODE
    function someFunc() { alert("Doing nothin until you tell me to!"); }
  JAVASCRIPT_CODE
end %>

You can also look in the /lib/js_queue_spring directory to specify whether you want to use jQuery (tested) or Prototype (untested) syntax to wrap your code that should run when the page has finished loading.

Limitations

Caching pages with queued JS is a problem. Since the JS is not output inline with the page, if you cache a partial with JS that is queued, that JS will not be re-rendered when the cached version of the partial is used. For partials that have cached content, you’re probably better off using javascript_tag.

Get it

From your base directory,

svn export http://jsqueuespring.googlecode.com/svn/trunk/ vendor/plugins/js_queue_spring

And that’s that. The plugin is configured by default to use jQuery’s “jQuery(function(){ })” to run the code after the document has loaded. If your site doesn’t use jQuery (well, it probably should, but), you can go into /lib/js_queue_spring and set “SPRING_WRAPPER_TYPE = :prototype”. Fair warning: I haven’t experimented much with using this with Prototype. If anyone uses this option, please report as to whether it works, and ideally, how you made it work.

A bit more info is available at the Google Code project. A bit more instruction is also available in the plugin’s readme.

Rails Update Test Fixtures Upon Migration

Ever get the nagging feeling that everyone knows how to do some obvious thing except for you? This is the feeling I’ve long had with updating my Rails test fixtures. All self-respecting Rails developers know that a strong test infrastructure is a key aspect of any Rails application. And test fixtures are generally seen as the key component that drives test cases. But no Google query I’ve figured out yet has shown me a good way to keep my fixtures updated as my database changes. I’m going to discuss our current working solution here.

What are fixtures? (aka: newb background information)

For newbs that might not have started their testing yet: test fixtures represent each table in your database as a single file (usually in YAML format) that contains specific, known records you can refer to in your tests.

For example, we have an items fixture that loads in a bunch of sample item records to which our tests refer. A single record in this items YAML file looks something like:

items_not_valid_not_committed_missing_price:
  shipping_price:
  price:
  title: "dummy item"
  quantity:
  shipping_id:
  id: "71"
  item_status_id:
  category_id:
  committed:
  description:
  seller_id: "3"
  image_id:

Fixtures are usually created initially by exporting the data from one of your real databases (development or production) by running rake db:extract_fixtures. This will create a fixture file for every table, with the records in every table labeled something like “item_001”, “item_002”, etc. As you can see in the example above, my tendency is to rename these records so that they are more semantic. This makes it easier to remember which fixture record is which when loading them in my tests, by having syntax like item = items('items_not_valid_not_committed_missing_price'). A lot easier to remember what that item represents later on than having it named item = items('item_001')… and then six months later asking “What was the item number 1 record again? Oh yeah! The not valid item, because it was missing a price! Of course.”

Sounds fine to me. So what’s the problem?

The problem is what happens when your database changes. Especially for an application in the midst of development, the database might change weekly. However, if you re-export your fixtures from the database, you’ll lose all of your custom-named fixture records. You’ll also lose any special cases you might have set up. In our case, we have records where we set stuff like “item_expires_at: <%= 1.day.from_now %>”, and that ain’t gonna fly if you re-export the database.

If you don’t re-export the database, though, then you are left with hundreds of records that are missing (or have extra) fields after your migration. What’s a developer to do?

The partial answer: The Rails Fixture Migration Plugin

This plugin, developed by Josh and Jake, is a good start. After installing it, you can run “rake db:fixtures:migrate” and automatically have the fields that were added or removed in your migrations correspondingly added or removed from your fixtures. But there are a number of caveats:

* The code breaks if it finds empty fixtures. This can be fixed by wrapping lines 14-16 in migrate_fixtures.rb with an if so they only try to load non-empty fixtures
* The code evaluates and replaces any inline Ruby in your fixtures. So the previously mentioned “item_expires_at: <%= 1.day.from_now %>” will become “item_expires_at: March 29, 2008” after migration. I’ve just worked around this by manually replacing those substitutions after running the migration. A bit more annoying is if you have code that does any loops in your fixtures. For example, we have a loop that creates 50 similar items in one of our fixtures, and the fixture migrator simply doesn’t understand what this code is doing, giving an exception when it tries to run it. For the time being, I just remove this loop before migration and re-add it after migration. A pain, surely, but less of a pain than adding new fields to the other 100 records in our items fixture.
* The fixture migration fails with some ambiguous error code if any of the fixtures it wants to migrate already exist. The fixture migration uses a schema_info.yml file in your /test/fixtures directory to keep track of the fixture migration. If you, for example, create a new scaffolding that creates both a migration file and a new fixture, the fixture migration tool will break when it gets to the migration to create the new table, because, it seems, the new table is already a fixture in your directory.

All in all, they are surmountable obstacles for us at this point, given the lack of other options.

The “better” answer: Uhh…

I’ll be curious to hear if other developers have a more elegant way to deal with the problem of writing tests that stay relevant as your database changes. One school of thought is that one could just write methods that create the records you need programmatically, avoiding fixtures altogether. This has crossed my mind, but fixtures are nice since they come pre-generated, and I think they are generally easier to read through than a hash that creates a new record. A bigger benefit of fixtures is that they are automatically placed in the test database, so you get all the interconnections between your tables loaded at once. It seems to me like it would be quite a headache to create an entire database of data programmatically.
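For the curious, the programmatic approach looks something like the sketch below — the helper name and Item attributes here are made up for illustration, not anything our app actually uses:

# test/test_helper.rb (or wherever your shared test helpers live)
def new_test_item(overrides = {})
  defaults = {
    :title     => 'dummy item',
    :price     => 10,
    :seller_id => 3
  }
  Item.new(defaults.merge(overrides))
end

# In a test:
item = new_test_item(:price => nil)   # e.g., an unsaved item that's missing its price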

So, slightly painful fixture migrations it is, so far.

You can get your own copy of the slightly painful Rails fixture migration plugin here.

Rails Optimize Google Maps Load Times

Ain’t it grand when you get to something on your list that has been getting put off for months, only to discover that somebody did it for you a few days ago?

Such was the case with us and our Google Maps. We had long intended to cache Google Maps (accessed through the YM4R plugin) on account of the 1-2 seconds per page load that the maps cost (!). Most of that time is spent running JS — both from Google (that grabs the map tiles and sets up zooming, scrolling, etc.) and from YM4R (don’t know what that JS is doing, but it’s costly).

Anyway, if you are putting maps on your page, and you don’t need the user to be able to move the map around, don’t be a fool: use static Google Maps and take your load time from 2 seconds to .02 seconds.

And being that we are in the midst of a Rails revolution, it should surprise no one that within 48 hours of Google unveiling the static map service (in late February), someone (credit: John Wulff) had already written a gem to use it in Ruby. Unfortunately, gems suck, especially gems like this one with multiple gem dependencies. But there is no reason you can’t just get the functionality as a Rails plugin. Here is the file. All you have to do is put it in your /vendor/plugins directory under something like vendor/plugins/static_gmaps/lib and you’re golden. Usage for the static map is like so:

@map = StaticGmaps::Map.new(:center   => [ @lat, @lng ],
                            :zoom     => 10,           # gmaps zoom level
                            :size     => [ 120, 75 ],  # pixel width, height
                            :map_type => :mobile,      # :mobile or :roadmap; :mobile is a bit more distinct for small maps
                            :key      => YOUR_GMAPS_API_KEY)

Then, you put it in an image tag:

image_tag(@map.url)

Now, I do say, that is an easy way to cut 2 seconds off each page load.

Update: Almost forgot to mention one painful lesson I learned when implementing the static maps, which is that, unlike with the YM4R maps, Google does check your HTTP_REFERER when using static maps. So, to all you localhosts: don’t forget to go to Google Maps and grab an API key that works for the IP address in your HTTP_REFERER string. If your referer string doesn’t match your API key all you’ll get is a blank image and no explanation.