For my own reference, this is a piece o' cake:
http://ubuntuforums.org/showthread.php?t=261366

Somebody has to do something, and it's just incredibly pathetic that it has to be us.
Ugh, just spent an hour traveling from forum to forum trying to figure out why I couldn’t CTRL-C in Rubymine and CTRL-SHIFT-V into terminal. As many forum posters were eager to point out, it is possible to use CTRL-SHIFT-INSERT or middle click to paste into terminal, just not CTRL-SHIFT-V. Unfortunately, those workarounds were not OK since my insert key is in the arctic circle of my keyboard, and I don’t want to touch the mouse.
Luckily, the folks at Rubymine helped me figure out the answer where Google couldn’t.
The problem is that when Rubymine runs via OpenJDK, the clipboard it uses is not compatible with the clipboard used in terminal. The solution was to run Rubymine with Oracle Java instead. This is a matter of first installing Oracle Java (I’ll let you Google that one; you have to add a repository) and then adding the following lines to the beginning of your Rubymine startup script:
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export JDK_HOME=$JAVA_HOME
export RUBYMINE_JDK=$JAVA_HOME
After that you should be golden. In my hour of Googling I read that many other IDEs (Netbeans in particular) seem to be subject to the same problem with CTRL-SHIFT-V not working. I’d reckon that if these users were to change their application to use Oracle Java it would probably resolve the problem in other IDEs as well.
I’m all for Mint Linux making some bucks via their Chrome and Firefox searches, but not if it comes at the expense of basic usability. <quickie rant> If I were the Mint maintainers, I’d take a long look at whether it is desirable (let alone essential) to hijack my CTRL-K functionality and replace standard Google results with their poorly formatted, functionality-impaired substitute.</quickie rant>
Anyhow, if you are here, you’re probably trying to figure out how to remove the Mint branded search from Firefox and/or Chrome. And I’m here to tell you how.
There are probably an assortment of ways to accomplish this. I chose to Google “Chrome deb package” which led me to Google’s official distributions of Chrome, which can be found here. After following Google’s instructions to install my Chrome package, all was well (though that meant that I was running “Chrome” rather than “Chromium.” Whatevskis.)
Other than the annoying search stuff, so far Mint Linux seems to be an easy-to-setup iteration on the developer utopia that Ubuntu was built as, before it decided to go the way of the mandatory Unity.
Short and sweet. You want to list local variables in ruby-debug? Try
info locals
You can also get ruby-debug to list your stack, instance variables, and much more. Type plain old “info” into the debugger to see a full list of what ruby-debug is willing to reveal to you.
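If you just want a quick peek at your locals without firing up the debugger, plain Ruby offers a rough equivalent via Kernel#local_variables (a toy sketch; the method and variable names here are made up for illustration):

```ruby
def checkout(cart_total)
  tax = cart_total * 0.095
  # Kernel#local_variables returns the names of the locals in the current
  # scope, which is roughly what "info locals" shows inside ruby-debug
  # (minus the values).
  local_variables
end

p checkout(100)  # => [:cart_total, :tax]
```

Of course, the debugger also shows each variable’s value, which is usually the point.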
Anyone who has followed my blogs over the last couple years knows that I’m a very big fan of the like/dislike list. But I generally try to exclude products from my lists since they don’t have that “essence of life” quality that I’ve strived for in my lists.
But products are important, too. So here you have it: a like/dislike list dedicated to the products I use or have used. I’ll actually split this particular list into five levels of like, because I can be more precise when it comes to products.
Love
Like
Deeply Divided
Dislike
Despise
Not to kick a company when it’s (literally) down, but today’s 12+ hour EC2 outage has finally driven me to write a blog post I’ve been holding in my head for several months now: comparing a few of the major players in today’s Rails hosting ecosystem. In this post, I’ll compare and contrast EC2, Heroku, and Blue Box Group. I’ve chosen these three not only because of their popularity, but also because I believe each has a value proposition distinct from the other two, which makes each ideally suited to different types of customers. In the interest of full disclosure: we are a current Blue Box Group customer, but we have spent a great deal of time looking at our choices, and I think that all have their advantages in specific situations. The facts and opinions below are the result of weighing data gleaned from Googling, popular opinion, and three years running a Rails site that has grown to serve more than 2m uniques per month.
Let’s start by establishing a pricing context. We’ll use a dedicated (not-shared resource/virtual) 8 GB, 8 core server with 1TB bandwidth as our baseline, since it is available across all three services (with some caveats, mentioned below).
| | Heroku | EC2 (High-CPU XL) | BBG |
| Dedicated 8-Core Server with 8 GB | $800.00 | $490.00 | $799.00* |
| 1 TB Bandwidth | $0.00 | $125.00 | $0.00 |
| Total | $800.00 | $615.00-$695.00** | $799.00 |
* In the case of BBG, their most current prices aren’t on their pricing page. They should fix that.
** This doesn’t include I/O costs for Amazon EBS. While these are nearly impossible to predict (varying greatly from app to app), Amazon’s own guidance suggests you’d be talking about something north of $40 for this. Given that we’re comparing a “high end machine” here, $80 might be a more accurate estimate, which would put the EC2 total closer to $700.
Various minor approximations were made to try to get this as close to apples-to-apples as possible, but the biggest caveat is that the Heroku instance (Ika) has only about 25% of the CPU of the EC2 and BBG instances, though it has the same amount of memory (Heroku doesn’t configure its DBs with comparable CPU muscle). The next highest Heroku instance (Zilla) is $1600 per month and more comparable to the other two in terms of CPU, but has twice as much memory as they do. Note that EC2 and BBG both offer discounts when committing to a year of service; I couldn’t find a comparable offer from Heroku, which is not to say that it doesn’t exist (readers?). These discounts typically range from 10-25% off the no-commitment price.
Heroku is ridiculously easy to get started with, the runaway winner of the bunch when it comes to hitting the ground running with zero red tape. Per their homepage, all you do is run a couple rake commands and you’re in business. Even cooler, they offer a vast and useful collection of add-ons to make it easy to get started on whatever the specific thing is that your app is supposed to do.
Setting up Rails with EC2 is not quite the same walk in the park, but it’s not necessarily bad. Amazon handles configuring the OS for you, so in terms of getting your app server set up, you are essentially just getting Ruby and Rubygems installed, and letting Bundler take care of the rest. If you managed to set up your development environment in Linux or on a Mac, chances are you won’t have too much trouble using packages to fill in the gaps for the other non-Ruby elements of your application (like Sphinx). Where EC2 gets trickier is when you start figuring out how to integrate EBS (Amazon Elastic Block Store, necessary for data that you don’t want to disappear) and the other 20 Amazon web services that you may or may not want/need to use to run your app. It can ultimately amount to quite a research project to figure out which tools you want, which ones you need, and how to tie them all together. That said, you may end up using these tools (S3 in particular) even if you use BBG or Heroku, so that cost is not entirely unique to EC2.
Ease of getting started on Blue Box is somewhere in between EC2 and Heroku. There is no high-tech set of tools that automatically builds stuff like Heroku’s, but unlike EC2, you have a friendly and qualified team willing to help you get your server set up in the best possible way. In my experience, when they set up new servers, they ask in advance how we plan to use the server, and then automatically handle getting all of the baseline stuff we’ll need installed, such that we can just focus on deploying our app. Which brings me to Round 3…
For pet projects, small sites, or newly started sites, I think that hosting with Heroku is a no-brainer. You can be up and running immediately, you get a huge variety of conveniences with their add-ons, and there is a wealth of Google help behind you should you happen to encounter any trouble, given the immense user base Heroku has managed to establish. All three of these services can scale up available hardware within minutes or hours, not days (yay for clouds!), but Heroku, with its “Dynos” approach, is probably the most straightforward for rapidly growing an application. However, given that it is the most expensive of the choices, and that its application-specific support trails BBG’s, the significance of those advantages will erode as your application grows into the tens of thousands of monthly visitors.
I believe that EC2’s greatest selling point is its price, with its scalability and ubiquity (which translates to generally good Googlability) being close seconds. As detailed above, on balance, EC2 tends to run 20-30% cheaper than the other choices by leveraging its immense scale. Nifty features like auto-scaling hold the promise of making instant growth possible if you get flooded after your Oprah appearance. The trade-off for those advantages is that you will get zero application-specific support, and even getting generic system-wide support can be hit-and-miss, as folks who suffered through today’s EC2 outage learned firsthand. Transparency is not Amazon’s strong suit at this point in their evolution, which can be a real problem if you have real customers who depend on your product and want to know when they can expect to see your website again during an outage. Also, as mentioned in the setup section, finding your way around the Amazon product ecosystem can be daunting at first.
I would consider EC2 the best choice for intermediate-sized businesses, particularly if 100% uptime is not imperative to their existence. EC2 is a great option for bootstrapped startups who want to get online as cheaply as possible, and are willing to put in the extra work setting up their servers in exchange for those cost savings. Also, since you will probably be unclear about what kind of resources your app is going to consume as it scales, EC2 is a great proving ground to get a sense for what kind of resources you might need if you decide to venture beyond Amazon for improved reliability and service. I would also take a long look at EC2 for huge businesses that can afford their own IT department, which diminishes the significance of EC2’s lack of application-specific support or monitoring.
While their prices are competitive with EC2, I would assert that the real differentiator with Blue Box is their focus on service. Included by default for business-sized BBG customers is 24/7 proactive human monitoring of all services, including the ability to bring servers back online if they should happen to crash while you’re not around. Having gone through a fair number of web hosts in our day, we have come to realize that, once you are signed up at a given host, it can be a huge pain to change. Most hosts use this knowledge to their advantage, and after a very romantic honeymoon period, become inattentive to their customers’ needs once it becomes clear the customer would be hard-pressed to move.
At Blue Box, their customer-focused attitude has not diminished a bit over time. We still regularly find them answering our questions…on the weekend…within minutes of the question being sent. Equally important, Jesse Proudman (BBG CEO) has built a team around him that gives the customer the benefit of the doubt. In more than a year of being hosted at BBG, I can’t ever remember them “blaming” us for server changes we’ve made that have caused havoc (not the case at some of our past hosts). Instead, BBG has a solution-focused team that is consistently personable, reasonable, and most importantly, effective when it comes to solving tricky application and server problems.
While BBG offers small VPS instances, as well as cloud servers that can quickly scale, I consider their sweet spot to be businesses that have grown beyond the point of being able to easily maintain their server cluster themselves, but don’t want to hire an on-staff IT guy. Or maybe they do have an IT guy, but they really need two. Over the past couple years, BBG has been our “IT guy,” working to implement systems for us ranging from a Mediawiki server, to load balancers, to Mysql master-master failover clusters. And compared to having a real IT guy on staff, the price is a huge bargain (not to mention the savings on health insurance, taxes, etc.).
Another nice benefit for those who have been stung by EC2/Heroku uptime hiccups: in 20 months with BBG, our total system downtime has been somewhere between 1-2 hours (excluding downtime caused by our own mistakes).
The best host for a particular Rails app depends on a number of factors, including phase of development (pre-launch? newly launched? rapidly growing? already huge?), need for 100% uptime, makeup of team, and cash available. Hopefully this post will be helpful to someone trying to figure out which host makes most sense for their unique situation. Please do post any questions or anecdotes from your hosting experience for future Google visitors.
Blue Box emailed me after this post, with a few extra details that I believe are pertinent:
If anyone from Amazon or Heroku would like to provide extra details of what makes them a strong choice, I’d be only too happy to post those as well.
Contrary to the pages of complex hand-written recursive methods I found on StackOverflow when Googling this, it is actually as simple as
noko = Nokogiri::XML(File.read("my_noko_file.xml"))
parent_node = noko.root.xpath("//MyNodeName")
children_named_floyd = parent_node.xpath(".//Floyd")
If you want to search on more complex criteria, you can also add in extra sauce to your xpath.
noko = Nokogiri::XML(File.read("my_noko_file.xml"))
# Searches your entire XML tree for an XML node type "MyNodeName" that has an attribute "id" set to a value of '1234'
# Then grabs the XML node of type "Something" from within the found NodeSet
parent_node = noko.root.xpath("//MyNodeName[@id='1234']").at("Something")
# Grab all children of the "Something" node that are of type "Floyd"
children_named_floyd = parent_node.xpath(".//Floyd")
Nokogiri is a great gem. But I often wish its docs had more examples and fewer byzantine explanations for common operations like these. In the meantime, let’s hope Google will continue to fill in the gaps.
In the pre-Rails 3 ecosystem, there were a number of confusingly similar choices for getting master/slave database functionality established. These options included Masochism, DB Charmer, master_slave_adapter, and seamless_database_pool, amongst others. When it came time for Bonanza to choose a slave plugin, I made my best effort to assess the velocity and functionality of each of the prominent slave database solutions, and wrote what went on to become a fairly popular post comparing the relative strengths of each choice.
Fast forward to Rails 3, and the field has narrowed considerably. Almost all of the top Google results for Rails slave database options these days point to Octopus, and with good reason. Its documentation is sound, and its github project has maintained good velocity for the better part of the past year. Reading between the lines of the Octopus documentation, it would seem that it was built first and foremost as a tool to make it stupidly easy to shard databases; secondarily, it also supports using slave databases in a non-sharded setup, but the implementation there gets a little sketchier, as the examples show users needing to explicitly declare a given slave database for a particular query. In the documentation, this is done at query time, e.g.,
User.where(:name => "Thiago").limit(3).using(:slave_one)
or
Octopus.using(:slave_two) do
  User.create(:name => "Mike")
end
Upon learning about Octopus, my natural inclination was to compare it to our current solution, seamless_database_pool. Admittedly, when we got to the Rails 3 party, SDP was running a bit behind. The author had been kind enough to do much of the legwork to get it compliant with AR3, but we still encountered errors when actually trying to use the plugin within controllers and views the way we had with the previous version.
So I fixed it.
What Seamless Database Pool now represents is a slave database plugin built specifically to make it as easy as possible to A) connect to one or more weighted slave databases, B) declare whether a particular Rails action should attempt to use slaves, master, or both (automatically defaulting to the master when write operations occur), and C) gracefully handle failover if one or more of the declared slave databases becomes unavailable for whatever reason.
SDP does not have any built-in support for sharding, so if that is what your DB needs, Octopus is your best bet. But if what you need is specifically a Rails 3 supported solution that will allow you to mix and match your main database and N slaves, in a weighted way and with failover automatically baked in, this is where seamless_database_pool really shines.
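To make the weighted-with-failover idea concrete, here is a deliberately tiny sketch of the concept (my own illustration, not SDP’s actual internals; the class and names are invented):

```ruby
# Toy weighted read pool: picks a connection with probability proportional
# to its weight, and retries against the master if the chosen slave raises.
class ToyReadPool
  def initialize(weighted_slaves, master)
    @weighted_slaves = weighted_slaves  # e.g. { "slave_a" => 2, "slave_b" => 1 }
    @master = master
  end

  # Weighted random selection: a slave with weight 2 is picked twice as
  # often as a slave with weight 1.
  def pick
    total = @weighted_slaves.values.inject(0) { |sum, w| sum + w }
    roll = rand(total)
    @weighted_slaves.each do |conn, weight|
      return conn if (roll -= weight) < 0
    end
  end

  # Run a query block against a slave; if the slave has died, fail over
  # to the master instead of erroring out.
  def with_connection
    yield pick
  rescue IOError
    yield @master
  end
end
```

SDP layers the real versions of these ideas onto ActiveRecord’s connection handling, so your models and controllers never reference the pool directly.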
Bonanza has been using SDP in production for more than a year now, and in that time we have experienced failures of our slave database every few months, which at one point would have brought down the entire site. Now, within seconds, Rails figures out that it needs to re-route requests and finds a database it can use that is still available. The still-good SDP documentation describes how to make it happen.
Prior to writing this blog, if you Googled “Rails master/slave database” you would probably come away thinking there was only one solution, and that solution was only secondarily focused on allowing N slaves to be configured. I may be wrong about the level of support Octopus already has for setting up multiple weighted failover slaves (and for declaring usage of these on a per-action rather than per-query basis), but the documentation makes me think this is at best a future roadmap feature. In the meantime, if it’s specifically slave database support you need, try the drag-and-droppable SDP gem. I will continue linking my fork of the project until the original author decides what he wants to do with my pull request (which fixes fundamental issues with Rails 3 controller integration, plus adds more robust slave failover).
Installation is as easy as possible. In your bundler Gemfile:
gem "seamless_database_pool", :git => "git://github.com/wbharding/seamless_database_pool.git"
Your database.yml file will then look something like:
production:
  adapter: seamless_database_pool
  port: 3306
  username: app_user
  password: app_pass
  pool_adapter: mysql
  master:
    host: 1.2.3.4
    pool_weight: 0 # 0 means we only use master for writes if the controller action has been setup to use slaves
  read_pool:
    - host: 2.3.4.5
      username: slave_login
      password: slave_pass
Do drop a line in the comments with any questions or feedback if you have experience with either SDP or Octopus as solutions for Rails slave database support!
So you’ve upgraded from Rails 2.x to Rails 3 and you’re not happy with the response times you’re seeing? Or you are considering the upgrade, but don’t want to close your eyes and step into the great unknown without having some idea what to expect from a performance standpoint? Well, here’s what.
Prior to upgrading, this action (one of the most commonly called in our application) averaged 225-250 ms.

On our first day after upgrading, we found the same unmodified action averaging 480ms. Not cool.
Thus began the investigative project. Since New Relic was not working with Rails 3 views this week (seriously. that’s a whole different story), I (sigh) headed into the production logs, which I was happy to discover actually broke out execution times by partial. But there was an annoyingly inconsistent blip every time we called this action, where one of the partials (which varied from action to action) would have something like 250-300 ms allocated to it.
I casually mentioned this annoyance to Mr. Awesome (aka Jordan), who speculated that it could have something to do with garbage collection. I’d heard of Ruby GC issues from time to time in the past, but never paid them much mind since I assumed that, since we were already using Ruby Enterprise Edition, the defaults would likely be fine enough. But given my lack of other options, I decided to start the investigation. That’s when I discovered this document from those otherworldly documenters at Phusion, describing the memory settings that “Twitter uses” (quoted because it is probably years old) to run their app. Running low on alternatives, I gave it a shot. Here were our results:
New average time: 280ms. Now that is change we can believe in! Compared with the default REE, we were running more than 40% faster, practically back to our 2.2 levels. (Bored of reading this post and want to go implement these changes immediately yourself? This is a great blog describing how. Thanks, random internet dude with broken commenting system!)
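For reference, the settings in that Phusion document were, as best I can recall, along these lines (double-check against the REE documentation rather than trusting my memory); they go into the environment of whatever user runs your app servers:

```shell
# Ruby Enterprise Edition GC tuning in the spirit of Phusion's "Twitter" example
export RUBY_HEAP_MIN_SLOTS=500000
export RUBY_HEAP_SLOTS_INCREMENT=250000
export RUBY_HEAP_SLOTS_GROWTH_FACTOR=1
export RUBY_GC_MALLOC_LIMIT=50000000
```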
That was good. But it inspired Jordan to start tracking our garbage collection time from directly within New Relic (I don’t have a link to help ya there, but Google it — I assure you it’s possible, and awesome), and we discovered that even with these changes, we were still spending a good 25-30% of our time collecting garbage in our app (an improvement over the 50-60% from when we initially launched, but still). I wondered if we could get rid of GC altogether by pushing our garbage collection to happen between requests rather than during them?
Because every director will tell you that audiences love a good cliffhanger, I’ll leave that question for the reader to consider. Hint: after exploring the possibility, our action is now faster in Rails 3 than it had been in Rails 2. It’s all about the garbage, baby.
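For the curious, the general shape of “GC between requests” can be sketched as a Rack middleware (a hypothetical illustration, not our production code; at the time, Passenger and Unicorn each had their own preferred hooks for this kind of thing):

```ruby
# Toy middleware: keep GC disabled while a request is being served, and
# only collect after every Nth response has been handed back.
class DeferredGC
  def initialize(app, gc_frequency = 10)
    @app = app
    @gc_frequency = gc_frequency
    @requests = 0
  end

  def call(env)
    GC.disable
    @app.call(env)
  ensure
    @requests += 1
    if @requests >= @gc_frequency
      GC.enable
      GC.start
      @requests = 0
    end
  end
end
```

The tunable part is gc_frequency: collect too rarely and the process balloons; collect too often and you are back to paying the GC tax during peak traffic.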
We ran into a problem today where a single row in one of our tables seemed to be stuck in a state where any query that tried to update it would hit our lock wait timeout of 50 seconds. I Googled and Googled for a straightforward way to release this lock, but the closest things I could find were a big-assed Mysql page on table locks that lacked any specific solutions, and this Stack Overflow post that suggests fixing a similar problem by dropping the entire table and re-importing it (uh, no thanks).
After some trial and error, I came up with two viable ways to track down and fix this problem.
The first way is to actually look at Mysql’s innodb status by logging into your Mysql server and running
show innodb status\G
This will list any locks Mysql knows about and what it’s trying to do about them. In our case, the locked row did not show up in the innodb status output, so instead I executed
show processlist;
This listed everything that currently had a connection open to Mysql, and how long its connection had been open. In Rails it is a bit hard to spot which connection might be the one to blame, since every Rails instance leaves its connection open, whether it is waiting for a transaction to complete or doing nothing. In today’s case, I happened to have a good hunch about which of the 50 connections might be the problem one (even though it was listed as being in the “sleep” state…strangely), so I killed it by restarting the server, and all was well. However, I could have also killed it using:
kill [process id];
If you don’t happen to know which of your processes holds the lock, the only recourse I know of would be to restart your servers and see which Mysql processes remain open after the servers have reset their connections. If a process stays connected when its parent has left, then it is your enemy, and it must be put down. Hope this methodology helps someone and/or my future self.