Having a large codebase means that we don’t upgrade our version of Rails very often (we’re averaging once every two years, with about 1-2 weeks of dev time per upgrade). Every time we do upgrade, though, one of the first things that I’m curious to inspect is the performance delta between versions.
For our previous upgrade, I documented our average action becoming about 2x slower when we moved from Rails 2.3 to Rails 3.0, with an action that had averaged 225ms climbing to 480ms. Luckily, in that episode we were able to pull out some tricks (GC tuning) such that we eventually got the same action down to 280ms. Still around 25% slower than Rails 2.3, even implementing fancy new tricks, but we could live with it.
When we finally decided we had to move from Rails 3.0 to 3.2 to remain compatible with newer gems, I was understandably anxious about what the performance drop was going to be based on our past experience. With the numbers now in hand, it looks like that apprehension was warranted. Here is the same action I profiled last time (our most common action – the one that displays an item), on Rails 3.0 before upgrade:
And here it is now:
The problem with 3.2 is that, unlike last time, we don’t have any more tricks to pull out of our hat. We’ve already upgraded to the latest and greatest Ruby 2.0. We’ve already disabled GC during requests (thanks Passenger!). When we made these upgrades, they sped up our Rails 3.0 app around 25%. That performance improvement has now been overshadowed by the 40% slower controller and view rendering we endure in Rails 3.2, making us slower than we were in 3.0 before our Ruby optimizations.
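For readers unfamiliar with the trick: "disabling GC during requests" means not letting the collector pause a request mid-flight, and paying the collection cost between requests instead. A minimal Rack-middleware sketch of the idea (purely illustrative; Passenger's real out-of-band GC triggers the collection after the response has been flushed, outside the request cycle):

```ruby
# Illustrative Rack middleware: keep GC from pausing the request,
# then collect between requests. This is a sketch of the concept,
# not Passenger's actual implementation, which runs the collection
# after the response has already been sent to the client.
class OutOfBandGC
  def initialize(app)
    @app = app
  end

  def call(env)
    GC.disable          # no collections while serving the request
    @app.call(env)
  ensure
    GC.enable           # re-enable and pay the GC cost between requests
    GC.start
  end
end
```

In a plain Rack app this would be mounted with `use OutOfBandGC` in config.ru.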
Suffice it to say, if you have a big app on Rails, you have probably learned at this point to fear new versions of Rails. I fully empathize with those who are forking over bucks for Rails LTS. If we didn’t need compatibility with new gems, staying on 2.3 would have left us about 100% faster than Rails 3.0, which in turn is about 40% faster than Rails 3.2.
New Rails trumpets improvements like “ability to build single-page web apps” and “tighter security defaults” and “streamlining, simplifying” the constituent libraries. The closest we’ve seen to a performance improvement lately was that 3.2 made loading in development faster (1). This was certainly a fabulous improvement (took our average dev page load from 5+ seconds to 1-2), albeit one we already had in Rails 3.0 thanks to active_reload.
My sense is that performance has become the least of the concerns driving Rails development these days, which, if true, is a shame. If Rails put equal time into analyzing/improving performance as it does to “streamlining, simplifying,” it’s hard to believe that we would keep swallowing 40%-100% performance setbacks with each release. Maybe a partnership with New Relic could help the Rails team to see the real world impact of their decisions on the actual apps being built with their platform? If others’ experience is similar to ours, that would be a lot of pain felt by a lot of people.
I admit I’m a bit reluctant to make this post, because Rails has given so much to us as a platform, and our business is too small at this point to be directly involved in improving performance within Rails. We will, however, continue to post any salient optimizations that we discover to this blog and elsewhere.
My primary concern though, and the reason I am posting this, is that if Rails keeps slowing down at the rate it has, it makes me wonder if there will be a “point of no return” in the 4.x or 5.x series where it simply becomes too slow for us to be able to upgrade anymore. Each new release we’ve followed has been another step toward that possibility, even as we buy ever-faster servers and implement ever-more elaborate optimizations to our runtime.
Has anyone else out there upgraded a medium-to-large webapp from Rails 2 -> 3 -> 4? I’d be very curious to hear about your experience. The lack of results when Googling for “Rails performance” has always left me wanting more details on other developers’ upgrade experiences.
(1) New caching models may improve performance as well in some scenarios, as could the dynamic streaming when used with compatible web servers. For the purposes of this post I’m focusing on “performance” as it pertains to dynamic web apps that run on a server, which means stuff like interpreting requests, interacting with the database, and rendering responses.
16 Replies to “Rails 3.2 Performance: Another Step Slower”
What’s strange is I just took a 3.0 app and upgraded it to 3.2 and saw a significant performance /increase/, whereas you’ve seen the opposite.
With the information provided, it would be difficult to pinpoint anything in particular as the source of the performance regression. However, even though you mentioned you can’t dedicate the engineering resources to find the source of (and fix) the problems, it would still be super helpful if you could contribute detailed metrics at https://github.com/rails/rails/issues.
There is an offer of help and some discussion here:
I’ve seen some very good performance improvements with the preview releases of Ruby 2.1.0. You may want to check that out.
The Rails devs certainly are concerned about performance, but in some ways they are hamstrung by the limitations of Ruby itself. As evidence, see this recent pull request: https://github.com/rails/rails/pull/12879. Keep in mind that Ruby was not designed as a high performance language from the outset, rather it was intended to make developers happy. Rails has always made the compromise that CPU time is cheaper than developer time and they’ve made it fast to develop software at the cost of CPU cycles.
Amen! Same here, we have been hunting down GC problems and fixing all kinds of stuff to get our performance back (going from 2.3 straight to 3.2).
For us, 3.0 was slower than 3.2 (so slow that we could not even run the app on Rails 3.0).
It’s very sad that a framework that is awesome to develop on keeps getting slower. I think using JRuby or Rubinius could make it faster.
I appreciate those who are willing to pitch in a hand and help. FWIW, we’ll be making our own effort to optimize and figure out how we might improve our situation. The challenge that I foresee in this is that the cause of these slowdowns is often so diffuse that one can’t attribute it to any particular culprit. That is, when I have used performance tracking tools in the past, I usually end up with something like 100k string allocations, 1500 request handlers instantiated, 3000 ActiveRecord objects, etc. There’s usually no smoking gun in cases like these, just death by 1000 paper cuts.
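The kind of allocation census described here can be approximated with nothing but the standard library. A hypothetical sketch using `ObjectSpace.count_objects` to count string allocations across a unit of work (the loop is a stand-in for one request’s work):

```ruby
# Hypothetical allocation census: how many strings does a unit of
# work allocate? GC is disabled during the measurement so collected
# objects don't hide the allocation count.
GC.start
GC.disable
before = ObjectSpace.count_objects[:T_STRING]
1_000.times { |i| "item-#{i}" }   # stand-in for one request's work
allocated = ObjectSpace.count_objects[:T_STRING] - before
GC.enable
puts "strings allocated: #{allocated}"
```

Numbers like these are exactly the “no smoking gun” shape described above: thousands of small allocations spread evenly across the framework, with no single hot spot to fix.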
Which means we either have to dig in for a long haul of performance debugging (which we scarcely have time for) or just put up with it and look for hackarounds like GC tuning. Usually the latter ends up being the most pragmatic approach for our situation.
It’s 2013. Please leave Rails and PHP behind and move to something based on node or python. We moved our legacy apps from Rails to Flask and aren’t looking back. We should have done this a long time ago.
Stockholm Syndrome and sunk cost fallacies are brutal psychologically, but at some point you need to cut your losses and abandon the broken technologies of yesterday.
I have my first blog comment troller! America, I have arrived.
Why not switch to Sinatra instead of Flask? You’re comparing incomparable things.
Platforms are constantly changing; ideally, the architecture of your systems should allow porting to other platforms without too much headache.
I wonder whether the cost incurred by version upgrades is greater than or comparable to the cost of migrating to a different platform (assuming you are still using the same DB back-end)?
Perhaps it’s time to look at Scala and the JVM platform?
> Please leave Rails and PHP behind and move to something based on node or python.
No! Skip node, and python, they are so yesterday, and move directly to lua+nginx. And consider writing some parts of the code in go. Also skip js, and go with dart, 🙂
The point is: everything sucks, but in different (sometimes same) ways.
Those are some really interesting results, although I always take metrics with a grain of salt. I would check with New Relic (those look like New Relic graphs) and see if they have seen a similar spike across all RoR apps that they monitor. I am not a RoR developer, but could you look at running your app on the JVM to get a performance boost there?
Anyways, I know what it feels like to mistrust the technology you are using and it puts you in a really tough spot. Good luck.
A lot can be done for performance with Ruby/Rails. In our 3.2 application we have a whole set of monkey-patches dedicated to backporting fixes from Rails 4 and excising unused things that slow down requests or do unnecessary allocations.
The i18n gem is not multi-process-friendly (e.g. under Passenger), because “string”.hash returns a different value in each process (hashes are salted to prevent DoS attacks). That in turn prevents proper memcached caching of the lookups, so we had to patch the i18n gem.
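A minimal sketch of the workaround this describes, assuming the cache key derivation is the problem: replace the per-process-salted `String#hash` with a stable digest, so every worker computes the same memcached key for the same lookup (`stable_cache_key` is a hypothetical helper for illustration, not the i18n gem’s actual API):

```ruby
require "digest/md5"

# Ruby's String#hash is salted per process (DoS protection), so two
# Passenger workers derive different memcached keys for the same
# translation and never share cache hits. A stable digest gives every
# process the identical key for identical inputs.
# `stable_cache_key` is a hypothetical helper, not i18n's real API.
def stable_cache_key(locale, key, options = {})
  Digest::MD5.hexdigest([locale, key, options.sort].inspect)
end
```

Unlike `#hash`, the digest is deterministic across processes and restarts, which is what shared negative/positive lookup caching in memcached requires.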
Ah yes, we also patched the memcached store because the local cache strategy was not working with nil values, which is important for negative lookups.
We also compile our custom Ruby with -march=native and -O3, which yields an extra few percent of performance.
Pile on the usual GC environment variable tuning…
All in all we got down from 400ms to 50ms requests on some pages. Performance is possible, but it takes a lot of gruntwork. And don’t expect to reach Google’s <10ms response times.
+1 for JRuby. The much more mature GC combined with the better multi-tasking of the JVM can help heal the 1000 papercuts you’re facing.
Hey, New Relic employee here. From my understanding, we actually did have a partnership with the Rails core team. We allow our customers to specify whether or not they’d like to anonymously submit their Rails performance data to Rails Core, but it’s also my understanding that they don’t really bother using this data anymore. They should be able to confirm or deny this.
Just ignore trolls.
I would expect response times to stay constant, or drop a bit, with new releases. Maybe even to grow a bit, but not as much as you have experienced. What I see with PHP/Python, for instance, is that they keep getting not only better but faster with each release.
I don’t know how big your app is, but I wonder if it’s bigger than Basecamp and other very big rails apps, and also I wonder what these guys do to keep a good speed.
Then, you might want to trade some experience with Discourse developers, even though they are probably using much newer version. They are even using some ruby patched versions to speed up, as you can see here: http://meta.discourse.org/t/tuning-ruby-and-rails-for-discourse/4126
However, for real-time applications, migrating to a faster/compiled language may be the way to go. Probably not your case, but in some edge cases that’s the only alternative. For Twitter, it made their fail whale disappear.