Linux touchpad mini-update

A few tiny notes from the land of Linux touchpad improvements:

  1. If you’d like to ensure you’re notified when I have substantive updates on the touchpad driver, I’ve set up a subscription list here: https://tinyletter.com/inedibill. It will be used only for sending updates about the touchpad driver, so expect no more than one email every couple-few months.
  2. Matt Mayfield has been doing hero’s work on ensuring that the Linux touchpad driver can successfully discard thumb input. His commits to this end are in this branch. Using the libinput debug tool (example below), it’s apparent that Matt’s branch is excellent at allowing a second finger to rest on the touchpad while still taking the active finger’s input as cursor movement, whereas the latest libinput release tends to interpret any cursor movement with an extra finger down as scrolling. We’re working to make an installable version of Matt’s changes available for download soon.
  3. I’ve begun the process of attempting to enumerate the acceleration differences between Linux and Mac here. If anyone else wants to take a stab at concisely describing how acceleration differs on macOS vs Linux touchpad, feel free to drop me a line at bill -at- staticobject.com and I’ll aim to incorporate your findings.
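If you’d like to watch what the driver sees on your own hardware, the libinput debug tool mentioned above ships with libinput itself. A minimal invocation (on older releases the equivalent standalone binary is libinput-debug-events):

# Prints each event (POINTER_MOTION, GESTURE_*, etc.) as the touchpad reports it;
# root is needed to read from /dev/input
sudo libinput debug-events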

Hoping to have a more robust update on this in the next couple months, or whenever an installable driver becomes available (if sooner).

What happens if two Google Adwords auto targets have the same bid?

The title of this post came up recently as we continue to optimize our Google Shopping Adwords bidding tool, and I wanted to share my learnings with future web searchers. Our situation is that we use Google Shopping PLAs to drive traffic to Bonanza, and our bid amounts are specified via Adwords labels that we apply to each item. Most items have multiple Adwords labels, where each Adwords label corresponds to an Adwords auto target. My question to Google was: when one product has multiple Adwords labels, and those labels correspond to auto targets that have the same bid, which auto target gets credit for the impression (and subsequent click/conversion)?

The answer, straight from Google, makes a lot of sense:

So the one that enters the auction will be attributed with the clicks and impressions and this is dependent on the performance history associated with that auto target. The one with the stronger performance history – clicks and CTR attributed to it, will enter the auction and hence get the impressions.

Additionally if they are from different ad groups – the past ad group performance history and ad group level CTR would also matter.

Thus the answer: whichever auto target performs best has the best chance of being shown in Google’s Adwords “auction” (the name they give to the process of choosing which AdWords ads or Google Shopping products to show).

Rails 3 Performance: Abysmal to Good to Great

So you’ve upgraded from Rails 2.x to Rails 3 and you’re not happy with the response times you’re seeing? Or you’re considering the upgrade, but don’t want to close your eyes and step into the great unknown without some idea of what to expect from a performance standpoint? Well, here’s what we found.

Performance in Rails 2.2

Prior to upgrading, this action (one of the most commonly called in our application) averaged 225-250 ms.

Rails 3.0.4 Day 1

On our first day after upgrading, we found the same unmodified action averaging 480 ms. Not cool.

Thus began the investigative project. Since New Relic was not working with Rails 3 views that week (seriously; that’s a whole different story), I (sigh) headed into the production logs, which I was happy to discover actually broke out execution times by partial. But there was an annoyingly inconsistent blip every time we called this action: one of the partials (which varied from call to call) would have something like 250-300 ms allocated to it.

I casually mentioned this annoyance to Mr. Awesome (aka Jordan), who speculated that it could have something to do with garbage collection. I’d heard of Ruby GC issues from time to time in the past, but never paid them much mind: I assumed that since we were already using Ruby Enterprise Edition, the defaults would be fine enough. But given my lack of other options, I decided to start investigating. That’s when I discovered this document from those otherworldly documenters at Phusion, describing the memory settings that “Twitter uses” (quoted because it is probably years old) to run their app. Running low on alternatives, I gave it a shot. Here were our results:

Rails 3.0.4 response times with the “Twitter” memory settings

New average time: 280ms. Now that is change we can believe in! Compared with the default REE, we were running more than 40% faster, practically back to our 2.2 levels. (Bored of reading this post and want to go implement these changes immediately yourself? This is a great blog describing how. Thanks, random internet dude with broken commenting system!)
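For the curious, the knobs in question are just environment variables that REE reads at startup. These are the “Twitter” values from Phusion’s document; treat them as a starting point to benchmark against rather than gospel, since the right numbers depend on your app’s allocation patterns:

# Export these in the environment your app server starts from
export RUBY_HEAP_MIN_SLOTS=500000      # start with a large heap instead of growing into one
export RUBY_GC_MALLOC_LIMIT=50000000   # allow far more allocations between GC runs
export RUBY_HEAP_FREE_MIN=4096         # minimum free slots that must remain after a GC run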

That was good. But it inspired Jordan to start tracking our garbage collection time directly within New Relic (I don’t have a link to help ya there, but Google it — I assure you it’s possible, and awesome), and we discovered that even with these changes, we were still spending a good 25-30% of our app’s time collecting garbage (an improvement over the 50-60% from when we initially launched, but still). I wondered: could we get rid of mid-request GC altogether by pushing our garbage collection to happen between requests rather than during them?

Because every director will tell you that audiences love a good cliffhanger, I’ll leave that question for the reader to consider. Hint: after exploring the possibility, our action is now faster in Rails 3 than it had been in Rails 2. It’s all about the garbage, baby.

Sphinx 0.9.9 Review, A Cautionary Tale

After my previous raves about Sphinx in general and Thinking Sphinx in particular, I was excited to get my hands on the new Sphinx 0.9.9 release that was finally made available at the beginning of December via the Sphinx Search site.

Given that our Sphinx usage falls under what I think would be the “advanced cases” heading, I expected a day or two of upgrade headaches before we’d be back on track. Worth it, said I, for the potential to get a working index merge, which could set the stage for indexing more often than once every four hours (our current index takes about 3 hours to build, plus time to transfer files between the Sphinx build machine and the search daemon machines).
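For the unfamiliar, index merge lets you build a small “delta” index of just the recently changed records and fold it into the main index, rather than rebuilding everything. A sketch of that workflow with Sphinx’s indexer, assuming your sphinx.conf names the indexes main and delta:

# Rebuild only the small delta index, then fold it into main;
# --rotate tells the running searchd to swap in the new files
indexer --config /path/to/sphinx.conf delta --rotate
indexer --config /path/to/sphinx.conf --merge main delta --rotate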

Alas, our upgrade did not go according to plan.

This Monkey Patches Going to Heaven

Given how prompt Pat Allen (creator of Thinking Sphinx) has been in addressing and fixing bugs in the past, I don’t doubt that many of our upgrade headaches on the TS side will be fixed soon (if not already, since I emailed him most of our issues). That said, we required about five monkey patches to get the most recent version of TS working with 0.9.9 the same way our previous TS worked with 0.9.8. The patches ranged from patching the total_entries method (which throws an exception if the search can’t be completed), to real-time updates not working (via client#update), to exceptions when a search passed a string to TS where it expected an int.

This does not include “expected” differences, such as the fact that search is now lazily evaluated, so if you previously wrapped your search statements in a begin-rescue block to catch possible errors, your paradigm needs to shift.

It also appears that the after_commit plugin bundled with TS 0.9.8 has been modified such that it is not available to models in our project by default. I never figured out a fix for that bug, because by the time I noticed it, I had become aware of an even bigger 0.9.9 detriment: overall performance. Reviewing our New Relic stats since we updated to 0.9.9, we found an across-the-board slowdown of about 50% in our Sphinx calls. I parsed the Sphinx logs to try to ascertain whether the slowness was coming from Sphinx or TS, and Sphinx appears to be the main culprit.

Performance

TS 0.9.8

Considering 290227 searches.
Average time is 0.0318751770166445.
Median time is 0.005.
Top 10% average is 0.193613017710702 across 29022 queries

TS 0.9.9

Considering 843569 searches.
Average time is 0.0430074540435297.
Median time is 0.006.
Top 10% average is 0.286621461425376 across 84356 queries

Many of our queries take 0.00 or 0.01, so the median doesn’t look too much different between the two, but the average time (which is what New Relic picks up on) is 35% slower in Sphinx alone, and about 50% slower once all is said and done. An action on our site that does a Sphinx search for similar items (and nothing else) consistently averaged 200 ms for weeks before our upgrade, and has averaged almost exactly 300 ms for the week since the upgrade.
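If you want to run the same analysis on your own install, these numbers can be pulled straight out of searchd’s query.log. A rough awk sketch, assuming the stock 0.9.x log format where the elapsed seconds immediately follow the bracketed timestamp:

# Lines look like: [Mon Dec  7 01:23:45.678 2009] 0.004 sec [ext/2/rel 1234 (0,20)] [items] some query
awk -F'] ' '{ split($2, t, " "); total += t[1]; n++ }
  END { printf "Considering %d searches. Average time is %f.\n", n, total/n }' query.log

The median and top-10% stats take a sort first, but the idea is the same.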

Conclusion: Proceed with Caution

It would be nice if I had more time to debug where this slowness comes from, but the bottom line for us is that, after spending about 3 days patching TS to get it working in the first place, and with the “after_commit” anomaly still on our plate (not to mention overall memory usage increasing by about 20%), I have ultimately decided to return to TS 0.9.8 until a release of Sphinx is made available that speaks directly to its performance relative to previous versions. I think the Sphinx team is doing a great job, but in juggling the numerous new features, it seems that performance testing against 0.9.8 didn’t make the final cut?

Or there could always be some terrible misconfiguration on our part. But given that we changed our configuration as little as possible in moving from 0.9.8->0.9.9, if we are screwing up, I would say it is for perfectly understandable reasons.

A three day window of a pure search action in our app. First two days with TS 0.9.9 average 300 ms; yesterday, after reverting back to 0.9.8, about 200 ms.

Javascript play sound (wav, MP3, etc) in one line

Alright, enough with the 10-page tutorials from 2006 describing detailed browser-specific implementation tricks to get sound playing. Google needs a refresh. In my experience in the 2010s, here’s all you need to do to make a simple sound play in your web page:

Step one: add an element to your page

<span id="sound_element"></span> <!-- any empty element works; we'll inject the sound tag into it -->

Step two: play a sound via that element

// Inject an embed tag (swap in your own file path); the browser starts playing it immediately
document.getElementById("sound_element").innerHTML =
"<embed src='/path/to/sound.wav' hidden='true' autostart='true' loop='false'/>";

Or, if you’re using jQuery:

$('#sound_element').html(
"<embed src='/path/to/sound.wav' hidden='true' autostart='true' loop='false'/>");

I’ve tried it in Firefox, Chrome, and IE, and it works like a charm for me. I’d imagine that your user has to have some basic sound software installed on their computer, but at this point, I’d reckon 99% of users do.

Feel free to add to the comments if you find any embellishments necessary to get this working.

Can’t open file in Rubymine?

Just wasted an hour fighting a bug in Rubymine 1.1.1 that wouldn’t let me open files. Every once in a while, it just decides that it doesn’t want to open files of a certain type (in my case, all .html.erb and .rhtml files), regardless of how you try to open them (clicking in the file browser, choosing File -> Open, etc.).

And now, ladies and gentlemen, without further ado, the fix:

File -> Invalidate caches
Close and re-open Rubymine

Sigh.

Ubuntu/Nautilus Script to Upload to Application Servers

Here’s a fun one. If you’re running Nautilus (which you probably are, if you’re running vanilla Ubuntu), this will allow you to copy one or more assets to one or more app servers with a right click.

First, create the script:

vi ~/.gnome2/nautilus-scripts/"Upload to app servers"

Then paste:


#!/bin/bash
# Nautilus passes the selected file names as arguments, and exposes the
# directory you're viewing as a file:// URI in this environment variable.
file_count=0
dir="`echo $NAUTILUS_SCRIPT_CURRENT_URI | sed 's/file:\/\///'`"
for arg
do
  full_file_or_dir="$dir/$arg"
  # Strip the local root so the file lands at the same relative path remotely
  relative_file_or_dir="`echo "$full_file_or_dir" | sed 's/\/path\/to\/local\/root\/directory\///'`"
  rsync "$full_file_or_dir" "loginuser@appserver.com:/path/to/remote/root/directory/$relative_file_or_dir"
  # Add other app servers here if you want...
  file_count=$((file_count+1))
done
zenity --info --text "Uploaded $file_count files to app servers"
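One gotcha: Nautilus only lists scripts that are marked executable, so after saving, be sure to:

chmod +x ~/.gnome2/nautilus-scripts/"Upload to app servers"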

Once the script is saved, right-clicking a file in Nautilus will reveal a “Scripts” menu option which, when expanded, includes an option called “Upload to app servers”.

Upon clicking that script, the relative path of the file in your /path/to/local/root/directory will be copied to the corresponding location in /path/to/remote/root/directory on your appserver.com server. You can select multiple files and upload all at once. After uploading, a zenity message will pop up telling you how many files were successfully copied (requires the zenity package, apt-get install zenity).

So for example, if you right clicked the file /path/to/local/root/directory/myfile.txt, and picked “Upload to app servers,” the file would automagically be copied to appserver.com as /path/to/remote/root/directory/myfile.txt.

This script has proved very helpful for uploading post-deploy fixes to multiple app servers without redeploying the whole lot of them.

Bonanzle: A Year One Retrospective, Part I

I just realized that Bonanzle’s one year anniversary (since beta launch) passed by a month ago without me taking a minute to chronicle the madness of it all. But I’ve never been much of one for hazy-eyed generalities (e.g., “starting a business is hard work,” “business plans are useless,” etc.), which is most likely what this blog would become if written in free form.

I much prefer hard facts and examples, the sort of thing that gets drawn out through the efforts of a skilled interviewer. Unfortunately, since Brian Williams isn’t returning my calls, we’re going to have to settle for a purely fictional interviewer, who I will name Patsy Brown, on account of the fact that she’s a brownnosing patsy who tosses me softballs to make me sound as smart as possible. Thanks a bunch, Patsy. Mind if I call you Pat? Oh, you’d love it if I called you that? Well, great! Let’s get started.

Pat: So, Bill, it has been an incredible year one for Bonanzle. It’s really something how, with no investment money, this creation has garnered more traction than sites with far more money and far more experience. Even more improbably, it’s been done with a team of two full timers and an ever-changing legion of contractors when comparable sites field teams of 10+ employees. How has this first year compared to your expectations?

Bill: Well thank you, Pat. That’s really nice of you to say. It has been pretty wild, hasn’t it? To be honest, there were very few moments in the early evolution of Bonanzle where I thought I knew what would happen next in terms of traffic, sales, or revenues (early interview tangent: I firmly believe the founders of Twitter had no idea what they were building at first either. I still remember in 2007 when they were advertising it as a means to let your friends know what you were doing on a minute-by-minute basis, which was a pretty dumb premise for a business, if you asked me or the other 3 people that were using it at the time. I am already dreading the inevitable declarations of genius that revisionist historians will bestow upon those founders in the months to come. Anyway, we now return to your regularly scheduled interview. And yes, I have no business interrupting this interview before the first paragraph has been finished). I mean, we spent more than a year building Bonanzle as a site that would compete with Craigslist, and it wasn’t until a couple months after we launched (in beta) that we realized that there was simply no market for a better Craigslist.

Once we figured that out and re-geared ourselves as a utopia for small sellers, the first few months were pretty unreal — growing 10-15% larger with every passing week. That was incredibly tough to manage, because at the time we were increasing the load on our servers by a factor of about 2x-3x monthly, and I was still only learning how to program web sites. If I had set expectations, these early months would have certainly blown them out of the water.

Pat: Can you give us a story to get a sense of just how hectic those early months were?

Bill: Sure, Pat. One memorable Bonanzle moment for me was a week back in October, when I was housesitting for some friends. This was at a time when our traffic was starting to push into the hundreds of thousands of unique users, and our servers were in what I think could best be termed “perpetual meltdown mode.” I remember one particular night where I was up until 4 AM, fiendishly working on some improvements to our search so that it wouldn’t crash. The Olympics were on TV in the room, and I felt like I had an intimate bond with the athletes — I mean, it takes a certain type of insanity to work out for thousands of hours to become the best athlete in the world, and it takes a similar type of insanity to lock oneself up in a room for 12-14 hours per day and try to scale a site up to hundreds of thousands of visitors with no prior experience. Generally, a team of many experienced programmers is going to be required for that amount of traffic, but, being on the brink of going broke, that wasn’t an option. So I pried my eyelids open until I finished the search upgrades, and wearily made my way back to bed to get up early and repeat.

It turned out that “getting up early” in this case was about three hours after I went to sleep, when I received a then-common automated phone call from our server monitoring center that Bonanzle was down. I dragged myself out of bed, slogged down to the hot computer room, and spent another couple hours figuring out what had gone wrong. When it was fixed, I turned the Olympics back on and basked in our shared absurdity at 8 AM that morning.

Pat: What were some of the key lessons you learned during those months?

Bill: Well, other than technical lessons, the most salient lesson was that, when you find a way to solve a legitimate pain, amazing things can happen. In our case, by building a marketplace as easy as Craigslist with a featureset that rivaled eBay’s, we had what seemed like every seller online telling us how relieved & empowered they felt to have discovered us. Then they told their friends via blogs and forums. It was heady stuff. Our item count rocketed in a way that we were told had never been seen amongst online marketplaces not named “eBay,” as we shot from 0 listings to one million within our first few months.

Pat: But with great success comes great responsibility, right? Tell me about how you dealt with managing the community you suddenly found at your (virtual) doorstep.

Bill: That was a real challenge, but something that was really important to us to get right. I have frequently said that, being a Northwest company, one of my foremost goals is for us to live up to the legacy of the great customer-centric companies that have come from this region, like Costco, Amazon, and Nordstrom. Customer service is a company’s best opportunity to get to know its users and develop meaningful trust. As we started to appreciate the amount of time and effort required to keep thousands of users satisfied, we knew it was going to become a full-time job; that’s when Mark was anointed the full-time Community Overseer.

Pat: Tell me about your relationship with Mark and what he has been to Bonanzle.

Bill: The pairing of Mark and me is the sort of thing you read about in “How to Build a Business” books. Our personalities are in many ways diametrically opposed, but in a perfectly compatible way for a business that requires multiple unrelated skillsets. Mark is patient, I am impatient. Mark is happy dealing with people for hours on end, I am happy dealing with computers for hours on end. Mark is content to be on an amazing ride, I am never, ever satisfied with what we have and am constantly looking forward.

Fortunately for us, there are also a few qualities that we have in common. We are both OK with constant chaos around us, which is assured in any startup (though I’d say that goes doubly for any community-driven startup). We both enjoy what we do, so we don’t mind 10-12 hour days, 6 days per week (right, Mark? :)). And I think we are both generally pretty good at putting ourselves in other people’s shoes, painful though that sometimes can be when we can’t get something exactly the way we want it due to resource constraints.

Pat: I hear “community” as a common theme amongst your answers. Tell me about the makeup of the Bonanzle community and what their role has been in the building of Bonanzle.

Bill: Well I think it’s pretty obvious within a click or two of visiting the site that Bonanzle is the community. Almost all of the key features that differentiate us from other marketplaces revolve around letting the many talents of our community shine through. It starts from the home page, which comprises catchy groups of items curated by our numerous users with an eye for art/uniformity. Real time interaction, another Bonanzle cornerstone, relies on our sellers’ communication talents. And the traffic growth of the site has been largely driven by the efforts of our sellers to find innovative ways to get people engaged with Bonanzle: from writing to editors about us (it was actually the efforts of a single user that got us a feature story in Business Week), to organizing numerous on-site sales (Christmas in July, etc.) that drive buyers from far and wide.

I think that, from a management standpoint, it’s our responsibility to strive to keep out of the way of our sellers. In so doing, the embarrassment of riches we have in member talent can continue to build Bonanzle in ways that we’d have never even considered.

Pat: If you’re just joining us, I’m talking with Bill Harding, Founder of Bonanzle.com, about the experience of his first year of running Bonanzle. Please stay tuned — when we return, I’m going to talk to Bill about what he sees in today’s Bonanzle, and what he predicts for the future of Bonanzle. With any luck, I’ll even get him to answer the eternal question of which is tastier between pizza, nachos, and pie. But for now, a word from our sponsor:

Nice find, Arlene!