Linux touchpad mini-update

A few tiny notes from the land of Linux touchpad improvements:

  1. If you’d like to ensure you’re notified when I have substantive updates on the touchpad driver, I’ve set up a subscription list here: https://tinyletter.com/inedibill. It will be used only for sending updates about the touchpad driver, so expect no more than one email every couple of months.
  2. Matt Mayfield has been doing hero’s work ensuring that the Linux touchpad driver can successfully discard thumb input. His commits to this end are in this branch. Using the libinput debug tool (see the sketch just after this list), it’s apparent that Matt’s branch is excellent at letting a second finger rest on the touchpad while still treating the active finger’s input as cursor movement, whereas the latest libinput release tends to interpret any cursor movement with an extra finger down as scrolling. We’re working to make an installable version of Matt’s changes available for download soon.
  3. I’ve begun the process of attempting to enumerate the acceleration differences between Linux and Mac here. If anyone else wants to take a stab at concisely describing how touchpad acceleration differs between macOS and Linux, feel free to drop me a line at bill -at- staticobject.com and I’ll aim to incorporate your findings.
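
A quick way to see the thumb-handling difference for yourself is libinput’s own debug tool, which prints every event the driver emits. A rough sketch (the event device path is only an example; find your touchpad’s node with the list command first):

# Find your touchpad's /dev/input/eventN node
sudo libinput list-devices

# Watch events live: rest a thumb on the pad and move another finger, then see
# whether you still get pointer motion events or everything turns into scroll events
sudo libinput debug-events --device /dev/input/event7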

Hoping to have a more robust update on this in the next couple of months, or whenever an installable driver becomes available (if sooner).

What happens if two Google Adwords auto targets have the same bid?

The title of this blog post came up recently as we continue to optimize our Google Shopping Adwords bidding tool, and I wanted to share what I learned with future web searchers. Our situation is that we use Google Shopping PLAs to drive traffic to Bonanza, and our bid amounts are specified via Adwords labels that we apply to each item. Most items have multiple Adwords labels, where each Adwords label corresponds to an Adwords auto target. My question to Google was: when one product has multiple Adwords labels, and those labels correspond to auto targets that have the same bid, which auto target gets credit for the impression (and the subsequent click/conversion)?

The answer, straight from Google, makes a lot of sense:

So the one that enters the auction will be attributed with the clicks and impressions and this is dependent on the performance history associated with that auto target. The one with the stronger performance history – clicks and CTR attributed to it, will enter the auction and hence get the impressions.

Additionally if they are from different ad groups – the past ad group performance history and ad group level CTR would also matter.

Thus the answer: whichever auto target performs best has the best chance of being shown in Google’s Adwords “auction” (the name they give to the process of choosing which ads or Google Shopping products to show).

Rails 3 Performance: Abysmal to Good to Great

So you’ve upgraded from Rails 2.x to Rails 3 and you’re not happy with the response times you’re seeing? Or you are considering the upgrade, but don’t want to close your eyes and step into the great unknown without having some idea what to expect from a performance standpoint? Well, here’s what.

Performance in Rails 2.2

Prior to upgrading, this action (one of the most commonly called in our application) averaged 225-250 ms.

Rails 3.0.4 Day 1

On our first day after upgrading, we found the same unmodified action averaging 480 ms. Not cool.

Thus began the investigative project. Since New Relic was not working with Rails 3 views this week (seriously; that’s a whole different story), I (sigh) headed into the production logs, which I was happy to discover actually broke out execution times by partial. But there seemed to be an annoyingly inconsistent blip every time we called this action, where one of the partials (which varied from call to call) would have something like 250-300 ms allocated to it.

I casually mentioned this annoyance to Mr. Awesome (aka Jordan), who speculated that it could have something to do with garbage collection. I’d heard of Ruby GC issues from time to time in the past, but never paid them much mind, assuming that since we were already using Ruby Enterprise Edition, the defaults would be good enough. But given my lack of other options, I decided to start investigating. That’s when I discovered this document from those otherworldly documenters at Phusion, describing the memory settings that “Twitter uses” (quoted because it is probably years old) to run their app. Running low on alternatives, I gave it a shot. Here were our results:

[Graph: Rails 3.0.4 response times with the “Twitter” memory settings applied]

New average time: 280 ms. Now that is change we can believe in! Compared with the default REE settings, we were running more than 40% faster, practically back to our 2.2 levels. (Bored of reading this post and want to go implement these changes immediately yourself? This is a great blog describing how. Thanks, random internet dude with broken commenting system!)
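
For reference, with REE and Passenger the change boils down to exporting a handful of GC environment variables before the Ruby that runs your app starts, typically via a small wrapper script that PassengerRuby points at. A minimal sketch, assuming a typical REE install path, with values along the lines of what the Phusion document lists (double-check the doc itself rather than trusting my memory):

#!/bin/bash
# Hypothetical REE wrapper script; point PassengerRuby at this file so the
# GC settings are exported before the app server boots.
export RUBY_HEAP_MIN_SLOTS=500000
export RUBY_HEAP_SLOTS_INCREMENT=250000
export RUBY_HEAP_SLOTS_GROWTH_FACTOR=1
export RUBY_GC_MALLOC_LIMIT=50000000
exec "/opt/ruby-enterprise/bin/ruby" "$@"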

That was good. But it inspired Jordan to start tracking our garbage collection time directly within New Relic (I don’t have a link to help ya there, but Google it; I assure you it’s possible, and awesome), and we discovered that even with these changes, we were still spending a good 25-30% of our request time collecting garbage (an improvement over the 50-60% from when we initially launched, but still). I wondered: could we get rid of in-request GC altogether by pushing our garbage collection to happen between requests rather than during them?

Because every director will tell you that audiences love a good cliffhanger, I’ll leave that question for the reader to consider. Hint: after exploring the possibility, our action is now faster in Rails 3 than it had been in Rails 2. It’s all about the garbage, baby.

Sphinx 0.9.9 Review, A Cautionary Tale

After my previous raves about Sphinx in general and Thinking Sphinx in particular, I was excited to get my hands on the new Sphinx 0.9.9 release that was finally made available at the beginning of December via the Sphinx Search site.

Given that our Sphinx usage is what I think would fall under the “advanced cases” heading, I expected probably a day or two of upgrade headaches before we’d be back on track. Worth it, said I, for the potential to get a working index merge, which could set the stage for reindexing more often than once every four hours (our current full index takes about 3 hours to build, plus time to transfer files between the machine that builds the Sphinx index and the machines running the search daemon).
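
For context, the index merge I was after is driven entirely from the command line: build a small delta index, then fold it into the big main index without a full rebuild. A rough sketch, with hypothetical index names standing in for whatever your sphinx.conf actually defines:

# Rebuild only the small delta index, then merge it into the main index;
# --rotate tells the running searchd to pick up the new index files.
indexer --rotate items_delta
indexer --merge items_main items_delta --rotate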

Alas, our upgrade did not go according to plan.

This Monkey Patches Going to Heaven

Given how prompt Pat Allen (creator of Thinking Sphinx) has been in addressing and fixing bugs in the past, I don’t doubt that many of our upgrade headaches from the TS side will be fixed soon (if not already, since I emailed him most of our issues). That said, we required about five monkey patches to get the most recent version of TS with 0.9.9 working the same as our previous TS with 0.9.8 did. The patches ranged from patching the total_entries method (which throws an exception if the search can’t be completed), to real-time updates not working (via client#update), to searches that passed a string to TS where it expected an int throwing an exception.

This does not include “expected” differences, such as the fact that search is now lazily evaluated, so if you previously wrapped your search statements in a begin-rescue block to catch possible errors, your paradigm needs to shift.

It also appears that the after_commit plugin bundled with TS 0.9.8 has been modified such that it is no longer available to models in our project by default. I never figured out a fix for that bug, because by the time I noticed it, I had also become aware of an even bigger 0.9.9 detriment: overall performance. Reviewing our New Relic stats since we updated to 0.9.9, we have found an across-the-board slowdown of about 50% in our Sphinx calls. I parsed the Sphinx logs to try to ascertain whether the slowness originated in Sphinx or in TS, and Sphinx appears to be the main culprit.
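
(The numbers below came from nothing fancier than pulling the elapsed times out of searchd’s query log. A rough sketch of the idea, assuming the stock query.log format where each line records the elapsed time as “N.NNN sec”; the log path is whatever your sphinx.conf specifies:)

# Average every "0.123 sec"-style elapsed time found in the Sphinx query log
grep -o '[0-9]*\.[0-9]* sec' /var/log/sphinx/query.log |
  awk '{ total += $1; count++ } END { printf "%d queries, avg %.4f sec\n", count, total / count }'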

Performance

TS 0.9.8

Considering 290,227 searches.
Average time: 0.0319 s
Median time: 0.005 s
Top 10% average: 0.194 s across 29,022 queries

TS 0.9.9

Considering 843,569 searches.
Average time: 0.0430 s
Median time: 0.006 s
Top 10% average: 0.287 s across 84,356 queries

Many of our queries take 0.00 or 0.01 seconds, so the median doesn’t look much different between the two, but the average time (which is what New Relic picks up on) is 35% slower in Sphinx alone, and about 50% slower once all is said and done. An action on our site that does a Sphinx search for similar items (and nothing else) consistently averaged 200 ms for weeks before our upgrade, and has averaged almost exactly 300 ms in the week since the upgrade.

Conclusion: Proceed with Caution

It would be nice if I had more time to debug why this slowness has come about, but the bottom line for us is that, after spending about 3 days patching TS to get it working in the first place, and with the “after_commit” anomaly still on our plate (not to mention overall memory usage increasing by about 20%), I have ultimately decided to return to TS 0.9.8 until such time as a Sphinx release is made available that speaks directly to its performance relative to previous versions. I think the Sphinx team is doing a great job, but amidst juggling the numerous new features, it seems that performance testing relative to 0.9.8 didn’t make the final cut?

Or there could always be some terrible misconfiguration on our part. But given that we changed our configuration as little as possible in moving from 0.9.8 to 0.9.9, if we are screwing up, I would say it is for perfectly understandable reasons.

A three-day window of a pure search action in our app. The first two days, on TS 0.9.9, average about 300 ms; yesterday, after reverting back to 0.9.8, about 200 ms.

Javascript play sound (wav, MP3, etc) in one line

Alright, enough with the 10-page tutorials from 2006 describing detailed browser-specific implementation tricks to get sound playing. Google needs a refresh. In my experience in the 2010s, here’s all you need to do to make a simple sound play in your web page:

Step one: add an empty element to your page to hold the sound:

<span id="sound_element"></span>

Step two: play a sound by setting that element’s innerHTML to an embed tag pointing at your sound file:

document.getElementById("sound_element").innerHTML =
  "<embed src='/path/to/sound.wav' hidden='true' autostart='true' loop='false' />";

Or, if you’re using jQuery:

$('#sound_element').html(
  "<embed src='/path/to/sound.wav' hidden='true' autostart='true' loop='false' />");

I’ve tried it in Firefox, Chrome and IE and it works like a charm for me. I’d imagine that your user has to have some basic sound software installed on their computer, but at this point, I’d reckon 99% of users do.

Feel free to add to the comments if you find any embellishments necessary to get this working.

Can’t open file in Rubymine?

Just wasted an hour fighting a bug in Rubymine 1.1.1 that wouldn’t let me open files. Every once in a while, it just decides that it doesn’t want to open files of a certain type (in my case, all .html.erb and .rhtml files), regardless of how you try to open them (clicking in the file browser, choosing File -> Open, etc).

And now, ladies and gentlemen, without further ado, the fix:

  1. File -> Invalidate caches
  2. Close and re-open Rubymine

Sigh.

Ubuntu/Nautilus Script to Upload to Application Servers

Here’s a fun one. If you’re running Nautilus (which you probably are, if you’re running vanilla Ubuntu), this will allow you to copy one or more assets to one or more app servers with a right click.

First, create the script file:

vi ~/.gnome2/nautilus-scripts/"Upload to app servers"

Then paste:


#!/bin/bash
file_count=0

# Directory Nautilus is currently browsing, with the file:// prefix stripped
dir="$(echo "$NAUTILUS_SCRIPT_CURRENT_URI" | sed 's|^file://||')"

# Nautilus passes each selected filename as a positional argument
for arg
do
  full_file_or_dir="$dir/$arg"
  # Path relative to the local root, so the file lands in the same spot remotely
  relative_file_or_dir="$(echo "$full_file_or_dir" | sed 's|^/path/to/local/root/directory/||')"
  rsync "$full_file_or_dir" "loginuser@appserver.com:/path/to/remote/root/directory/$relative_file_or_dir"
  # Add rsync lines for other app servers here if you want...
  file_count=$((file_count+1))
done

zenity --info --text "Uploaded $file_count files to app servers"

After saving the script above and making it executable (chmod +x ~/.gnome2/nautilus-scripts/"Upload to app servers"), when you right-click a file in Nautilus you’ll see a “Scripts” menu option which, when expanded, contains an entry called “Upload to app servers”.

Upon clicking that script, the file’s path relative to /path/to/local/root/directory will be copied to the corresponding location under /path/to/remote/root/directory on your appserver.com server. You can select multiple files and upload them all at once. After uploading, a zenity message will pop up telling you how many files were successfully copied (requires the zenity package: apt-get install zenity).

So, for example, if you right-clicked the file /path/to/local/root/directory/myfile.txt and picked “Upload to app servers,” the file would automagically be copied to appserver.com as /path/to/remote/root/directory/myfile.txt.

This script has proved very helpful for uploading post-deploy fixes to multiple app servers without redeploying the whole lot of them.