We ran into a problem today where a single row in one of our tables seemed to get stuck in a state where any query that tried to update it would hit our lock wait timeout of 50 seconds. I googled and googled for a straightforward way to release the lock, but the closest things I could find were a sprawling MySQL page on table locks that lacked any specific solutions, and a Stack Overflow post that suggested fixing a similar problem by dropping the entire table and re-importing it (uh, no thanks).
After some trial and error, I came up with two viable ways to track down and fix this problem.
The first way is to actually look at MySQL's InnoDB status by logging into your MySQL server and running
show engine innodb status\G
This will list any locks MySQL knows about and what it's trying to do about them. In our case, the locked row did not show up in the InnoDB status output, so instead I executed
show processlist;
This listed everything that currently had a connection open to MySQL, and how long each connection had been open. In Rails it is a bit hard to spot which connection might be the one to blame, since every Rails instance leaves its connection open whether it is waiting for a transaction to complete or doing nothing at all. In today's case, I happened to have a good hunch about which of the 50 connections might be the problem one (even though it was listed as being in the "sleep" state…strangely), so I killed it by restarting the server, and all was well. However, I could also have killed it directly using:
kill [process id];
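If you're on a version that ships the InnoDB information_schema tables (the InnoDB plugin on 5.1, or 5.5 and later), you can skip the guesswork entirely and ask InnoDB which connection is blocking whom. A sketch, assuming those tables are available (on MySQL 8.0 the equivalent data moved to performance_schema):

```sql
-- Find waiting transactions and the connections blocking them.
-- blocking_thread is the id to hand to KILL.
SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;
```

Note that if the blocking transaction is idle (like our mysterious "sleep" connection), its blocking_query will be NULL, but the thread id still tells you exactly what to kill.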
If you don't happen to know which of your processes holds the lock, the only recourse I know of is to restart your app servers and see which MySQL processes remain open after the servers have reset their connections. If a process stays connected after its parent has gone, it is your enemy, and it must be put down. Hope this methodology helps someone and/or my future self.
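One last aside: that 50 second wait is just innodb_lock_wait_timeout, which defaults to 50. If you'd rather have stuck updates fail fast than hang for nearly a minute, you can check it, and on versions where the variable is dynamic, lower it per session; a sketch:

```sql
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
-- Fail after 5 seconds for this session instead of the default 50
SET SESSION innodb_lock_wait_timeout = 5;
```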