oh, CPAN, thou hast failed me
Apr. 3rd, 2009 05:11 pm

Less cryptically, why isn't there a perl-usable (pure perl or just something with perl wrappers; don't care) distributed locking system that's worth a damn? What's so hard to understand about "locks should only EVER be given out to one client, and clients need to know instantly if their locks become unreliable" that causes everyone to get it wrong?
IPC::Lock::Memcache: memcache is a caching system, and may drop your locks on the floor without telling you; a netsplit will also hose you; and dead clients don't auto-unlock = useless.
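To make the failure mode concrete, here's the memcache-as-lock pattern in miniature -- a sketch of the general idea, not IPC::Lock::Memcache's actual internals; the key name and 30-second expiry are invented:

    use strict;
    use warnings;
    use Cache::Memcached;

    my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

    # add() succeeds only if the key doesn't already exist, which looks
    # like a perfectly good mutex...
    if ($memd->add('lock:some-resource', $$, 30)) {
        # ...except memcached is a *cache*: it can evict this key under
        # memory pressure at any moment, silently handing the "lock" to
        # the next caller, and a netsplit leaves two clients each
        # convinced they hold it. If we die here, nothing unlocks until
        # the 30s expiry -- and if the work runs *longer* than 30s,
        # someone else grabs the lock while we're still going.
        do_critical_work();
        $memd->delete('lock:some-resource');
    }

    sub do_critical_work { }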
IPC::Locker: single-server (not ideal, but fine -- although it lets clients specify a group of machines that DON'T TALK TO ONE ANOTHER, which is worse than useless unless you -want- a high likelihood of false locks). It handles dead clients well, via timeouts and checking for pid existence (though if the pid server isn't working, it can presumably time out long-running locks while they're still doing work = bad). But locks don't hold connections open. Which means if, say, your lock server reboots, instead of what -should- happen (all your processes holding open locks get signals and can decide this means they have to re-lock or dump all their work on the ground, as they no longer, well, have an exclusive lock), they carry on happily, having lost your lock. Checking for the lock before every atomic stage of a critical section? Not good.
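What I actually want looks roughly like this -- a sketch under made-up assumptions (the daemon at lockd.example.com:7777 and the one-line LOCK/OK protocol are invented, and this is not IPC::Locker's real wire format), just to show a lock that lives and dies with a TCP connection, with the holder getting a signal the instant it dies:

    use strict;
    use warnings;
    use IO::Socket::INET;

    my $sock = IO::Socket::INET->new(
        PeerAddr => 'lockd.example.com',   # hypothetical lock daemon
        PeerPort => 7777,
        Proto    => 'tcp',
    ) or die "can't reach lock server: $!";

    print $sock "LOCK my-resource\n";
    my $reply = <$sock>;
    die "lock refused\n" unless defined $reply && $reply =~ /^OK/;

    # Watchdog child: blocks reading the socket. The server sends
    # nothing after "OK", so sysread can only ever return when the
    # connection dies (EOF on a server reboot, error on a netsplit) --
    # at which point the parent gets signalled instead of blundering on.
    my $parent = $$;
    defined(my $pid = fork) or die "fork: $!";
    if ($pid == 0) {
        sysread $sock, my $buf, 1;   # blocks until the connection dies
        kill 'USR1', $parent;
        exit 0;
    }

    $SIG{USR1} = sub { die "lost lock server connection; lock gone\n" };

    do_critical_work();   # safe: lock loss interrupts us mid-work

    kill 'TERM', $pid;
    waitpid $pid, 0;
    close $sock;          # dropping the connection releases the lock

    sub do_critical_work { }

The watchdog child is the cheap trick here: it sits in a read that can only return when the connection drops, so the parent never has to poll the lock before each atomic stage -- it just gets interrupted the moment the lock stops being exclusive.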