We're using eventuate[0], an event-sourcing framework with deep support for cooperation via shared logs. It's built on the actor framework Akka; Akka itself has akka-persistence[1], which is similar but different[2]. All of these are usable right now.
Though it doesn't cover either implementation (he does something similar on top of Samza), I like this article[3] on the topic: turning the database inside out really is what we're doing.
We have built an implementation of CORFU [1] (the protocol Tango is based on) that runs on Ceph/RADOS, called ZLog [0]. We have a very simple prototype of Tango running on ZLog. ZLog could run on other storage systems like Kafka, but we have only focused on Ceph/RADOS as the underlying storage.
One of the more eye-opening aspects of the paper is just how little code it took them to duplicate the ZooKeeper API atop Tango. Granted, there are caveats about a research project vs. an industry-ready codebase, but I still read it as strong evidence that their approach is a good foundational abstraction.
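The reason so little code is needed is the core Tango pattern: keep the data structure as an in-memory view materialized from a shared log, so replication and ordering come for free from the log. Here's a toy sketch of that idea (the `SharedLog` class and its `append`/`read`/`tail` methods are my own illustration, not ZLog's or Tango's actual API):

```python
import json

class SharedLog:
    """Toy in-memory stand-in for a CORFU-style shared log (hypothetical API)."""
    def __init__(self):
        self._entries = []

    def append(self, entry):
        self._entries.append(entry)
        return len(self._entries) - 1  # position assigned in the total order

    def read(self, pos):
        return self._entries[pos]

    def tail(self):
        return len(self._entries)

class ReplicatedMap:
    """A key-value map materialized from the log, in the spirit of Tango's
    in-memory views: every replica that plays the same log reaches the
    same state, because the log fixes a single total order of updates."""
    def __init__(self, log):
        self.log = log
        self.state = {}
        self.applied = 0  # how far into the log this replica has played

    def put(self, key, value):
        # Mutations don't touch local state directly; they go through the log.
        self.log.append(json.dumps({"op": "put", "k": key, "v": value}))

    def get(self, key):
        self.sync()  # catch up with the log before reading
        return self.state.get(key)

    def sync(self):
        while self.applied < self.log.tail():
            e = json.loads(self.log.read(self.applied))
            if e["op"] == "put":
                self.state[e["k"]] = e["v"]
            self.applied += 1

log = SharedLog()
a, b = ReplicatedMap(log), ReplicatedMap(log)
a.put("x", 1)
print(b.get("x"))  # 1 -- replica b sees a's write by replaying the shared log
```

A ZooKeeper-style API is "just" a richer version of `ReplicatedMap` (a tree of nodes with versions and watches) over the same log, which is why the port was so small.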
Why do you need a shared log at all? Remember the CAP theorem: there's no need for these bottlenecks. If you want to record that A happened after B, just have A store a (hash of) B.
What you're describing is a type of logical clock, though one that orders just two events rather than defining a partial order over all of them. Obviously, if you do that with every event, you end up with a logical clock. But the hash of the previous event is not a good logical clock, because you can't define higher-level operations over the values, such as "is this event newer than that other event?"
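To make the contrast concrete: with a Lamport clock the timestamps are integers, so "newer than" is a plain comparison, whereas with hash links you'd have to walk the chain. A minimal sketch (standard Lamport-clock rules, process names are made up):

```python
class LamportClock:
    """Minimal Lamport clock: a counter that ticks on local events and
    jumps forward past any timestamp received in a message."""
    def __init__(self):
        self.t = 0

    def tick(self):
        """Local event: advance the clock and stamp the event."""
        self.t += 1
        return self.t

    def recv(self, other_t):
        """Message receipt: merge the sender's timestamp, then tick."""
        self.t = max(self.t, other_t) + 1
        return self.t

p, q = LamportClock(), LamportClock()
tb = p.tick()      # event B on process p
tq = q.recv(tb)    # B's timestamp travels with a message to process q
ta = q.tick()      # event A on process q, after hearing about B
print(tb < ta)     # True: a single integer comparison orders B before A
```

With hash links, answering the same question means chasing `prev` pointers until one event turns up in the other's history; comparable timestamp values are exactly what the hash doesn't give you.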