It's more efficient than garbage collection if you're working in a language where you can operate in an unmanaged mode (barring FFI). I do expect this gap to start closing, based on some literature friends have shared with me...
With most managed languages I've played with, I haven't found a compelling reason to use RCU-like mechanisms (except if it involves some FFI boundary or an embedded language in an unmanaged system).
What you're doing sounds like a seqlock pattern (see ck_sequence) or a generation counter, not RCU. Fundamental to all RCU implementations is "grace period detection" (detecting the point at which it is safe to free memory) occurring without any read-side delays. With RCU, a reader never has to retry due to RCU itself.
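To make the distinction concrete, here's a minimal sketch of the read side of a generation counter / seqlock (in Java, using StampedLock's optimistic read purely as a stand-in for something like ck_sequence): the reader may have to fall back and retry when a writer intervenes, which is exactly the read-side cost an RCU reader never pays.

    import java.util.concurrent.locks.StampedLock;

    // Illustrative only: a seqlock-style optimistic read. The reader must
    // validate the stamp and possibly retry under a real lock -- the
    // read-side delay that RCU's grace-period detection avoids entirely.
    class Point {
        private final StampedLock lock = new StampedLock();
        private double x, y;

        void move(double dx, double dy) {
            long stamp = lock.writeLock();         // writer "bumps the generation"
            try {
                x += dx;
                y += dy;
            } finally {
                lock.unlockWrite(stamp);
            }
        }

        double distanceFromOrigin() {
            long stamp = lock.tryOptimisticRead(); // read the generation counter
            double curX = x, curY = y;             // speculative, unsynchronized reads
            if (!lock.validate(stamp)) {           // a writer intervened: retry under the lock
                stamp = lock.readLock();
                try {
                    curX = x;
                    curY = y;
                } finally {
                    lock.unlockRead(stamp);
                }
            }
            return Math.hypot(curX, curY);
        }
    }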
RCU is really special in an unmanaged language, but I'm not sure it's that interesting for developers working in a managed language (unless they happen to be solving safe memory reclamation).
The optimistic locking I mentioned is what we use when RCU is inappropriate (due to multiple pointers).
But just to get the definitions straight: what you call RCU is a copy-on-write combined with a GC algorithm. What would you call a copy-on-write (with no synchronization on the reader's end whatsoever) that uses a general-purpose GC?
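Concretely, I mean something like this (an illustrative Java sketch, names are mine): readers do a single volatile load of an immutable snapshot and nothing else; writers copy, modify, and republish; the old snapshot is reclaimed by the ordinary collector, with no grace periods and no hazard list.

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    // Sketch of copy-on-write over a general-purpose GC. Readers perform a
    // single volatile load and never synchronize or retry; the collector
    // reclaims old snapshots once no thread can reach them.
    class CowMap<K, V> {
        private volatile Map<K, V> snapshot = Collections.emptyMap();

        V get(K key) {
            return snapshot.get(key);                      // reader: one volatile load
        }

        synchronized void put(K key, V value) {
            Map<K, V> copy = new HashMap<>(snapshot);      // copy...
            copy.put(key, value);                          // ...modify...
            snapshot = Collections.unmodifiableMap(copy);  // ...publish; GC frees the old map eventually
        }
    }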
Also, I don't understand how hazard pointer GC can be more efficient than a general-purpose GC, given that modern GCs work on the same principle, only in a much more refined way (e.g. HotSpot creates a stack map recording the location of each reference on every thread's stack instead of a hazard pointer list). Of course, compacting collectors (usually young-generation) don't do any reclamation work at all; instead, they only do work to copy the live objects, so it all comes down to whether you have more live objects or dead objects. But in any case, simple hazard pointer mechanisms seem like a crude, rather old, non-compacting GC technique. I don't see how they can beat a good general-purpose GC.
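For reference, here is roughly the bookkeeping a simple single-hazard-pointer-per-thread scheme performs (an illustrative Java sketch with made-up names; a real implementation lives in an unmanaged language, and a tracing GC already gets the equivalent information from its stack maps).

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicReference;
    import java.util.concurrent.atomic.AtomicReferenceArray;

    // Sketch of a single hazard pointer per thread. A reader announces the
    // node it is about to dereference; the reclaimer scans the announcements
    // before reusing retired nodes. A tracing GC does this bookkeeping for
    // every reference automatically via its stack maps.
    class HazardPointers<T> {
        private static final int MAX_THREADS = 64;         // assumed bound, illustration only
        private final AtomicReferenceArray<T> hazards =
                new AtomicReferenceArray<>(MAX_THREADS);
        private final List<T> retired = new ArrayList<>(); // single reclaimer thread assumed

        // Reader: publish the candidate node, then re-check the source pointer.
        T protect(int threadId, AtomicReference<T> src) {
            T node;
            do {
                node = src.get();
                hazards.set(threadId, node);                // announce intent to dereference
            } while (src.get() != node);                    // retry if it changed before the announcement stuck
            return node;
        }

        void release(int threadId) {
            hazards.set(threadId, null);                    // done reading
        }

        // Reclaimer: a retired node may be reused only once no thread announces it.
        void retire(T node) {
            retired.add(node);
        }

        List<T> collectReusable() {
            List<T> reusable = new ArrayList<>();
            List<T> stillHeld = new ArrayList<>();
            for (T node : retired) {
                if (isAnnounced(node)) {
                    stillHeld.add(node);                    // some reader may still dereference it
                } else {
                    reusable.add(node);                     // safe to reuse
                }
            }
            retired.clear();
            retired.addAll(stillHeld);
            return reusable;
        }

        private boolean isAnnounced(T node) {
            for (int i = 0; i < MAX_THREADS; i++) {
                if (hazards.get(i) == node) {
                    return true;
                }
            }
            return false;
        }
    }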