A more elegant (IMHO) implementation of the q-list has the second pointer point not at the tail, but rather at either the last next pointer or at the head pointer. This makes your appending logic simpler by not having to deal with the special case of an empty list.
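A minimal C sketch of what I mean (names are mine, not the author's): the second field holds the address of the last next pointer, or of the head pointer when the list is empty, so append is branch-free:

```c
#include <stdlib.h>

typedef struct node { int value; struct node *next; } node;

/* "tail" points at the last next field, or at head itself when empty. */
typedef struct qlist { node *head; node **tail; } qlist;

void qlist_init(qlist *q) {
    q->head = NULL;
    q->tail = &q->head;      /* empty list: tail points at the head pointer */
}

void qlist_append(qlist *q, node *n) {
    n->next = NULL;
    *q->tail = n;            /* same store whether the list is empty or not */
    q->tail = &n->next;
}
```

Note there is no `if (q->head == NULL)` branch anywhere in the append path.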
A similar trick can be used with hash tables where the buckets are s-lists. You still get O(1) deletion within a bucket if your hash table returns the address of the pointer that refers to the element, rather than the element itself; and you save one whole pointer per element by doing so.
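Roughly like this (a sketch, with made-up names): lookup returns the address of the pointer that refers to the matching element, and deletion is then a single store, with no per-element prev pointer:

```c
#include <stddef.h>

typedef struct entry { int key; struct entry *next; } entry;

/* Return the address of the pointer referring to the match, or NULL. */
entry **bucket_find(entry **bucket, int key) {
    for (entry **pp = bucket; *pp; pp = &(*pp)->next)
        if ((*pp)->key == key)
            return pp;
    return NULL;
}

/* O(1) unlink given the referring pointer's address; works the same
 * whether pp is the bucket head or some element's next field. */
void bucket_delete(entry **pp) {
    *pp = (*pp)->next;
}
```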
Using the author's terminology: the last two, the h-list and the d-list, ought to have O(1) insertion anywhere too. Since he calls removal at any position O(1), he must be assuming you already hold a pointer to the desired item; under the same assumption, insertion anywhere is O(1) in all four list types.
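To make that concrete (a sketch, not the author's code): given a pointer to a node, inserting after it is two pointer writes regardless of list type:

```c
#include <stddef.h>

typedef struct node { int value; struct node *next; } node;

/* Insert n immediately after pos: O(1), provided you already hold pos. */
void insert_after(node *pos, node *n) {
    n->next = pos->next;
    pos->next = n;
}
```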
Yeah, I was confused by this. Why is deletion not still O(n)? The back pointer doesn't gain you anything, because you can just drag along a temporary "last next" pointer while scanning for the element to delete, which you have to do either way. So the s-list is asymptotically no slower at deletion: O(n) to find the element, O(1) to unlink it.
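The "last next" pointer is just a `node **` cursor carried through the scan; sketched (my names):

```c
#include <stddef.h>

typedef struct node { int value; struct node *next; } node;

/* Delete-by-value on a plain s-list. The scan is O(n); the unlink at
 * the end is O(1) and needs no back pointer. Returns 1 if removed. */
int slist_delete(node **head, int value) {
    for (node **pp = head; *pp; pp = &(*pp)->next) {
        if ((*pp)->value == value) {
            *pp = (*pp)->next;   /* the "last next" pointer is pp itself */
            return 1;
        }
    }
    return 0;
}
```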
My post was in response to the thread above -- the bottom line is that for all of the linked lists you mention, insertion/deletion is O(n) in the general case and O(1) at the head. The way it's currently written is confusing and wrong.
Should also consider the cases with zero head pointers.
For singly linked lists, a pointer to any node gives you the downstream part of the list, but not the upstream. (Can't think of any non-contrived, simple examples.)
For doubly linked lists, each node is effectively a head/tail. The list might be circular or might not. (Lots of kernel data structures where list membership is not the primary index.)
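The doubly-linked case can be sketched in the style of the kernel's intrusive `list_head` (this is a simplified version, not the actual kernel code): every node, including the list anchor, is equally a head/tail, the list is circular, and removal needs only the node itself:

```c
typedef struct list_head { struct list_head *prev, *next; } list_head;

/* An empty list is a node pointing at itself in both directions. */
void list_init(list_head *h) { h->prev = h->next = h; }

/* Insert n right after "after"; any node can serve as the anchor. */
void list_add(list_head *n, list_head *after) {
    n->next = after->next;
    n->prev = after;
    after->next->prev = n;
    after->next = n;
}

/* O(1) removal given just the node; no separate head needed. */
void list_del(list_head *n) {
    n->prev->next = n->next;
    n->next->prev = n->prev;
    n->prev = n->next = n;   /* leave n safely self-linked */
}
```

The node carries no payload; in the kernel style it is embedded in a larger struct and you get the container back with pointer arithmetic.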
h-list is only advantageous for hash tables if the links are in-element. Generalized hash tables would probably use s-list. But I like the terminology.