You speak prescriptively of a canonical form, but I do not know what canon you are referring to. What do you define as a "page"? Are you arbitrarily drawing the line at an HTTP GET request between the article and the edit button? If so, Google Docs and Etherpad would fail to meet your definition of a page/document. Right now we see a declining rate of collaboration on Wikipedia, so it's natural that the evolution of the wiki would do more to encourage editing/forking: http://mashable.com/2013/01/08/wikipedia-losing-editors/
> Google Docs and Etherpad would fail to meet your definition of a page/document.
Indeed. They don't feel like pages or documents (Google Docs is explicitly an editor, with an export step to produce a webpage), and I would not want a wiki (or wiki-like project) to be maintained in them. They serve a different niche.
> Right now we see a declining rate of collaboration on Wikipedia
For well-known reasons of deletionism and unnecessary barriers to new editors (see http://www.gwern.net/In%20Defense%20Of%20Inclusionism ). The way to fix Wikipedia isn't to keep changing things; it's to go back to the policies that worked in Wikipedia's golden age.
Speaking for myself, it has nothing to do with philosophical principles like "keeping the web as a web of hyperlinked documents" or using HTML and HTTP as intended. Rather, it's about using the right tool for the problem. Google Docs and Etherpad are word processors first and document collections second. A wiki is a document collection with the ability to edit. For a document collection, it's important that you can simply download, copy, and crawl the documents. Of course this is possible with a SPA, but then you're solving problems for your tool instead of the other way around.
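To make "simply crawl" concrete, here's a rough sketch of how little it takes when every document is plain HTML behind a GET. The base URL is a made-up placeholder, and a real crawler would of course add politeness delays and error handling:

    # Crawl a plain-HTML wiki: every document is one GET away.
    # "https://example.org/wiki/" is a placeholder base URL.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class Links(HTMLParser):
        def __init__(self):
            super().__init__()
            self.hrefs = []
        def handle_starttag(self, tag, attrs):
            href = dict(attrs).get("href")
            if tag == "a" and href:
                self.hrefs.append(href)

    def crawl(base, limit=50):
        seen, queue = set(), [base]
        while queue and len(seen) < limit:
            url = queue.pop()
            if url in seen:
                continue
            seen.add(url)
            html = urlopen(url).read().decode("utf-8", "replace")
            # The response body *is* the document; archive or index it here.
            parser = Links()
            parser.feed(html)
            queue.extend(u for u in (urljoin(url, h) for h in parser.hrefs)
                         if u.startswith(base))
        return seen

    crawl("https://example.org/wiki/")

With a SPA, none of this works until you replicate the app's JavaScript; the documents only exist after client-side rendering.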
Wikipedia is a document collection that happens to be implemented and edited using a wiki. Wikis are much more general tools.
There are many "document collections" that are nearly impossible to crawl. Ever have to use PACER, or any of the many other databases of scanned PDFs? Many web pages require you to reverse-engineer their HTML to gain any meaningful sense of structure, whereas Smallest Federated Wiki has APIs and is designed in every way to be copied.
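For anyone who hasn't looked at it: a Smallest Federated Wiki page is itself structured JSON, conventionally served at /<slug>.json alongside the HTML view. A quick sketch, assuming that convention (the host and slug here are just the stock demo defaults; check your own server):

    # Read a Smallest Federated Wiki page as structured JSON.
    # Assumes the /<slug>.json convention; host and slug are examples.
    import json
    from urllib.request import urlopen

    page = json.load(urlopen("http://fed.wiki.org/welcome-visitors.json"))
    print(page["title"])
    for item in page.get("story", []):
        # Story items are typed data (paragraph, image, ...), not opaque HTML,
        # so forking a page means copying this JSON rather than scraping markup.
        print(item.get("type"), "-", item.get("text", "")[:60])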