A complete rewrite of wiki as a single page application (c2.com)
161 points by chrisdotcode on Feb 2, 2015 | 119 comments



I believe the wiki is among the most amazing things to come out of the web (and think Cunningham is a genius for inventing it, or discovering it, or whatever process made it happen), and I can't count the number of times I've lost an hour or two spelunking into the C2 wiki.

But, I'm having a hard time making sense of this UI. The panels popping up seemingly infinitely into the right side of the screen (every click makes a new panel) is unfamiliar and doesn't seem to provide compelling value over giving the content a prime piece of center-page real estate. It feels like it is too proud of its cleverness, and that detracts from its value.

And, some other nitpicks: The "fork this page" feature uses a flag icon...which means "flag this" to anyone who has ever seen a flag. The flashing icons at the bottom of every page, representing every edit throughout history, are hella distracting, and again make the content play second fiddle to something else going on. There is no way to discover this thing in anything resembling an ordered fashion. Most of the links on the "front" page (I guess?) are dead ends, and recent changes turns up pages that have content but offers only a haphazard way of understanding what I'm looking at. The neat thing about a wiki has historically been that you start in one place and can follow links back and forth until you grasp your subject. In this case, the links seem to have taken a backseat to the other stuff (which is made up of flashing things, poorly chosen icons, and weird gradient squares that seem to exist for no reason), leaving me hanging here with no idea what this thing is for or how it works.

It pains me to say it, as c2 is wonderful (a little dated and simplistic, sure, but the world's first of anything doesn't need to be beautiful). Maybe it'll come to make more sense with time and further contribution from people who know what it is and why it is.


I'm both saddened and pleased by the move. I'm attached to the deceptively poor old wiki, because it contrasts with the great content quality. At the same time, they're migrating toward something off the map of current technologies, and so the spirit of the first WikiWikiWeb is still alive.

ps: if someone has an archive / mirror of c2.com, I'll be happy to leech.


"deceiptively poor old wiki, because it contrasts with the great content quality"

Does that remind anyone else of HN? I sort of rather like the lack of features on HN in the same way that I like c2.com.


Yeah, I feel that too (though HN is a bit too bare-bones for me; I think of Reddit as the golden UI standard for this type of site). I do believe that the more superfluous JS you have on a page, the less likely it is that your content is any good.



Oh, thanks a lot, I didn't know archive.org had a system in place to download archived mirrors. 40 well-used megabytes.


It's not by me and not from the wayback machine (might be integrated though), just hosted there. A crawl was requested in the archiveteam IRC channel a while ago and someone did it.


It's all still there: http://c2.com/cgi/wiki

I assume they'll move it over to the Federated wiki at some point, but you can still view everything on the old wiki.


These are all UI complaints and all fixable by anyone with a better idea and JS chops whether Ward likes the particular changes or not.


Yes, I believe the UI is the biggest failing, though I am also not seeing a clarity of vision, which wiki very clearly had (making it absurdly easy to create hypertext documentation for anything). Federated wiki seems to be about "federation", or individual wikis that can link amongst each other as readily as one can link between documents in a single wiki. That's a great idea. Federation may be the only thing that can save us from the walled garden age of Facebook, Google+, Github, etc. (or maybe it can't...the UI is gonna have to be better than this if it is).

So, I agree that its biggest problem is UI (and that's what I listed above as being wholly wrong about this thing), but I think it's dangerous to underestimate how much design determines the impact of an idea. I've seen lots of really cool ideas come and go, with bad design as the reason they never took off. And it's probably a bad idea to underestimate how hard it can be to fix a bad design if the design is intimately tied to the implementation. I don't know if that's the case here, though there are certainly a lot of bad design ideas that seem to be important to the developers; at least, they must be important to the developers, because they made them flash incessantly.

I've poked around some more since posting my initial comment, and I really can't get over how very distracting those flashing icons are, and can't imagine how anyone on the team is OK with it. How can designers of a content system (which a wiki is, at its core) do so much to make the content seem like the least important thing on the page? The user is literally directed to pay attention to everything but the content.

Again, I say all of this with an incredible amount of respect for Ward and the wiki. But, this thing is a mess.


You make good points. I guess I'm more hopeful that the UI is fungible if the core ideas shine through.

I'll be most interested to see if they figure out a way to make federation easier to understand and use. At first I thought I wanted a globally editable central wiki with some company stuff and links out to individual wikis for project notes, developer knowledge base, and journals. It took a while for it to sink in that only the owner could edit the first one and really everyone needs their own to do anything. It's really hard to shift the mindset from shared state to single owner, where you fork to like and fork to change.


Wow, that's some bad usability there. Talk about chasing the shiny over form and function. Javascript pages are the 'Flash site' of the 2010s and I can't wait until it's similarly consigned to its rightful place in the dustbin of history. All they do is waste my CPU time and memory, force me to enable a scripting language that is riddled with vulnerabilities just to see some content, and run more slowly than real pages would.


Let's look at how this new wiki behaves without JS, from the viewpoint of a search engine.

E.g. compare http://c2.fed.wiki.org/methodology-subsets.html with and without JavaScript. The new wiki is doomed by design, because its content cannot be found by Google or any other search engine. And a wiki whose content cannot be found is for the trashcan.


It has also joined the ranks of sites that don't work at all without cookies. Go ahead, try it.


This can be solved using any of these methods: https://developers.google.com/webmasters/ajax-crawling/docs/...

I maintain a client-side rendered website, and while SEO can be problematic, it can be done.

I don't see C2 using any of these methods at first glance tho.
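For reference, the core trick in that scheme is an "_escaped_fragment_" handshake: the crawler rewrites #! URLs (or pages carrying <meta name="fragment" content="!">) into a request with an _escaped_fragment_ query parameter, and the server answers that request with a pre-rendered HTML snapshot instead of the empty app shell. Here's a rough server-side sketch in Node/Express; the renderSnapshot stub is just a stand-in for a real headless-browser render, and none of this is how C2/fedwiki actually does it:

    // Hypothetical sketch only; not the fedwiki implementation.
    var express = require('express');
    var app = express();

    // Stand-in for a real snapshot renderer (e.g. driving a headless browser).
    function renderSnapshot(path, fragment, callback) {
      callback(null, '<html><body><h1>Snapshot of ' + path + '</h1></body></html>');
    }

    app.use(function (req, res, next) {
      var fragment = req.query._escaped_fragment_;
      if (fragment === undefined) return next(); // normal visitors get the JS app
      renderSnapshot(req.path, fragment, function (err, html) {
        if (err) return next(err);
        res.send(html); // crawlers get static HTML
      });
    });

    // Everyone else gets the usual client-rendered shell.
    app.get('*', function (req, res) {
      res.send('<div id="app"></div><script src="/client.js"></script>');
    });

    app.listen(3000);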


It can more easily be solved by just not crawling sites that have mandatory javascript, because chances are their content is going to be crap and not worth the time if they are that hostile to basic usability.


This is a specific issue to be solved with standard REST techniques. Do you know if the team is working on it? Maybe it's on a timeline, scheduled to be implemented next month? I haven't looked into it, but the current lack of this functionality does not mean that the "new wiki is doomed by design."


Google executes JavaScript.


That cannot possibly be true in general. JS is Turing complete; Turing completeness means that the halting problem is undecidable, which means that at best Google will execute JS with some arbitrary resource limit to avoid infinite resource consumption, which in turn means that whether Google will actually see the content is ultimately undefined. And even worse: other search engines might, due to the lack of standardization of available execution resources, see a different picture. That's just a braindead model for information storage and exchange.


You're trying to be too intelligent for your own good.

Fact of the matter is, the new wiki UI is horrible. It doesn't need to be spelled out any more elaborately than that.


Everything you said applies to humans using normal web browsers as well. And yet people still fill pages with JS, and browsers still execute it, and people still see stuff.


With one minor difference: Humans at least have full-blown human intelligence which they can use to heuristically determine when the execution is "complete". Not that that makes all that much more sense ...


You seem to be saying that page load is not deterministic but that is not true. A special version of a browser can easily keep track of http requests and other async operations and consider the page to be complete when async ops = 0 and the last paint is complete. The only hard part I can think of is a page that is recursively calling setTimeout endlessly, but even that can be coded around.


No, it cannot, that is called the halting problem.


Right, so like you said they have a reasonable cut-off for this one most likely ultra-rare situation. It's not going to be perfect but it doesn't have to be, in order to be useful.


It doesn't matter whether the halting problem is solvable or not; there would be a cut-off anyway (even if they had a super-Turing machine).


Yeah, that does make it better, but it's still not perfect. Witness the complaints right here about how you can't tell whether you got an empty page or it just hasn't finished loading yet.

Trouble awaits when you try to wedge a general-purpose application environment into a page-based document viewer.


I doubt Google is worried about unfairly weighting pages whose JavaScript takes that long to run, hence the cutoff. It's more a question of where to place the cutoff than whether to have one at all; having none would be absurd.


No, the question is neither of those. The question is how to define the cutoff in a portable way that enables interoperability and long-term stability. A document format where every implementation has its own secret cutoff that also probably changes all the time is just idiotic if your goal is interoperability.


The exact same argument could apply to pages that take forever to load, because the web server sends you a chunk of bytes every two seconds. Would it be braindead for Google to apply an arbitrary timeout without standardization?


Nope, but it would be braindead for google to then just index however many bytes it has received up to that point as if it were the full document.


OK, so we have a pretty straightforward rule. Providing no further input to the program once it begins,

1) If execution terminates by some timeout T, where T is at least several seconds, then index that.

2) If execution has not yet terminated by T, whether or not we have any idea whether it will terminate in the future, don't index.

Tune T so that it will get the vast majority of reasonable web pages. (Hypothesize, and test, T = 5 seconds or 30 seconds or something. Have a human look at the highest-PageRanked timeouts to figure out what's going on.)

This applies whether "the program" is server-side or client-side. The halting problem cannot be reduced to the timeout problem (since the halting problem asks if it ever terminates), and the timeout problem is pretty clearly computable.
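In code, that rule is nothing more than a race between the renderer and a timer. A self-contained sketch, where renderAndExtract is a dummy stand-in for whatever headless renderer an indexer would actually use:

    var T = 30 * 1000; // tune against a corpus of real pages

    // Dummy renderer: pretends the page's scripts settle after two seconds.
    function renderAndExtract(url, callback) {
      setTimeout(function () { callback(null, 'extracted text of ' + url); }, 2000);
    }

    function indexWithTimeout(url, callback) {
      var done = false;
      var timer = setTimeout(function () {
        if (done) return;
        done = true;
        callback(null, { indexed: false, reason: 'timeout' }); // rule (2)
      }, T);
      renderAndExtract(url, function (err, text) {
        if (done) return; // we already gave up on this page
        done = true;
        clearTimeout(timer);
        if (err) return callback(err);
        callback(null, { indexed: true, text: text }); // rule (1)
      });
    }

    indexWithTimeout('http://example.com/some-page', console.log);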


I don't know what that was all about, but Google does, here's a blog post about it:

http://googlewebmastercentral.blogspot.com/2014/05/understan...

And webmaster tools has a tool to verify that content is scraped correctly.

Not saying there aren't other good reasons to do server-side rendering, but SEO is not a real one.


just to see some content

Moreover, static content. There's a big difference between using JS to do something interactive and useful which couldn't be done before (e.g. various games, calculators, and visualisations get this part right), and using it to incompletely emulate basic functionality that web browsers already have.

the 'Flash site' of the 2010s

I like that analogy - stuffing the whole site into Flash was a horrible idea, yet games and other useful interactivity (minus the annoying ads...) weren't so bad. Now the trend seems to be putting the whole site into JS, and while it could be argued that JS is more open than Flash, the end-result is just as unnecessarily wasteful of resources and inaccessible in the ways that the web was originally envisioned.


It's not static. It's an editable wiki.


It's not (necessarily) dynamic, in the sense of DHTML, either. While I'm reading a page, I don't need to watch the page change. At best you could notify me that the page has changed, but I don't think I'll really get anything out of that.

There probably are environments where live collaborative (Etherpad/Google Docs/etc. style) editing of communal content is useful, but I suspect that you're looking at "documentation hackathon" or something. For a project that's had useful content for two decades, the changes made in the last two minutes are not why I'm there.

An analogy could be made to Hacker News (or Reddit, or...) comments. While you could pretty easily build a system that does live chat, given that live chat is pretty much everyone's first project in a real-time framework, I think it's important to the nature of the site that changing comments don't show up immediately, because it gets you longer-form comments and things that are more interesting to read for someone who comes later. The process of reading IRC scrollback, when you weren't involved in the conversation, is not particularly enjoyable.


The process of reading IRC scrollback, when you weren't involved in the conversation, is not particularly enjoyable.

It is interesting how true that is. If you're there in real-time, the timing of the various messages and interactivity provides a lot of context to what is being said. You're also not reading everything all at once in a blast of data, instead you have time to read and consider each one appropriately.


But you still have to render to HTML at some point. Wikipedia spends a huge amount of server resources rendering the last version of every page to HTML and caching it. It throws away older versions and has to render them again when you request them. The NYT ran into performance issues managing millions of articles and switched to client-side rendering even for old static content.


As cheap as S3 is, I'd suspect that it'd be cost-effective to just render the core HTML (i.e., that stuff minus the date, time, user-specific bits) of a page once, throw it there and forget about it. I can't get a good read on it from http://dumps.wikimedia.org/enwiki/20150112/ ('cause I'm too lazy), but it looks like it probably would be pretty cheap, given what those files probably expand to.


Why is the usability bad? Pages are pleasant to read in that aspect ratio and it seems to provide a good solution to what to do with the bits around the edges. I like the stack of places that I've been before as context and think that it encourages exploration and clicking around more than the original site.


A website's job is serving up pieces of information. Helping me navigate that information is the browser's job. That's why we have amazing innovations like the back button, tabbed browsing, and bookmarks. Imagine a world where every website had its own buggy implementation of those.

Also, simplicity of core architecture leads to diversity of tools. If Google's first crawler had to execute JavaScript and make sense of single page applications, Google wouldn't exist today. So be careful when you encourage websites to become more complicated.

Tim Bray said it well in 2003: "Browsers are more usable because they're less flexible." https://www.tbray.org/ongoing/When/200x/2003/07/12/WebsThePl...


> That's why we have amazing innovations like the back button, tabbed browsing, and bookmarks. Imagine a world where every website had its own buggy implementation of those.

Heh. We had a requirement to put in a back button. We put it in. Since no decent criteria were available to specify how it behaved, we just called history.back().

They did some user testing. Then they asked us to take it out again.

Never have I enjoyed taking on a ticket like I enjoyed taking out that stupid bloody back button.


I'd actually prefer the panes to be a bit wider, but that's not that important.

What I find weird is that I can go 2 layers back on the same page, but then I suddenly have to use the browser's back button to go further up. I have to use the back button to close the right-most pane. And sometimes the back button does not work, or I have to use it multiple times to get "one step" back. Loading times feel worse, because you stare at a blank rectangle and wonder "is this article empty" and then it suddenly pops in.

I accidentally dragged a paragraph around and had to go back to the front page and follow a link there to remove the local copy I created. I also don't know how I would publish that copy if it were an intentional edit.

I'm sure it makes sense once you've understood it, but discoverability seems bad. Despite all its flaws TiddlyWiki seems to get the basic SPA wiki UI "more right" to me.


Delayed display of text is terribly bad. Consider the following scenario: 1) I click a link to "Decorator Pattern". 2) A new panel with the title and no text appears. 3) I assume a major malfunction. 4) The text of the page deigns to appear.

Then I tried clicking on "Category: Pattern" and page text took about a minute, not a few seconds.

Regarding the "stack" of previously read pages, it offers a typical stack behaviour: it can be blown away easily and efficiently. You can forget large pieces of history by the apparently harmless interaction of clicking on links in a previous pane.


That sounds like server load, more than anything on the client side. When I used it everything was instant and snappy, so it might just depend on the number of visitors.

Blowing away the stack by clicking is a bit of a pain. A couple of times something buggy happened where new pages overwrote some part of the old stack, but left the rest of it in place. I would like to see some option for expanding multiple branches at one level, but at that point it is probably a UI specifically for me rather than something that is generally usable.

One thing that was cute was that the stack of pages is exposed in the URL, which suggests that some nice greasemonkey tricks would be viable.


"That sounds like server load, more than anything on the client side."

You are missing the point: with a normal page, server load looks like server load, whereas with "dynamic" tricks server load looks like a defective page and what you see cannot be considered the true content of the page.


There's no reason the page can't display some sort of progress indicator while it waits for data. If it doesn't actually do this here then that's silly, but that's a problem with the implementation, not the concept. I'm seeing more and more apps that perform potentially long-running tasks (usually involving network access) without any indication that anything is happening. It's frustrating.


You're commenting on a topic that the majority of people here have probably debated and thought about at great length. I can't help but think that calls for a slightly more nuanced argument than the one above.


no argument is being made, stop being po-faced.

I agree with his sentiments and wish the internet were otherwise. I don't care how thoroughly browbeaten everyone is about this; I want it recorded that this is dumb and unnecessary.


> I want it recorded that this is dumb and unnecessary.

But it's not that simple. Some uses of javascript genuinely add value - even to content sites. The real discussion is about cost vs benefit not "All interactivity is bad".

Shades of grey, dear sir. Shades of grey...


But we do need some (I'd say a lot of) back-pressure against the laziness and fashion that create that unnecessary JavaScript.

Being JS-heavy is something I found to be correlated with poor/untrustworthy content and someone trying to make money off you. I'm of course not talking about web applications here (like GMail or Google Docs) but web pages, which after all "should be text communicating a fucking message"[0].

[0] - http://motherfuckingwebsite.com/


I don't disagree at all. I'd probably agree with the statement that most sites would be better without most of their 'enhancements'.

I just don't want the HN party line to become a knee-jerk "all js on content sites is bad".

I'm a big fan of pjax etc. as it can really speed up page loads without any downside (assuming it's been implemented well). If the UX is good, performance is as fast as or faster than a plain HTML site, everything is bookmarkable, and you don't break my back button, then go crazy.

And on the whole I actually like the new c2 wiki.


I've yet to see any. It used to be necessary for drop down menus, but now CSS does that. When I browse with javascript off, it's only crap sites that break, and often the same ones loaded down with tracking cookies, 3rd party CDNs, facebook/google plus/twitter buttons, popups and whatever other crap. Interesting, that...


The original wiki was a wonderful community, alive in that ineffable way that very few online communities are, totally free and public. I wrote a blog post [0] about how strongly it influenced me as a person and the gratitude I feel for it.

Ward Cunningham is a respected and intelligent software designer who works on open projects for the public good. The immediate cynicism and dismissal I've seen of his new project so far makes me sad.

Why not look into what he's trying to do? [1]

Why not try to learn something about how it works, rather than look at it for twenty seconds and then complain?

[0]: http://swa.sh/the-original-wiki/

[1]: https://www.youtube.com/watch?v=BdwLczSgvcc


Because first impressions are important.

If you see a car with a small knob for a steering wheel, you'll find that it is more difficult to use. And even though the engine of the car might be a radical improvement, this still doesn't mean that it can be used, mainly due to the pesky knob instead of a steering wheel. Obviously, after a while of using a knob to steer, you can get used to it, but that initial lack of usability means that many people decide that it's not worth the trouble.

Same with the website. You go to a website to use it. Sure, the idea behind the website may be wonderful and innovative, but if access to it is unusable, then the innovation behind it is all for naught. When implemented well, the innovation shows both in the explanation of it and in the interface itself.

Everybody is dismissing it after 20 seconds because it doesn't work. The idea may be good, but the implementation is what is being shown to public. If it were the idea that were being shared, then it would be a link to, for example, a blog post or source code.

Finally, the reason that we're not trying to "learn something about how it works" is because it doesn't work, at least not yet. Which is why we are complaining.


I have some fundamental disagreements with "first impressions are important" when it comes to public software.

First impressions are unreliable when it comes to understanding the value and potential of something that's in development.

The whole wiki spirit is to release early, adapt to feedback, and encourage collaboration.

When you say "it doesn't work" and "the innovation behind it is all for naught," I feel somewhat dejected.


That the rebuilt C2 wiki doesn't work is a fact; the problems are, unfortunately, so obvious and so serious that the first impression is uncommonly bad, and there is no need to look further.

The "value" of the old site is lost, the "potential" for improvement doesn't matter and doesn't exist, the "innovation" is a failed experiment that shouldn't have been "released early".

I just don't understand this leniency towards a bad implementation of a bad idea.


Is it the whole concept of a federated wiki that you think is a "bad idea"?

When you say that the "'potential' for improvement doesn't matter," what exactly do you mean?

Doesn't matter to whom? For what reason?

I am lenient and curious about most attempts at innovation in the field of web-based communication and collaboration.

Personally speaking, I like Ward Cunningham; I admire his previous work; I am interested in federation; and I generally encourage open source development of interesting communication tools.

When you say "the 'innovation' is a failed experiment," I read that as a claim that hopefully—and with effort—will turn out to be mistaken.

If you think there is nothing to learn from it, that's up to you.


A federated wiki consists of three main parts: a federated database of content (which is supposed to be the interesting part), a user interface for reading that content, and a fairly different but unavoidably related user interface for editing and administration (both likely to resemble their counterparts in a non-federated wiki, but a bit more complex because of the richer information model).

Of these three components, all criticism of the C2 rewrite is focused only on the most accessible: the reading user interface, which is blighted by the fundamental bad idea of imposing a bizarre, dysfunctional SPA gateway on one of the most pure examples of hypertext in existence. Only this user interface is an impractical, grossly failed experiment; it's obvious that pages like those in the old C2 wiki, possibly with a few extra buttons and links to deal with federation-related metadata and features, would have been a far superior user interface.

Nobody complains about the idea of a federated wiki (either in general or with reference to this particular design) because, with the ugly bugs and bad user experience, it's simply irrelevant; even the editing user interface is practically hidden behind a wall of inconvenience and mostly ignored in comments.

Personally, I think federated wikis are a promising organization for the public web, but they won't be like this.

Actual software and sites, particularly when they replace a very good predecessor like in this case, should be judged by their actual quality, not by enthusiasm levels or fantasies about the future. As a production wiki, the C2 replacement has been published by mistake and it should be reverted ASAP and killed with fire, but as a research testbed it deserves rework and further experimentation: with a good user interface, which remains to be determined, people would be able to exercise the underlying federated wiki database, which I suspect to be good.


That seems like a more reasonable position.

However:

(1) The federated wiki has not replaced C2. Ward has put up a notice saying that he plans to do so at some unspecified time in the future. Right?

(2) You claimed earlier that "the 'potential' for improvement doesn't matter and doesn't exist," which I point out is excessively negative and dejecting.

(3) Wiki has always been a research testbed and a place for experimentation. That's why the whole thing is done in public in such a way that any interested person may participate and give input. To help such projects, if one has any reason to believe that they may be valuable—as you now seem to agree—it is more productive to give constructive feedback through appropriate channels.


I still stand by what I said: "first impressions are important". Perhaps they are less emphasized in public software; however, they still count for quite a lot (as can be seen by reading the comments). Technically, your first impression was of the idea that it was trying to convey rather than of the application itself that was linked to. To others, it was the application and not the idea.

I guess when I did say "it doesn't work" that was a bit judgemental, sorry. I meant that it was incomplete, or at least unintuitive and/or cluttered to use. In the version that was given to everybody to try out, that is.

First impressions may be unreliable, but it's how the world works, most of the time, and unfortunately changing human nature is quite difficult (although I haven't really tried :P).


I am not really disagreeing with you, but I would like to point something out: if this UI becomes widely used, then the cost of getting used to it is not so important. BTW, this is why I like Bootstrap: my web sites may look similar to millions of other sites, but no one is likely to have problems using them.


True. Let's wait and find out if this catches on. :)


Take a look at a movie from the 1990s, where a character pulls up say a news article on the internet. Just text and some pictures. So usable! The web peaked in 1998, when everything was plain HTML but you could Google-search it, and everything since then has been the moral progeny of the <blink> tag. So when a perfectly usable site like C2 embraces the bankruptcy of the Web 2.0 era, I think people are justified in expressing their frustration.


The web peaked in 1998?

That's an interesting claim to make on a forum that frequently discusses new web technology, the progress in browser engines, the interesting new stuff that's being made.

In 1998 there was no Gmail, no Github, no Slack, no Discourse, and so on and so on.

C2 may have been perfectly usable, and I think it should remain as an archive, but there was barely any contribution anymore.

When you imply that Ward's "Smallest Federated Wiki" embraces the "bankruptcy of the Web 2.0 era," it sounds to me like your pet peeve against JavaScript makes you discount the whole project.

I mean, you're very free to have your own views and opinions, but as someone who's very interested in this project, I'm pretty baffled by all the negativity, and the lack of a more than superficial interest.

The idea of a federated wiki is, it seems to me, very cool, and the fact that it hasn't been done yet indicates that the web has not peaked. So where you see "the bankruptcy of the Web 2.0 era," I see a fascinating development of the wiki idea by its original creator.

Yeah, the JS app seems a little unpolished, but also pretty cool in many ways. I don't know whether it's the best move to replace the old C2 wiki right now. Maybe Ward sees it as a way to get more people interested in the new project. I don't know.

As for me, I'm going to see if there's any way I can contribute and help the project. It would be nice to see some more of that attitude here, instead of only (in my view) superficial complaints.


> That's an interesting claim to make on a forum that frequently discusses new web technology

I don't like to shy away from controversy. ;)

> In 1998 there was no Gmail, no Github, no Slack, no Discourse, and so on and so on.

No, but there was SMTP/IMAP/Exchange, which was better and didn't spy on you. There was ViewCVS, which was just peachy, and in any case the improvements in Github over CVS have to do with git versus CVS (i.e. the native apps and protocols), not the web frontend per se. There was also IRC and AIM, which were just peachy too.

It may well be that the 1998 peak was a local maximum, but I stand by the assertion that the web was better back then. The modern web is spectacularly bad for finding and consuming information. Want a simple how-to or product review? You'll be blasted in the face with images, video, and animation until your quad-core 2GHz/16GB RAM machine falls to its knees.

And to what end? Text is still, after thousands of years, the most efficient way to convey information. Anything you layer on top of it detracts, rather than adds.


SMTP/IMAP isn't better than webmail. IMAP is an awful protocol. It's less secure than HTTPS. It doesn't do a better job of avoiding "spying" than webmail does. You can sync multiple clients with IMAP, but webmail accomplishes the same task so seamlessly clients don't even need to be aware of it.

IMAP is a pretty good example of the kind of protocol that should be eaten by HTTP/RPC APIs.

(I've written a couple IMAP implementations, both client and server, and was [in the 1990s] responsible for the mail infrastructure for a popular ISP.)


I think we agree on a lot of things.

I also think Ward's project is not fundamentally bad just because it uses JavaScript.

I think the idea of federated text sharing through free software and open protocols is lovely and that anyone making a serious attempt at this should be encouraged.

And I think that interactive web software can actually help, especially when it comes to inviting people who wouldn't enjoy setting up a Usenet client.


The web is better now because there are more people using it. All the things that you dislike were requirements for onboarding all those people.


Fair enough, but I think increased usage is more the result of a confluence of more widely-available broadband (including mobile), as well as new applications that would've been possible without rich media. Garnish aside, Facebook and Twitter are slinging a few textual quips and images around.

Also, I'd quibble about the word "better." Does the vast increase in non-technical users make the world better? Sure. Does it make the web better?


We didn't need half those things, because we had email, Usenet, and IRC. Those were actually pretty good. I certainly had far better discussions on Usenet back then than I've ever had on the web since.


Right on.

I recently deployed a fedwiki farm for my team after playing with it myself and catching just a whiff of the possibilities. I had a litany of complaints myself, but was compelled to press on due to the intriguing promise of the goals of the project.

While some issues stem from the fundamental design of tracking individual paragraphs, I believe many of the problems will be designed away with better client side programming or alternative UIs in time. With this rollout of fedwiki on the original wiki, I now think all the issues will be defended or fixed sooner than I thought.


Good to hear that someone else is interested!

Could you explain a bit more about the issues with tracking paragraphs? It sounds like an interesting idea.

From perusing the GitHub documents briefly, it seems like the architecture is such that other UIs should be possible. Do you have a hunch about the possibility of making, say, an Emacs client?


Because of drag/drop refactoring and the detail in the page journal, they have to track the history of every paragraph, which requires UI for interacting with individual paragraphs. You can't have just a big old textbox for the whole page and do things as we always have (maybe you can get close with the right diff algorithm).

So to edit a paragraph in the current UI you have to double click to get an edit box, then click to insert the cursor in the right spot. Adding a new paragraph requires a few clicks. Adding a new page and getting to that first edit box requires too many clicks.

Drag/drop as the primary mouse interaction also makes it hard to copy/paste text in and out of the wiki.

It's painful if you're used to orgmode.

But like I said, these are just UI gripes. You can see how they fall out of the fundamental design of the system, but I think focused client design could optimize editing and make it more familiar if that was a goal.
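For context, this is roughly what a page looks like under the hood: every paragraph is a separate "story" item with its own id, and the journal records actions against those ids. I'm paraphrasing from memory, so the exact field names and action types may be slightly off:

    {
      "title": "Example Page",
      "story": [
        { "type": "paragraph", "id": "a3f2b8c1d4e5f607", "text": "First paragraph." },
        { "type": "paragraph", "id": "b7c9d0e1f2a3b4c5", "text": "Second paragraph." }
      ],
      "journal": [
        { "type": "create", "item": { "title": "Example Page" }, "date": 1422835200000 },
        { "type": "add",  "id": "a3f2b8c1d4e5f607", "date": 1422835260000 },
        { "type": "edit", "id": "a3f2b8c1d4e5f607", "date": 1422835320000 }
      ]
    }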


I started playing with it last night. It looks like there was a significant fork a few months back that separates the client from the server. Conceivably, you could take the client and figure out how to build a different UI.


> Why not look into what he's trying to do?

I did. It's literally unusable. I.e., one literally cannot use it in lynx, w3m, Firefox with JavaScript disabled.

As far as I can tell he took 20 years of work and threw it into a garbage disposal.


It's unusable for you and at this point in time.

If you're concerned about this, have you considered opening an issue on the public repository?

Maybe you have some knowledge to share about how to make the site work well with JavaScript disabled?

Maybe you'll find that the team has thought about this, and are planning it?

A cursory look reveals an issue [0] closely related to this, mentioning that there are .html URLs that are supposed to work without JavaScript; that they have at some point verified the functionality with a screen reader; that Google should be able to index the contents; etc.

Maybe that would be a good place to bring up your concerns?

[0]: https://github.com/fedwiki/wiki-client/issues/39


The wiki I know for telling me that all the new cool stuff is junk switches to an SPA? That's ironic.

But seriously, in my eyes a wiki is exactly what an SPA should not be used for, because of its document-based nature. Trying to fix an overload problem with an SPA tackles the problem from the wrong side, imho.


And it displays that awful behaviour of pages that get new content after they are loaded...


What's wrong with a single-page app? Wiki has always been a living document; it's best that the implementation formalize and facilitate that nature. If you really miss the classic format or need time to adapt, nothing stops you from using a Federated Wiki client that generates a page from the JSON.
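Such a client could be tiny. Here's a sketch in Node that fetches a page's JSON and spits out plain HTML; the slug.json URL pattern and the title/story field names are my reading of the format, so double-check against the actual fedwiki docs (and there's no HTML escaping here, it's just an illustration):

    var http = require('http');

    function renderStatic(site, slug, callback) {
      http.get({ host: site, path: '/' + slug + '.json' }, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
          var page = JSON.parse(body);
          var html = '<h1>' + page.title + '</h1>\n' +
            page.story.map(function (item) {
              return '<p>' + (item.text || '') + '</p>';
            }).join('\n');
          callback(null, html);
        });
      }).on('error', callback);
    }

    renderStatic('c2.fed.wiki.org', 'welcome-visitors', function (err, html) {
      if (err) throw err;
      console.log(html);
    });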


The canonical form of a wiki page should be a page; it really is a document and so should be in a document format (in contrast, for something like a bank statement the data is primary (in some sort of structured format with e.g. numeric fields) and the page is merely a representation of it).


You speak prescriptively of a canonical form, but I do not know what canon you refer to. What do you define as a "page"? Are you perhaps arbitrarily drawing a line at an HTTP GET request between the article and the edit button? If that's the case, Google Docs and Etherpad would fail to meet your definition of a page/document. Right now we see a declining rate of collaboration on Wikipedia, so it's natural that the evolution of the wiki would do more to encourage editing/forking: http://mashable.com/2013/01/08/wikipedia-losing-editors/


> Google Docs and Etherpad would fail to meet your definition of a page/document.

Indeed. They don't feel like pages or documents (indeed, Google Docs is explicitly an editor and has an export step to produce a webpage), and I would not want a wiki (or wiki-like project) to be maintained in them. They serve a different niche.

> Right now we see a declining rate of collaboration on Wikipedia

For well-known reasons of deletionism and unnecessary barriers to new editors (see http://www.gwern.net/In%20Defense%20Of%20Inclusionism ). The way to fix Wikipedia isn't to keep changing things, it's to go back to the policies that worked in Wikipedia's golden age.


Speaking for myself, it has nothing to do with philosophical principles like "keeping the web as a web of hyperlinked documents" or using HTML and HTTP as intended. Rather, it's about using the right tool for the problem. Google Docs and Etherpad are word processors first and document collections second. A wiki is a document collection with the ability to edit. For a document collection, it's important that you can simply download, copy, and crawl the documents. Of course this is possible with an SPA, but now you're solving problems for your tool instead of the other way around.


http://c2.com/cgi/wiki?WikiIsNotWikipedia

Wikipedia is a document collection that happens to be implemented and edited using a wiki. Wikis are much more general tools.

There are many "document collections" that are nearly impossible to crawl. Ever have to use PACER or any of the many other databases of scanned PDFs? Many web pages require you to reverse engineer HTML to gain a meaningful sense of structure, whereas Smallest Federated Wiki has APIs and is made in every way to be copied.


It can be both a SPA and a document. They are not exclusive.


This is based on the Smallest Federated Wiki Ward Cunningham has been working on since 2011. I think it's great that they're moving to new architecture, and the SFW looks great. The interface is still pretty arcane, though.


That sounds interesting. Do you have any more details on this architecture? c2.com seems to be overloaded at the moment.


This is the (admittedly somewhat long) transcription of a talk I use to introduce people to the idea: http://hapgood.us/2014/11/06/federated-education-new-directi...

The elevator pitch for tech-minded people is that wikis are to SVN as federated wikis are to git. Instead of a shared wiki that people modify, each user has their own wiki consisting of pages they've made or forked, so information propagates back and forth between users. It's a really compelling idea and I hope they can make it work.


So, the bar for contributing to the wiki has been raised from "has access to a computer" to "has access, knowledge, and resources to run a wiki server"? I hope one or more WaaS (wiki as a service) providers will emerge where a user account == a fed wiki node.


That's already how it works. You could run your own node and share information back and forth with other nodes, or you could use an account on an existing node. This is clearly the case: if you go to the C2 Federated Wiki, you'll note the Login button.


Ah, thank you. Still wrapping my head around the new architecture.



I feel sad about this :(

The original wiki was full of the hope of the web.

Here, the UI is unintuitive. The site depends heavily on client-side processing. It loads slowly - there's no intelligent processing in advance. It uses canonical URIs OK, but the window.history updates late and the URIs are designed in a way that makes them difficult to share. The code is hosted on an ethically-dubious commercial site. The wiki has lost all the old data.

I'm doubtful of the author's claim that this will last 20 years except at his own behest - I don't see how this is an improvement?


It seems to me like the people contributing to this project still have lots of hope for the web.

The original wiki had plenty of UI quirks, too. How much time have you invested into figuring out how it works? Is the problem that the main page doesn't explain things clearly enough? Maybe you'd like to contribute some documentation?

With regards to loading times, that's an engineering issue to be improved. Efficient loading of documents is very possible with asynchronous requests; right now it looks a little glitchy, but this is not a fundamental issue.

Where is your hope?


> Is the problem that the main page doesn't explain things clearly enough? Maybe you'd like to contribute some documentation?

No. There are several problems, the first of which is that I am served content as a Javascript-dependent blank page for no good reason.

I'm not sure why you think I would contribute to this project. It contradicts my ideals for the future of the web on many levels, as I already explained, and some of which you overlooked when making your comment.

Firstly, I do not want to support something that represents a backwards movement from my preference, and secondly, any change I would choose to make would likely be reversed as against the spirit of the project.

My belief is that the Benevolent Dictator for Life/cult of personality model of existing open technologies is broken and should be shunned. (With apologies to Ward Cunningham, who I do not mean anything personally towards). People committing their resources to this project are not providing for the diversity that 7 billion web users need.

I have plenty of hope. My hope is that other people will take this technology in a better or just different direction.


I think the Federated Wiki concept is very interesting, and I agree that it would be very nice to serve functioning static pages—this could be added and probably should.

That's why I'm wondering if you have looked into the project's ideals and roadmap. Maybe they have different priorities than you would prefer. Maybe, as I said, they are already working on providing for your needs.

This is not a product someone is selling to you; it's a public work-in-progress with a pretty ambitious idea.

If you think that JavaScript applications in general are opposed to your ideal future of the web, then I understand your disappointment.


Here is my philosophy of Javascript, since it might be relevant here:

Javascript is great when it enhances webpages. When users get a faster and better web experience than they possibly could without it.

Javascript is awful when the fallback that remains, when it inevitably breaks, is worse than what had already existed in the past.


Best wishes on the switch, but I'm afraid that the leap being taken is too big. Suddenly we go from bare HTML served by CGI to a single-page, style-rich and interaction-rich JavaScript application.

That said, I actually like the new website, except for the bottom bar. When I scroll quickly on an iPad it doesn't always stay at the bottom, and it is especially ugly when only one page is being viewed. I'm sure there will be people with way stronger opinions on the radical UI change.


Is the switch complete yet though? The new wiki seems to have almost no content...


http://c2.fed.wiki.org/migrating-wiki.html

You might have forgotten how to use a wiki: you create pages that haven't been written yet.


Actually it looks like the pages themselves just take a long time to load: for example, clicking on 'extreme programming' shows the title 'extreme programming' and a blank page. Initially I assumed there was no content, but if I wait 5-10s then the content is shown. This is very confusing: why show the title and an empty page while loading the body?


This…this is abominable. Without JavaScript, there's no search; there's almost no content. With this change, Ward's wiki has gone from an invaluable resource to Geocities.


Not preserving the URLs (or the content?) in the transition, really?


With regards to SEO, I think a lot of you are missing something special about SFW, and that is that each page has a JSON representation. The client only exists to render the human readable view, but the JSON is fully crawlable by a spider. In a lot of ways, this borrows from the days of XML pages where the content was rendered using XSLTs. In this case, the content as well as page revision history and links are preserved as a single JSON object. This actually may make SFW content more crawlable.


Holy SEO Fail Batman... Google only returning ~ 19 pages ( site:c2.fed.wiki.org * ) vs. 80K from c2.com


It looks like each page a user navigates to is appended to the URL. I hope the team develops the notion of "browsing modes" with the option (on by default I would think) to use canonical URLs for a given page. Otherwise, a user trying to visit a bookmark at a later date will (1) wait for all prior pages in their browsing stack to load before the page they bookmarked, and (2) may potentially exceed a maximum URL length depending on how that particular wiki is served.
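As a stopgap, deriving a canonical URL from the lineup looks mechanical enough to script. Assuming the /view/<slug> pair structure currently visible in the address bar (which is just my guess at the scheme), something like:

    // Illustrative only; assumes lineup URLs look like
    // http://c2.fed.wiki.org/view/welcome-visitors/view/some-page
    function canonicalUrl(lineupUrl) {
      var m = lineupUrl.match(/^(https?:\/\/[^\/]+)(\/.*)$/);
      if (!m) return lineupUrl;
      var origin = m[1];
      var segments = m[2].split('/').filter(Boolean);
      if (segments.length < 2) return lineupUrl;
      return origin + '/' + segments.slice(-2).join('/'); // keep only the right-most pane
    }

    // canonicalUrl("http://c2.fed.wiki.org/view/welcome-visitors/view/some-page")
    //   => "http://c2.fed.wiki.org/view/some-page"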


What's important about this isn't the server or client, but the protocol.

Like all peer to peer networks, all you need is a clear documenting of the protocol and anyone can create their own servers or clients. This way people concerned with SEO can expose the wiki's contents easier, and users can design a more attractive user interface, all while addressing the same content and sharing it the same way.

If this works as a protocol, it should allow anyone to collaborate on documents that synchronize independently of a central source. This is really useful for people who need to be able to share some information in a multitude of ways and on different networks, online or offline. The only downside is that it seems the addressable content has a limited scope. And of course, there's probably (I'm guessing?) not much in the way of access controls to prevent people from getting or modifying information they shouldn't have.


This is precisely my take. I wasn't familiar with SFW until last night (this post), but I've been playing around with this concept for a while myself.

The only real drawback that I'm seeing so far is some link fragility. There are two countering forces at work here. You can take a link to someone else's content and mirror it back to your own SFW. If the source content changes and you don't take the "pull," your content is no longer relevant. For popular sources this might mean lots of replicated copies of the source that aren't always current. There is also a problem with the always-available aspect. If someone is using content from another source, what do you do when the source goes down or just changes location? At least with a non-federated wiki, the entire set of content goes offline together. That model is less prone to rot since it prunes entire branches and not just bits and pieces.


I had to play with the new UI for a while before figuring it out. There's no way to delete a panel, but when you go back in time to the left and click a different link, everything that had been to the right of the clicked panel disappears. So you sometimes have to use your browser's "open in new tab" functionality to remember a context for later use - this goes against the one-page web app concept, but it makes it workable for me. BTW, interesting that the Node server code is written in CoffeeScript.


Smallest Federated Wiki is a great choice for the spirit of C2. But the interface is much more confusing than the old site, and lags behind similar wikis like TiddlyWiki.


The page claims that the new wiki is written with a distributed database but from what I can tell it just writes pages to the file system?

https://github.com/fedwiki/wiki-node-server/blob/master/lib/...

Am I missing something?


He doesn't mean that each server uses a distributed database to store its documents.

The wiki is federated, much like Usenet is through the network of NNTP servers. For more info, see Cunningham's documents "On Federating Wiki" [0] and "Federation Details" [1].

This is distributed in the sense that Git is a distributed source control system.

[0]: https://github.com/WardCunningham/Smallest-Federated-Wiki/wi...

[1]: https://github.com/WardCunningham/Smallest-Federated-Wiki/wi...


That said, there are also options to use databases for page storage on the back end: https://github.com/fedwiki/wiki-node.


Cool, thanks!


Are these links that open to the right a new trend? It's the second time this month I've seen one of those.


This is a sad day for the internet. Maybe HN is the only site left for us low-bandwidth people. Another site not loading...


"Log in with LiveJournal"


It took me seconds to load the new wiki (the welcome page). Slow by design?



