
Citation needed? Is there any reason to believe that if the browsers had insisted on well-formed documents and provided errors like "error at line X, table tag not closed", people would not have been able to fix up their documents? I don't believe that would have stopped things.

But that exact behaviour, trying to infer intent, meant that tons of unspecified behaviour had to be added to all browsers to try to mimic what each of the others did when handling totally invalid cases.

So, even if leniency did make it easier to create a web page, it also greatly complicated the already difficult task of achieving consistent cross-browser rendering.

Look at JavaScript, and the recent semi-colon debacle with Bootstrap and some other tool. Having "implementer-defined" leniency just means you'll get multiple interpretations and problems.




We've tried this experiment - it's called XHTML. For a long time, adoption of its strict error handling by authors was stymied by the lack of support in MSIE, so it's not a full counterfactual. However, we have learned two things:

(1) Now that MSIE does support true XML parsing of XHTML, almost no one is choosing to use it over HTML.

(2) Of the few experts who conditionally served either text/html or application/xhtml+xml depending on the UA, or who serve XML unconditionally now, almost all have bugs in their sites which can make them produce ill-formed XML, which then shows an error page in the browser (for instance, when submitting comments with certain sorts of errors). This is evidence that the draconian error handling approach is too challenging even for experts and imposes the costs of small mistakes on users.


I think the bigger lesson to be learned is that after poorly followed ad hoc standards have made a mess of things, it's hard to come in and clean up later.


Given a choice between two environments, one of which nags a nontechnical user pedantically over technical details, and another which displays the gist of entered content, perhaps with occasionally screwy formatting, which one would win?

Word processors won out over text processors for the nontechnical user partially for this reason.

Postel's law applied to HTML let nontechnical users get things done with less impedance. It's less important now not because it was the wrong choice, but because users have moved higher up the stack to CMSes that handle formatting etc.

Consistency across browsers back then was only ever of serious concern to professionals in design or browser programming.


Writing correct HTML is not significantly harder than writing crappy HTML. Comparing it to LaTeX vs. Word isn't a good analogy IMO. And wouldn't nontechnical users be using higher-level HTML editors anyway? At least some bad HTML comes from lazy developers who should know better, and could have done better with a stricter tool.


It's not entirely trivial to write out correct HTML. In addition to properly closing tags, you need to deal with optional end tags (many get this wrong), optional start tags, the odd comment syntax, weird exceptions in escaping, especially in script tags, and context-dependent validity, such as no block-level elements inside inline elements (or nested a tags, or tags whose "type" depends on their attributes).
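
A quick sketch of what that looks like in practice (far from exhaustive): everything above the "invalid below" comment is conforming HTML despite all the omitted tags, while the last two lines are invalid but a lenient parser just accepts or silently rewrites them instead of complaining.

    <!DOCTYPE html>
    <!-- the html, head and body tags are all omitted below, which is allowed -->
    <title>Optional tags</title>
    <p>First paragraph            <!-- </p> is optional; the next <p> implicitly closes it -->
    <p>Second paragraph
    <ul>
      <li>item one                <!-- </li> is optional too -->
      <li>item two
    </ul>
    <script>
      // the parser ends a script element at the first literal "</script>",
      // even inside a JS string, so it has to be escaped or split:
      var closer = "<\/script>";
    </script>
    <!-- invalid below: a block element inside an inline element, and nested links -->
    <span><div>block in inline</div></span>
    <a href="/a">outer <a href="/b">nested link</a></a>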

It's obviously easier to write than to read; but it's definitely easy to make a mistake. There's a lot of illogical cruft that's accumulated in HTML; so even a careful implementer might make a mistake (and might not detect it since most other implementations are so liberal).


I was talking about the 90s.



