PDFs are about the worst thing out there if you're frustrated by documents with embedded Turing-complete languages.
And large bitmaps are rather bandwidth-inefficient, and inaccessible to boot.
I don't mind markup. I mind tech that doesn't include fallbacks, and I mind people using Turing-complete languages for things that can easily be done in a less powerful language. (Turing-complete languages tend to get abused to the point that people want them executed quickly and with little memory use, which means the language implementation gets complex, and hence buggy, by simple laws of probability; almost any vulnerability can generally be exploited by a suitable script in a TC language; and TC languages can be used to do unexpected things.)
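To make the "less powerful language" point concrete, here's a tiny sketch of my own (nothing from the thread), showing the same "load some settings" job done with a full evaluator versus a deliberately weak data format:

```typescript
// Illustrative sketch: the same "load some settings" task done two ways.
// The eval() version hands a Turing-complete language to whoever controls
// the input; JSON.parse() only accepts a small, non-Turing-complete grammar.

const untrustedInput = '{"theme": "dark", "fontSize": 14}';

// Risky: the input is a full program. It can loop forever, allocate without
// bound, or reach anything in scope.
const settingsViaEval = eval("(" + untrustedInput + ")");

// Safer: JSON is just data. Malformed or malicious input throws, and the
// worst case is a parse error, not arbitrary computation.
const settingsViaJson = JSON.parse(untrustedInput);

console.log(settingsViaEval, settingsViaJson);
```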
HTML is decent. HTML+CSS is worse than HTML (for the same reason that COMEFROM is worse than GOTO). Markup is better than HTML in many ways, or rather most formal specifications of Markup-like languages are better than HTML. (Unfortunately, most variants of Markup aren't specified beyond "do what I do".)
I've been tempted for a while to write a Markup browser. (Mop? Markup-Over-tcP?)
So, I'm replying mainly to his assertion that any interactive app can be prerendered on the server given current technology. Given that webapps span the range from "dynamic forms" to browser video games, I can think of no way of implementing the full breadth of modern web applications other than essentially video-streaming the output and polling input on the client, which, yes, is bandwidth-intensive.
That's the entire reason why js and html and friends have become so popular. You essentially have full applications developed by people who recognize that the bottleneck for a networked application's responsiveness in 2015 is not CPU cycles and memory but internet latency. It's more efficient to send a small text file, have the client fill in the gaps, and have the client AJAX for updates than to have the server do the heavy work.

I think you argued elsewhere that this is a waste of processor time, redundantly rendering content, and thus a needless waste of energy (as in real energy, turned from electricity into waste heat). As a tree hugger, I have to admit I find that argument somewhat compelling. I think a balance needs to be struck between user experience and use of resources. It might be that your balance point lies further towards "saving of resources" than mine does.
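To illustrate what I mean by "fill in the gaps" (the endpoint name and payload shape below are made up for the example): the server ships a small page once, and the client polls for data and patches the page itself instead of asking the server to re-render everything.

```typescript
// Sketch of the "small page + client-side updates" pattern.
// The endpoint name and payload shape are hypothetical.

interface StatusPayload {
  unreadCount: number;
  lastUpdated: string;
}

async function refreshStatus(): Promise<void> {
  const response = await fetch("/api/status", { headers: { Accept: "application/json" } });
  if (!response.ok) return; // keep showing the last known state on failure

  const status: StatusPayload = await response.json();

  // Patch only the parts of the page that changed, instead of fetching a
  // freshly rendered page from the server.
  const badge = document.querySelector("#unread-badge");
  if (badge) badge.textContent = String(status.unreadCount);
}

// Poll for updates; the server never re-renders the whole page.
setInterval(() => { refreshStatus().catch(console.error); }, 30_000);
```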
Again, there is a (massive) distinction between Turing-complete languages and non-Turing-complete ones in terms of how often they are exploited (and in terms of how severe said exploits tend to be). I don't mind non-Turing-complete languages being run clientside, but do mind Turing-complete languages being run clientside, for mainly that reason. (Also because Turing-complete languages tend to end up abused in terms of resource use.)
And you can do most, if not all, of a thin client while keeping bandwidth use sane. (Note that in almost all cases where bandwidth use would be excessive for a thin client in a web setting, it would also be excessive for a fat client in a web setting.)
It's mainly that current tech has settled on the brute-force approach of "let's stream every pixel and then try to compress it" as opposed to saner approaches (vector graphics, remote compositing, that sort of thing). But note that there are, for example, remote desktop protocols that work well (for most things) over a dial-up connection!
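Rough numbers, just to illustrate the gap (the per-command sizes below are my own guesses; the pixel arithmetic is exact):

```typescript
// Back-of-the-envelope comparison: raw pixel streaming vs. sending
// higher-level drawing commands. The command sizes are illustrative guesses.

const width = 1280;
const height = 720;
const bytesPerPixel = 3;   // 24-bit colour, uncompressed
const framesPerSecond = 30;

// Uncompressed pixel streaming: every frame is the whole framebuffer.
const rawBytesPerSecond = width * height * bytesPerPixel * framesPerSecond;
console.log(`raw pixels: ~${(rawBytesPerSecond / 1e6).toFixed(0)} MB/s`); // ~83 MB/s

// Remote drawing commands: only describe what changed, e.g. "draw this text
// run here, fill that rectangle there". A busy screen update might be a few
// hundred commands of a few dozen bytes each.
const commandsPerFrame = 300;
const bytesPerCommand = 32;
const commandBytesPerSecond = commandsPerFrame * bytesPerCommand * framesPerSecond;
console.log(`draw commands: ~${(commandBytesPerSecond / 1e3).toFixed(0)} KB/s`); // ~288 KB/s
```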
Umm... no. That sounds absolutely awful in several ways. I just believe the web is about choice, and users shouldn't be expected to use the exact setup that the service providers demand. You can avoid a huge class of vulnerabilities by disabling javascript, not to mention all the privacy benefits of doing so. With javascript disabled, all trackers can collect is a timestamped log of when you made what request — which is sensitive information, but not as bad as also having your cursor movements and scrolling and everything else logged. Disabling javascript can also massively improve performance.
That doesn't mean that websites shouldn't use javascript (I'd not want to use a todo list application that forces me to reload the page on every click), but it's still useful to support working without it, because some day I might need to access my todos from an environment where I can't use javascript for whatever reason.
People claim that it's too much work to support something so niche, but if you architect your application well, using progressive enhancement, you get support for no-javascript environments, plus a ton of other stuff like server-side prerendering for latency reduction, accessibility, SEO, text-based browser functionality — all virtually for free.
Not that this should be applied dogmatically everywhere; it's not worth supporting no-javascript environments or scrapers in your browser-based photo editor.
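For the cases where it is worth doing, the enhancement layer can be quite small. A rough sketch (element IDs and the endpoint are hypothetical): the plain HTML form submission keeps working with javascript disabled, and the script only upgrades it to an in-page update.

```typescript
// Progressive-enhancement sketch: the underlying <form action="/todos" method="post">
// works with javascript disabled; this script only upgrades the experience.
// Element IDs and the endpoint are made up for illustration.

const form = document.querySelector<HTMLFormElement>("#add-todo-form");
const list = document.querySelector<HTMLUListElement>("#todo-list");

if (form && list) {
  form.addEventListener("submit", async (event) => {
    event.preventDefault(); // only reached when javascript is running

    const response = await fetch(form.action, {
      method: form.method,
      body: new FormData(form),
    });

    if (!response.ok) {
      form.submit(); // fall back to the ordinary full-page submission
      return;
    }

    // Server returns the new item's text; append it without a reload.
    const { text } = await response.json();
    const item = document.createElement("li");
    item.textContent = text;
    list.appendChild(item);
    form.reset();
  });
}
```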
It would be even better if servers returned raw data in XML, with a linked XSLT transform to convert it into a structured document, CSS to manage the presentation, and javascript to control the interactivity. That idea died long ago, but some modern applications are finally replicating most of the good parts of that system.
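For what it's worth, the browser-side half of that pipeline still exists via the standard XSLTProcessor API. A rough sketch, with hypothetical URLs:

```typescript
// Sketch of the XML + XSLT idea done client-side with the standard
// XSLTProcessor API. The URLs are hypothetical.

async function fetchXml(url: string): Promise<Document> {
  const response = await fetch(url);
  const text = await response.text();
  return new DOMParser().parseFromString(text, "application/xml");
}

async function renderTodos(): Promise<void> {
  // Raw data and presentation rules are fetched separately.
  const [data, stylesheet] = await Promise.all([
    fetchXml("/todos.xml"), // raw data
    fetchXml("/todos.xsl"), // how to turn it into a document
  ]);

  const processor = new XSLTProcessor();
  processor.importStylesheet(stylesheet);

  // The transform produces a document fragment; CSS then styles it and
  // any javascript can add interactivity on top.
  const fragment = processor.transformToFragment(data, document);
  document.body.replaceChildren(fragment);
}

renderTodos().catch(console.error);
```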
I think you have something here. I'm only a noob web hacker, but from what I gather and read, separating content from the application has been one of the things web developers strive for. It isn't always easy, however (which, no, is not always a good excuse, but is probably often used). This separation is essentially what html, and even pdf, is supposed to be for.
I was more wondering what you meant by prerendering things, because even when you open an html page from your local drive, the client is still doing rendering (typesetting and such). That's one extreme... so it sounds like you were asking for a return to the pre-AJAX internet, which, as you say here, doesn't fit everything.
So I think we don't really disagree here, unless I've misunderstood you?