For example, this CSS[1] resulted in a professional-looking 'blog post' layout complete with a header, subtitle, metadata, and footer when paired with a plaintext file.
I've also used this functionality to directly add titles to non-HTML content such as images[2]. Unfortunately no extant browser engines support this anymore.
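To make the plaintext case concrete, here's a sketch of the kind of response I mean (the stylesheet path is invented for the example): the text file is served as-is, and the styling rides along in a Link header.

    HTTP/1.1 200 OK
    Content-Type: text/plain; charset=utf-8
    Link: </styles/blog-post.css>; rel="stylesheet"

    The blog post text goes here, completely unstyled on the wire.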
It so happens that two months ago my cell ISP revamped their user page in such a way that it no longer displays ANYTHING. No accounting, no payments, no traffic, not a single letter. On all of my browsers. Just because it (possibly) looks cool? Or money must be spent. Or whatever. And you can't file a complaint - all you get is an AI chat bot. Of course it doesn't help with bringing the site back or getting an HTTP API. But it excels at annoying customers.
Anyway, customer support was almost always useless: "Update your browser, buy a new device" is all they have to say.
I'm not clear which bit doesn't work on other browsers. Is it the use of a Style Link in the headers rather than body? Or the use of an inline stylesheet as a data URI? Or the styling of a document with no content? Although the combination is unusual, none of those seems particularly exotic.
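For reference, the combination in question looks roughly like this: an empty body, with the whole stylesheet inlined as a data: URI in the Link header (Base64 payload elided here, as elsewhere in the thread):

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=utf-8
    Content-Length: 0
    Link: <data:text/css;base64,Ym9...9IA>; rel="stylesheet"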
Agreed. Also, the MDN compatibility table says it's experimental but that Chrome supports it, so maybe there's a difference in how browser implementations handle it?
There is nothing worse than believing something that isn't true. Imagine troubleshooting a front-end issue and not knowing about this. You think you know where all the CSS is coming from, and that it couldn't possibly produce the effect you're seeing. But actually you do not know where all the CSS is coming from. Adding subresources via HTTP header is pure evil.
Well, then I don't think I respect your opinion. It's true that tab state is, in general, a function over a truly vast array of variables, plus transitives. But there are some invariants that have been present in the browser from the beginning which constrain this diversity. This feature removes a foundational invariant, namely that the content (and style) of a page depends on the content of a resource. This is a profound mistake that is not in the same league as, for example, a misconfigured webpack. It is more akin to an intrusive, badly written browser extension that injects random crap into a page, but this is far worse because of its novelty, and because of its (potential) scale.
The fact of the matter is that what-a-resource-is includes the headers it is served with and the context in which it is requested. Consider https://lcamtuf.coredump.cx/squirrel/
It may not be normal in your day-to-day work, but it's definitely an important point of extensibility. Extensibility is indeed opposed to management (see _The Principle of Least Power_ for more on that front) but that is a trade off that worked really well for the web for decades. I for one would be really sad to see this further limited and would love for browser support for `Link` header behavior to increase rather than decrease.
By that reasoning Content-Length should include headers. Needless to say you have not changed my mind. There's never going to be a hard and fast rule distinguishing headers from content. It's all just data, after all. That doesn't mean it's a good idea to eliminate the distinction entirely.
It's almost as if software engineers are doomed to this pendulum of constraining and then under-constraining. It happened with SQL and NoSQL, and it seems to be happening now with HTTP. Constraints are for your own benefit, but maybe it takes time to really grok that.
That is a very strange question. Why would you ask it? Content-Type is the canonically proper use of an HTTP header field, as it is metadata about the content, rather than content itself.
One could argue that for some content, how to style it is just a suggestion anyway. I.e. you could think of “our style guide for text content can be found at this address” as metadata for semantic HTML documents.
That breaks down when you try to think like a content creator. Nowadays everyone talks about UX, and I'd say the styling has at least equal weight in terms of communicating ideas (a net negative if the document is shown without it applied; really pissed artists, etc.).
Can I just say: I wish web pages were actually programmable. Every time I put together a website now I think, damn, look at all this content stuck inside static HTML. I wish it were in a database and loaded from there, even if that database was a simple text file or whatever else. The web needs some evolution that isn't React and Next.js. Going back to pure text is killer.
We could have tested those ideas in a similar fashion to React and Next.js in JavaScript, and incorporated what works into native HTML, before sort of refactoring HTML into something else.
But Google doesn't want that. Google wants everything on the web to be JavaScript. So yes. Blame Google.
Does anyone know why XHTML failed? I remember deriving some satisfaction from making my website XHTML compliant back in the day.
Wouldn't it make it much easier to parse? Or does the motivation to "make stuff still try to work even when it's clearly broken" override that? (The same motivation gave us everything that's wrong with JavaScript, so I'm not sure I agree with it...)
XHTML was not backwards compatible and was harder to write for essentially no benefit to most people. "It's XML" was basically the entire list of selling points, and that was just not a compelling enough proposition — in fact, for many people, that was negative value.
I think it was HTML4 that was not forwards compatible.
XML is a subset of SGML, and HTML4 was SGML. XHTML was / is XML. So HTML4 parsers (being SGML parsers) shouldn't have issues with XHTML.
It's the other way around: XML parsers don't like HTML. So the attempt failed to change the default parsers to XML, as old webpages then had issues and the web crowd didn't like having to fix that. People wanted to continue to use the horrible SGML mess—only because nobody wanted to touch their HTML4.
So Google created their own web "standards council" and ignored the W3C henceforth. The rest is history.
The only part I don't get isn't even about the page in question:
> My first reaction was “why not just deliver something with Content-Type: text/plain; charset=utf-8 and dispense with the invalid code”, but perhaps that’s just me overthinking the non-existent problem.
Yes, it's about the HTML-less page https://no-ht.ml, which is UTF-8 plain text that browsers should be able to render.
Does anyone here have deep knowledge, quotes from grimoires, or relevant strange tales?
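If anyone wants to check, something like this shows what the server actually declares, without any browser guesswork:

    curl -sI https://no-ht.ml | grep -i '^content-type:'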
I tested with a bunch of browsers on mobile and desktop - some browsers forced a download of the plain text and/or wouldn't render the characters properly.
Some browsers assume the server is lying and think they know best.
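For what it's worth, there is a response header that asks browsers not to second-guess the declared type; whether the browsers above honour it for top-level documents is another question, so treat this as a guess rather than a fix:

    X-Content-Type-Options: nosniff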
Would a webserver configuration file count? The code on the webserver minifies the CSS file and embeds it into a Link header - you could configure a webserver to respond with a generated Link header and no content to a certain URL.
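As a rough sketch of that idea in nginx terms (untested, with the Base64 payload elided the same way as elsewhere in this thread):

    location = /no-code-webpage/ {
        # All the "content" lives in this header; the body is empty.
        add_header Link '<data:text/css;base64,Ym9...9IA>; rel="stylesheet"';
        return 200 "";
    }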
Finding the source in the header was obvious, but I can't figure out how to decode it properly. I can see the rules for body{} and @keyframes{}, but I have no idea where all the text (the before/after rules) is hiding...
The header " The Page With No Code " is in the html::before content property.
The first paragraph "It all started when..." is in the body::before content.
The second paragraph "This web page has..." is in the body::after content.
And finally, the footnote "* Obviously there's some code somewhere..." is in the html::after content property.
I found those in the inspector, sure. But I can't find them in the base64 encoded header.
Edit: Never mind. I tried like 10 different online base64 decoders before, till I found one which didn't show just garbage (maybe they all hiccup on the emojis?), but even that one didn't decode properly, it seems. Either the eleventh one finally told me the whole truth, or I should've used curl -i from the start instead of copy/pasting from Firefox's developer tools.
> The btoa() method creates a Base64-encoded ASCII string from a binary string (i.e., a string in which each character in the string is treated as a byte of binary data).
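Which explains the garbage: atob() gives you one character per byte, so the multi-byte UTF-8 sequences (the emojis) come out as mojibake unless you decode those bytes as UTF-8 again. A console sketch, with the payload elided as above:

    const b64 = 'Ym9...9IA'; // the value copied out of the Link header
    const bytes = Uint8Array.from(atob(b64), c => c.charCodeAt(0)); // binary string -> byte array
    console.log(new TextDecoder('utf-8').decode(bytes)); // decode those bytes as UTF-8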
> The web is an app delivery platform now, not just an information delivery platform.
No arguments there.
> Plus, we have a lot of processing power on the client side, which should be utilised.
I don't agree with the "should" here at all. Lots of stuff is done on the client these days that could be done more cheaply on the server. If I'm visiting a site that's developed as an application, then I fully expect to be running a bunch of JS, no qualms there. But there is no reason my phone should be forced to build a bunch of HTML just so I can read an article, and that's the problem: people are developing information sites as "apps". This is a large factor in why my phone can't hold a charge for a full day.
> people are developing information sites as "apps"
This statement nails the problem that all of us suffer from. It’s like people have to add “features” to make their site compelling because the information on it is not good enough.
The word "app" is a bit overloaded and I really just mean a buttload of client-side JS. What you're describing there is app-like but can be easily delivered with server-render HTML with minimal client-side JS enhancements. When I say "app" I'm more talking about an in-browser text editor, an image editor, a DAW, etc.
At the risk of sounding like a broken record, try and switch javascript off on your mobile device. It's like using an ad blocker, really.
But because we're not neanderthals, you can also whitelist the sites/apps you'd like to run JS on (in Chrome) and/or use another browser for your JS-powered sites/apps.
Right, but I often want to read those sites so I don't bother (but perhaps I should go for it). Though really I'm speaking vicariously through the hordes of non-technical people who don't even know what JS is and who enjoy long battery life.
You can enable per domain by clicking on the lock icon by the URL.
That said, the modern web is generally very broken with JS off, but it does get rid of most cookie banners and modal email sign-up prompts, and it generally lets you read article number 6 of the 5 free ones.
The web was an app delivery platform 20 years ago, at least that’s what the folks at Sun and Adobe tried to tell us. But then everyone pitched a collective fit and rejected injection of a runtime container into the browser. Now we have the current situation of an inferior technology and architecture being used as an app delivery platform. If one views a web browser as a proper platform for hosting applications I’d argue WASM and even the aforementioned tech (applets/flash) are superior.
In the same way, I think IntelliJ is a better application than an attempt to rewrite IntelliJ in JS and run it in the browser. This would hold for any number of applications. HTTP, HTML and the DOM were not designed to run rich applications. It’s been a continuous series of hacks to, IMO, solve a problem that was solved a year or two after this nonsense started. The problem was that of application delivery and updating. But iTunes quickly showed that widely accepted software did not need to be distributed on physical media and could be installed and updated over the network.
> Plus, we have a lot of processing power on the client side, which should be utilised.
It would be one thing if this was a matter of consent. It isn't. It uses your resources without your permission. Leaving JS on is not permission to consume as much CPU as you want (otherwise, every website would also run a crypto miner).
> If a site uses it badly, just don't go to that site.
This sort of blitheness is very surprising to me. What if we're talking about a passport site, or an unemployment benefits site, and so on? Why is the onus on the user and not the developer?
There needs to be an API in the client for specifying how much CPU can be used. Clearly things like crypto mining in the background are considered abuse, but what about just poorly written JS? Clients should be able to police the code they execute.
The choice is with the user. The onus is on the developer to build something good and efficient, otherwise people will not use the product.
If the user is forced to use a terrible government website (as we all are from time to time), removing JS will fix absolutely nothing there. If anything, it might make things worse, as there will be a bunch of things you simply cannot do without JS (e.g. cropping a photo to upload to a passport site - how would you even draw out the crop region without JS?!)
JS is not the problem. Crappy developers and corrupt officials who award key contracts to crappy developers are the problem.
Depends on what you're doing. Everyone uses the word "website" like it's just one type of thing. You DON'T need JavaScript for a website that has no interactability, like a personal blog with no comments or reactions features. You DO need it for... well, almost anything else you want to let the end user do.
I think a lot of these "you don't need JS" things you see out there are either (a) a sensible reminder that your JS-filled app doesn't have to do EVERYTHING with JavaScript (looking at you, JSS), (b) a contrarian "JS sucks" take, (c) an appeal to use outdated tech like PHP instead, or (d) just a clever way of getting attention on sites like hackernews.
There is a no-JS fallback. If you disable JS with uBlock Origin or whatever, then reload the page and try voting, you will see the page reload. But thread collapsing won't work at all.
My argument lately has been: Should we be allowing code to run inside of our documents? Am I okay with PDFs, jpegs, mp4s etc. running arbitrary code when I open them?
imo the extension is how we communicate the purpose of the contents: if it's a .js or .exe file, expect execution, and take appropriate precautions
while media _shouldn't_ run arbitrary code, there's also nothing stopping a malicious actor from crafting something that breaks through a buffer overflow in the .mp4 codec for example
the safest bet would probably be "assume anything can run arbitrary code, even if it's not supposed to"
> Plus, we have a lot of processing power on the client side, which should be utilised.
Preferably in useful work. The fact that end-user interfaces are sometimes slower than they were in the late 90s suggests this is often not the case.
It's still nice to make your page compatible with users who have Javascript turned off by default (noscript/umatrix). That way you can still use ancient HTML forms for server-side interactivity.
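For the record, the kind of thing I mean is just a plain form that posts back to the server (the URL and field names here are made up for the example):

    <form action="/comments" method="post">
      <label>Comment: <textarea name="body" required></textarea></label>
      <button type="submit">Post</button>
    </form>

No script needed; the server handles the submission and renders the next page.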
Firefox 50 was released like a week after Trump was elected. They do have an LTS version if you just want stability: https://www.mozilla.org/en-US/firefox/enterprise/ Or maybe it was just a typo, I don't know, sorry.
Nothing loaded for me in Chrome. Just a blank page and the following warning in the console:
"DevTools failed to load source map: Could not load content for chrome-extension://[removed]/sourceMap/chrome/scripts/iframe_form_check.map: System error: net::ERR_BLOCKED_BY_CLIENT"
Which might not be related, not sure, and not awake enough to dig into it further. Did it not load properly for anyone else?
That error indicates something weird going on with one of your extensions. The ID in the chrome-extension:// URL tells you which extension is generating the error so you can disable it.
I don't know why your browser is blocking a request to open the source code for one of your extensions. Maybe an extension trying to hide its own code?
To find out what's causing this, disable all extensions and enable them one by one until the problem occurs, I suppose. It'll be harder to find if your problem is caused by two different extensions, in which case you're going to need to test pairs...
- open developer tools, navigate to network and refresh
- or run from a terminal "curl -s --head 'https://danq.me/wp-content/no-code-webpage/' | sed -nE 's/.*base64,([^>]+)>.*/\1/p'"
- substitute that Base64 value into the code below:
document.head.innerHTML='<style>'+decodeURIComponent(atob('Ym9...9IA').split('').map(c => '%' + ('00' + c.charCodeAt(0).toString(16)).slice(-2)).join(''))+'</style>'
- paste the above string into the developer tools console and press return