Hacker News
A page with no code (danq.me)
375 points by edent on Jan 21, 2023 | 130 comments



For non-Firefox users wanting to manually render the page, grab the Base64 output from the Link response header, e.g.

- open developer tools, navigate to network and refresh

- or run from a terminal "curl -s --head 'https://danq.me/wp-content/no-code-webpage/' | sed -nE 's/.*base64,([^>]+)>.*/\1/p'"

- substitute the Base64 into the code below:

document.head.innerHTML='<style>'+decodeURIComponent(atob('Ym9...9IA').split('').map(c => '%' + ('00' + c.charCodeAt(0).toString(16)).slice(-2)).join(''))+'</style>'

- paste the above string into the developer tools console and press return


Or, use a different browser just to view this page.


Come on, this is Hacker News, not Yahoo Answers.


It's funny you said Yahoo Answers and not Stack Overflow, Reddit, or something similar. I haven't heard that name in a very long time.


I mean, a hack that actually worked would be more impressive than a hack that doesn't generally work, except on one implementation.


Related to https://no-ht.ml :

A few years ago, I wrote a small js library to convert HTML to unicode: https://www.npmjs.com/package/html2unicode


I made some similar concepts using generated content via Link header stylesheets over a decade ago at https://code.eligrey.com/css/link-header

For example, this CSS[1] resulted in a professional-looking 'blog post' layout complete with a header, subtitle, metadata, and footer when paired with a plaintext file.

I've also used this functionality to directly add titles to non-HTML content such as images[2]. Unfortunately no extant browser engines support this anymore.

1. https://code.eligrey.com/css/link-header/lorem-ipsum-2.css

2. https://eligrey.com/blog/title-image-files-in-opera/ (contains dead link to demo)


Neat, but this part is a bit odd:

> As a fan of CSS Naked Day and a firm believer in using JS only for progressive enhancement, I’m obviously in favour.

...And the page it is written on is slightly broken (images follow enlarged thumbnails of themselves) in FF with JS off.


Thanks; I hadn't noticed that. It's a side-effect from some terrible CMS plugin I thought I'd disabled. Fixing it now.

Edit: Fixed! Thanks again!


JS is SO overused and abused right now!

It so happens that two months ago my cell ISP revamped their user page in such a way that it no longer displays ANYTHING. No accounting, no payments, no traffic, not a single letter. On all of my browsers. Just because it (possibly) looks cool? Or money must be spent. Or whatever. And you can't file a complaint: all you get is an AI chatbot. Of course it doesn't help with bringing the site back or getting an HTTP API. But it excels at annoying customers.

Anyway, customer support was almost always useless: "Update your browser, buy a new device" is all they have to say.


Maybe a better solution would be "Find a new ISP"?


Sadly, the other one did the same thing, almost simultaneously :(


I'm not clear which bit doesn't work on other browsers. Is it the use of a Style Link in the headers rather than body? Or the use of an inline stylesheet as a data URI? Or the styling of a document with no content? Although the combination is unusual, none of those seems particularly exotic.


Agreed; also, the MDN compatibility table says it's experimental but that Chrome supports it, so maybe there's a difference in how browser implementations handle it?

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Li...
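For concreteness, the trick under discussion is just moving the stylesheet reference out of the document and into an HTTP response header. A sketch of how a server might build such a header value in Node (the CSS string here is a made-up placeholder, not the page's actual stylesheet):

```javascript
// Build a `Link: <data:...>; rel="stylesheet"` header value from a CSS string.
// The CSS below is a placeholder for illustration only.
const css = 'body{background:#abc}';

// Base64-encode the CSS and wrap it in a data: URI, as the page under
// discussion does.
const encoded = Buffer.from(css, 'utf8').toString('base64');
const linkHeader = `<data:text/css;base64,${encoded}>; rel="stylesheet"`;

console.log(linkHeader);
// A server would then send this header with an empty body, e.g.
//   res.setHeader('Link', linkHeader); res.end();
```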


The Link header seemed very exotic to me. I was very surprised to hear that it exists.


Same. And honestly its existence concerns me greatly, non-ironically.


Curious, what’s concerning? Links à la IETF RFC 5988 are pretty vanilla web stuff.


There is nothing worse than believing something that isn't true. Imagine troubleshooting a front-end issue and not knowing about this. You think you know where all the CSS is coming from, and that it couldn't possibly produce the effect you're seeing. But actually you do not know where all the CSS is coming from. Adding subresources via HTTP header is pure evil.


Seems like there's a multitude of similar situations to deal with in modern front-end development. This would hardly be more "evil" than anything else.


Well, then I don't think I respect your opinion. It's true that tab state is, in general, a function over a truly vast array of variables, plus transitives. But there are some invariants that have been present in the browser from the beginning which constrain this diversity. This feature removes a foundational invariant, namely that the content (and style) of a page depends on the content of a resource. This is a profound mistake that is not in the same league as, for example, a misconfigured webpack. It is more akin to an intrusive, badly written browser extension that injects random crap into a page, but this is far worse because of its novelty, and because of its (potential) scale.

This feature needs to die.


The fact of the matter is that what-a-resource-is includes the headers it is served with and the context in which it is requested. Consider https://lcamtuf.coredump.cx/squirrel/

It may not be normal in your day-to-day work, but it's definitely an important point of extensibility. Extensibility is indeed opposed to management (see _The Principle of Least Power_ for more on that front) but that is a trade off that worked really well for the web for decades. I for one would be really sad to see this further limited and would love for browser support for `Link` header behavior to increase rather than decrease.


By that reasoning content-length should include headers. Needless to say you have not changed my mind. There's never going to be a hard and fast rule distinguishing headers from content. It's all just data after all. That doesn't mean it's a good idea to eliminate the distinction entirely.

It's almost as if software engineers are doomed to this pendulum of over-constraining and then under-constraining. It happened with SQL and NoSQL, and it seems to be happening now with HTTP. Constraints are for your own benefit, but maybe it takes time to really grok that.


At a cursory search, Link appears in 1996. It was there before CSS.

https://www.rfc-editor.org/rfc/rfc1945#appendix-D.2.6


Wow, you're right! Good that it was summarily ignored until now I guess.


I fear we’re getting into xkcd 386 but…

As svieira says above, Link is a standard thing. It’s widely used. CSS is even based on it. Another example is ‘canonical’.

https://http.dev/link#canonical

You may not have encountered it personally, but I don’t think it’s wise to extrapolate from that.


I've been making websites since 1996 and I've never seen it. Not once.


No, content and the resource are two different things. The resource is the content and its metadata. Content length is what it says.


So would you also want the Content-Type header to die?


That is a very strange question. Why would you ask it? Content-Type is the canonically proper use of an http header field, as it is metadata about the content, rather than content itself.


One could argue that for some content, how to style it is just a suggestion anyway. I.e., you could think of “our style guide for text content can be found at this address” as metadata for semantic HTML documents.


That breaks down when you try to think like a content creator. Nowadays everyone talks about UX; I'd say the styling has at least equal weight in terms of communicating ideas (a net negative if the document is shown without it applied: really pissed-off artists, etc.).


To see the magic:

    curl -I https://danq.me/wp-content/no-code-webpage/ | grep -oP 'base64,\K[A-Za-z0-9+/=]+' | base64 -d


Can I just say: I wish web pages were actually programmable. Every time I put together a website now I think, damn, look at all this content stuck inside static HTML. I wish it were in a database and loaded from there, even if that database was a simple text file or whatever else. The web needs some evolution that isn't React and Next.js. Going back to pure text is killer.


Web pages are programmable - way more so than desktop or mobile apps.

You can open up the DevTools and start poking around - including running your own JS against the page DOM using the console.

You can write bookmarklets, or user scripts, or even full browser extensions that run your own custom scripts.

You can even run a headless browser to automate this stuff further - a trick I use a bunch with my shot-scraper CLI tool, see https://simonwillison.net/2022/Mar/14/scraping-web-pages-sho... and https://til.simonwillison.net/shot-scraper/scraping-flourish


Sounds like messengers with a one-to-many communication feature (e.g. Telegram) fit this requirement, at least partially.

It might not be as interactive as most websites, but it allows you to get data through an API or develop your own client.


Basically remaking something that replaces HTML.

We could have tested those ideas in a similar fashion to React and Next.js in JavaScript, and incorporated what works into native HTML, before sort of refactoring HTML into something else.

But Google doesn't want that. Google wants everything on the web to be Javascript. So yes. Blame Google.


XSLT?


Have you tried coding XSLT before? I had to for one job, it was a cool idea, but extremely difficult to do anything, and very restrictive.

As much as I wanted it to work out (and tried making a personal website using it for a little while), I would never go back to it today.


Frankly HTML5 isn't XML any more. Very bad move…


Does anyone know why XHTML failed? I remember deriving some satisfaction from making my website XHTML compliant back in the day.

Wouldn't it make it much easier to parse? Or does the motivation to "make stuff still try to work even when it's clearly broken" override that? (The same motivation gave us everything that's wrong with JavaScript, so I'm not sure I agree with it...)


XHTML was not backwards compatible and was harder to write for essentially no benefit to most people. "It's XML" was basically the entire list of selling points, and that was just not a compelling enough proposition — in fact, for many people, that was negative value.


> XHTML was not backwards compatible

Sure?

I think it was HTML4 that was not forwards compatible.

XML is a subset of SGML, and HTML4 was SGML. XHTML was / is XML. So HTML4 parsers (as being SGML parsers) shouldn't have issues with XHTML.

It's the other way around: XML parsers don't like HTML. So the attempt to change the default parsers to XML failed; old webpages then had issues, and the web crowd didn't want to fix that. People wanted to continue using the horrible SGML mess, only because nobody wanted to touch their HTML4.

So Google created their own web "standards council" and ignored the W3C henceforth. The rest is history.


You can still write HTML in XML syntax. It's not wrong afaik.

But HTML can also be written in an absolutely quirky way that isn't even SGML. Just a horrible mess!
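For anyone who never wrote it, the difference was mostly syntactic strictness: an XHTML document had to be well-formed XML. A minimal illustrative example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Example</title></head>
  <body>
    <!-- Every element must be closed; void elements self-close. -->
    <p>Hello<br /></p>
  </body>
</html>
```

An HTML parser, by contrast, recovers from unclosed tags, uppercase tag names, and unquoted attributes; an XML parser rejects such a document outright.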


The only part I don't get isn't even about the page in question:

> My first reaction was “why not just deliver something with Content-Type: text/plain; charset=utf-8 and dispense with the invalid code?”, but perhaps that’s just me overthinking the non-existent problem.

Yes, it's about the HTML-less plain-text page https://no-ht.ml, which is UTF-8 plain text that browsers should be able to render.

Does anyone here have deep knowledge, quotes from grimoires, or relevant strange tales?


I wrote about it at https://shkspr.mobi/blog/2022/12/you-dont-need-html/

I tested with a bunch of browsers on mobile and desktop - some browsers forced a download of plain text and / or wouldn't render the characters properly.

Some browsers assume the server is lying and think they know best.


Thanks for the post :)

I am wondering: why do you use the obsolete <plaintext> instead of <pre>?

Why do you use the `ml` extension?

By the way, I noticed some issues in the page:

- trailing whitespace

- a mix of spaces and tabs

- some wide characters, such as ▽, seem to create alignment issues


> why do you use the obsolete <plaintext> instead of <pre>?

Because it is funny.

> Why do you use the `ml` extension?

Because it is funny. Also, it was free.

> By the way, I noticed some issues in the page:

The source is open. Feel free to correct those issues.



Good job! That's so clever it crashed my Firefox.


A plain-text document caused a crash? That sounds really strange.

It works fine for me on latest Linux Firefox.


It didn't like me clicking around in the dev tools trying to figure out how it works. I can't reproduce it now, so I suppose it was a freak bug.


Oh, the dev tools?!

Yes, there's something not OK in FF in this regard. As I clicked around in the inspector, my fan spun up and FF hung for one or two seconds.

My best guess would be that it doesn't like such a "long line" (as it shows all the chars without line breaks in the inspector).

Also last year FF crashed once on my box while using the dev tools. I was shocked as usually nothing crashes on this box, ever.

So they have likely some severe bugs left in the dev tools, I guess.


Sigh. Wait until she learns about the static initialization order fiasco...


Oh my. Should have gone here... https://news.ycombinator.com/item?id=34464993


Does it also work if the response code is changed to HTTP 204 No Content?


It does.


Technically there is code on the back-end server, now make a webpage with no code at all


Would a webserver configuration file count? The code on the webserver minifies the CSS file and embeds it into a Link header - you could configure a webserver to respond with a generated Link header and no content to a certain URL.

Here is an example configuration file for nginx: https://snip.dssr.ch/?7a4e21dde5f7916b#2aPagYEg3kuWg3YN8Q5oz...

Though I would argue that the header-embedded CSS is still code, regardless how it is delivered.
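Since that paste may rot: a hedged sketch of such an nginx config (the Base64 payload is the abbreviated placeholder from upthread, not the full stylesheet):

```nginx
location = /no-code-webpage/ {
    # CSS delivered entirely via the Link response header; body stays empty.
    add_header Link '<data:text/css;base64,Ym9...9IA>; rel="stylesheet"';
    return 204;  # No Content -- reportedly also works, per the thread
}
```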


An empty file is technically a webpage, right?


Is 'about:mozilla' close enough for you? (Firefox only obv.)


Easy. Just use the nocode language for that.

https://github.com/kelseyhightower/nocode


I'm going to wait until the Linux install script has been merged - https://github.com/kelseyhightower/nocode/pull/4967


base64 encoded url thingy


Thinking of blob urls lol


Finding the source in the header was obvious, but I can't figure out how to decode it properly. I can see the rules for body{} and @keyframes{}, but have no idea where all the text (the before/after rules) is hiding...


The header " The Page With No Code " is in the html::before content property. The first paragraph "It all started when..." is in the body::before content. The second paragraph "This web page has..." is in the body::after content. And finally, the footnote "* Obviously there's some code somewhere..." is in the html::after content property.

I ran the decoded base64 string through a CSS beautifier; this might help: https://gist.github.com/ldjb/8c6b6d83f2acd3cef01fcf56b36d65f...
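Schematically, the decoded stylesheet keeps all visible text in generated-content rules, roughly like this (strings abbreviated from the description above, not the real rules):

```css
/* All visible text lives in generated content; the document body is empty. */
html::before { content: "The Page With No Code"; }                      /* header */
body::before { content: "It all started when..."; }                     /* first paragraph */
body::after  { content: "This web page has..."; }                       /* second paragraph */
html::after  { content: "* Obviously there's some code somewhere..."; } /* footnote */
```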


I found those in the inspector, sure. But I can't find them in the base64 encoded header.

Edit: Never mind. I tried like 10 different online Base64 decoders before, till I found one which didn't show just garbage (maybe they all hiccup on the emojis?), but even that one didn't decode properly, it seems. Either the eleventh one now told me the whole truth, or I should've used curl -i from the start instead of copy/pasting from Firefox's developer tools.


  curl -SsI https://danq.me/wp-content/no-code-webpage/ | \
    grep -i ^link: | \
    cut -d, -f2 | cut -d\> -f1 | \
    base64 -d

The Base64 encoding is invalid, or at least not ideal, as the '==' padding has been removed [1]. Restoring it works:

  (curl -SsI https://danq.me/wp-content/no-code-webpage/ | \
    grep -i ^link: | \
    cut -d, -f2 | cut -d\> -f1; echo '==') | \
    base64 -d

[1] https://gist.github.com/Dan-Q/fc308a8a4aca2934312939f92eaa9d...
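The padding fix can also be computed rather than hard-coded: valid Base64 has a length that is a multiple of four, so append however many '=' are missing. A sketch in Node (the input here is a tiny placeholder, not the page's actual header; Node's Buffer happens to tolerate missing padding, but strict decoders like `base64 -d` do not):

```javascript
// Restore stripped '=' padding so strict Base64 decoders accept the input.
function padBase64(s) {
  // Pad the length up to the next multiple of 4.
  return s + '='.repeat((4 - (s.length % 4)) % 4);
}

// Placeholder input: 'aGk' is "hi" in Base64 with its '=' padding stripped.
const unpadded = 'aGk';
const decoded = Buffer.from(padBase64(unpadded), 'base64').toString('utf8');
console.log(decoded); // "hi"
```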


> ... instead of copy/pasting from Firefox's developer tools.

You may toggle the "raw" switch in the upper-right corner; otherwise Firefox will trim the header to something like "jh6…W5n".


FYI, browsers have built-in functions for encoding and decoding Base64: btoa() and atob(). Although I can never remember which is which.


Guessing from the names:

btoa() should be base64 to ascii

atob() should be ascii to base64

Edit: it’s the other way around, lolwut. https://developer.mozilla.org/en-US/docs/Web/API/atob


No one uses these often, and you have a 50% chance of guessing right every time!


This other page has some more details

> The btoa() method creates a Base64-encoded ASCII string from a binary string (i.e., a string in which each character in the string is treated as a byte of binary data).

https://developer.mozilla.org/en-US/docs/Web/API/btoa

So the naming makes sense after all.

btoa() is binary to (base64 encoded) ascii.

But I’m still not gonna remember this either :^)
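A quick sanity check of the naming (atob/btoa exist as globals in modern browsers, and in Node since v16):

```javascript
// btoa(): binary string -> Base64-encoded ASCII ("binary to ASCII").
const b64 = btoa('hi');
console.log(b64); // "aGk="

// atob(): Base64-encoded ASCII -> binary string ("ASCII to binary").
console.log(atob('aGk=')); // "hi"

// Caveat: both operate on "binary strings" (code points 0-255). For
// arbitrary UTF-8 text you need the charCodeAt/decodeURIComponent dance
// quoted earlier in the thread, or TextEncoder/TextDecoder.
```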


Related:

You Don't Need HTML - https://news.ycombinator.com/item?id=33645398 - Nov 2022 (32 comments)

HTML is all you need to make a website - https://news.ycombinator.com/item?id=33642490 - Nov 2022 (153 comments)


I think it's time to write a blog article titled "You Don't Need the Internet".



That's a newspaper or a zine, isn't it?


It's a file on a portable storage medium.


RIP screen readers


swf is all you need


big hug


Content-type: image/gif


You can do a whole lot with just 20 lines of modern JS.


No JS on that page either!


Well, if it's not in the body of the response...


People can choose to avoid JS if they want, but a lot of users get a HUGE amount of value from JS.

The web is an app delivery platform now, not just an information delivery platform.

Plus, we have a lot of processing power on the client side, which should be utilised.

JS, when used well is not a bad thing. If a site uses it badly, just don't go to that site.


> The web is an app delivery platform now, not just an information delivery platform.

No arguments there.

> Plus, we have a lot of processing power on the client side, which should be utilised.

I don't agree with the "should" here at all. Lots of stuff is done on the client these days which could be done more cheaply on the server. If I'm visiting a site that's developed as an application then I fully expect to run a bunch of JS, no qualms there. But there is no reason my phone should be forced to build a bunch of HTML just so I can read an article, and that's the problem: people are developing information sites as "apps". This is a large part of why my phone can't hold a charge for a full day.


> people are developing information sites as "apps"

This statement nails the problem that all of us suffer from. It’s like people have to add “feature” to make their site compelling because the information on it is not good enough.


I would happily build a service that simply took all the JS heavy sites people visit and present them as basic HTML.

Unfortunately, I don't think there'd be enough takers to make it a viable product.

Information for free, without ads, is tough to do.


> But there is no reason my phone should be forced to build a bunch of HTML just so I can read an article

But the goal usually isn’t to let you read an article.

Often the goal is to get you to read another article, click an ad, sign up for a newsletter, or buy a subscription. Apps can be good for all that.


Original comment: "users get a HUGE amount of value from JS"

You are correct that the goal is to get you to read another article, click an ad, sign up for a newsletter, or buy a subscription.

But I think that contradicts the idea that users get a lot of value from JS.


The word "app" is a bit overloaded, and I really just mean a buttload of client-side JS. What you're describing there is app-like but can be easily delivered with server-rendered HTML and minimal client-side JS enhancements. When I say "app" I'm talking more about an in-browser text editor, an image editor, a DAW, etc.


At the risk of sounding like a broken record, try and switch javascript off on your mobile device. It's like using an ad blocker, really.

But because we're not neanderthals, you can also whitelist the sites/apps you like to run JS (in Chrome) and/or use another browser for your JS-powered sites/apps.


Right, but I often want to read those sites so I don't bother (but perhaps I should go for it). Though really I'm speaking vicariously through the hordes of non-technical people who don't even know what JS is and who enjoy long battery life.


You can enable per domain by clicking on the lock icon by the URL.

That said, the modern web is generally very broken with JS off, but it does get rid of most cookie banners and modal email sign-up prompts, and generally lets you read article number 6 of the 5 free ones.


The web was an app delivery platform 20 years ago, at least that’s what the folks at Sun and Adobe tried to tell us. But then everyone pitched a collective fit and rejected injection of a runtime container into the browser. Now we have the current situation of an inferior technology and architecture being used as an app delivery platform. If one views a web browser as a proper platform for hosting applications I’d argue WASM and even the aforementioned tech (applets/flash) are superior.


In what ways are applets/flash superior to browser engines/JS? I really can't see it at all.


In the way I think IntelliJ is a better application than attempting to rewrite IntelliJ in JS and run it in the browser. This would hold for any number of applications. HTTP, HTML, and the DOM were not designed to run rich applications. It's been a continuous series of hacks to, IMO, solve a problem that was solved a year or two after this nonsense started. The problem was that of application delivery and updating. But iTunes quickly showed widely accepted software did not need to be distributed on physical media but could be installed and updated over the network.


> Plus, we have a lot of processing power on the client side, which should be utilised.

It would be one thing if this was a matter of consent. It isn't. It uses your resources without your permission. Leaving JS on is not permission to consume as much CPU as you want (otherwise, every website would also run a crypto miner).

> If a site uses it badly, just don't go to that site.

This sort of blitheness is very surprising to me. What if we're talking about a passport site, or an unemployment benefits site, and so on? Why is the onus on the user and not the developer?


There needs to be an API in the client for specifying how much CPU can be used. Clearly things like crypto mining in the background are considered abuse, but what about just poorly written JS? Clients should be able to police the code they execute.


The choice is with the user. The onus is on the developer to build something good and efficient, otherwise people will not use the product.

If the user is forced to use a terrible government website (as we all are from time to time), removing JS will fix absolutely nothing there. If anything, it might make things worse, as there will be a bunch of things you simply cannot do without JS (e.g cropping a photo to upload to a passport site - how would you even draw out the crop region without JS?!)

JS is not the problem. Crappy developers and corrupt officials who award key contracts to crappy developers are the problem.


It's the ads that are the problem. A crappy website is sad but fixable. At least it provides some value.

Ads are a nuisance, a race to the bottom.


Well said.


> Plus, we have a lot of processing power on the client side, which should be utilised.

That does not mean it should be utilised by you. Have some humility as a developer.


Depends on what you're doing. Everyone uses the word "website" like it's just one type of thing. You DON'T need JavaScript for a website that has no interactability, like a personal blog with no comments or reactions features. You DO need it for... well, almost anything else you want to let the end user do.

I think a lot of these "you don't need JS" things you see out there are either (a) a sensible reminder that your JS-filled app doesn't have to do EVERYTHING with JavaScript (looking at you, JSS), (b) a contrarian "JS sucks" take, (c) an appeal to use outdated tech like PHP instead, or (d) just a clever way of getting attention on sites like hackernews.


It’s funny you mention HN. With comments and voting and no js.


How do you think HN handles voting and collapsing threads without a page load?

It's this JavaScript that is loaded at the bottom of every page:

https://news.ycombinator.com/hn.js

There is a no-JS fallback. If you disable JS with uBlock Origin or whatever, then reload the page and try voting, you will see the page reload. But thread collapsing won't work at all.


My argument lately has been: Should we be allowing code to run inside of our documents? Am I okay with PDFs, jpegs, mp4s etc. running arbitrary code when I open them?



imo the extension is how we communicate the purpose of the contents: if it's a .js or .exe file, expect execution, and take appropriate precautions

while media _shouldn't_ run arbitrary code, there's also nothing stopping a malicious actor from crafting something that breaks through a buffer overflow in the .mp4 codec for example

the safest bet would probably be "assume anything can run arbitrary code, even if it's not supposed to"


> Plus, we have a lot of processing power on the client side, which should be utilised.

Preferably in useful work. The fact that end-user interfaces are sometimes slower than they were in the late 90s suggests this is often not the case.


It's still nice to make your page compatible with users who have Javascript turned off by default (noscript/umatrix). That way you can still use ancient HTML forms for server-side interactivity.
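The classic pattern is a plain form that round-trips through the server, which browsers handle with no JS at all (a generic sketch; the endpoint and field names are made up):

```html
<!-- Works with JS disabled: the browser submits and the server re-renders. -->
<form method="post" action="/comments">
  <textarea name="body" required></textarea>
  <button type="submit">Post comment</button>
</form>
```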


I think PWAs are the future. The industry is just slow at getting to it.


It's not that the industry is slow. It's just hard to push PWAs forward when Apple is actively hostile toward web apps.


Still waiting on iOS Safari push notifications to actually work...


Why haven’t PWAs displaced native Android apps? That can’t be Apple’s fault, can it?


The only people who truly want PWAs are web developers and product managers that do not prioritize their users.

Why learn anything new or spend anything extra when you can just ship substandard web crapware to users who have no choice (looking at you, Slack).


Isn't that like saying only big companies with specialized roles are allowed to provide app-like functionality?

Among others, smaller shops and single-person companies surely benefit from one delivery platform if you ask me. Even if it's not perfect.


I don't see how it's like saying that. Do you think a single person can develop a PWA but a single person can't develop an Android app?


> It’ll probably only work if you’re using Firefox

This sentence both made me think:

"Seriously; a page that only works in one browser?"

and

"Finally something that only works in Firefox, that'll teach those Chromers!"


Does the page actually work for anyone?

I'm on Firefox and as soon as I click the link from the blog, it's just forever loading.


Works for me on mobile Firefox 109.1.1


Works for me (FF 109.0)


Followup: It does work for me after all, but the server was way too overloaded to serve it promptly.


Worked for me on Windows 11 Firefox 109.0 64-bit


ff 50, linux, works.


Firefox 50 was released like a week after Trump was elected. They do have an LTS version if you just want stability: https://www.mozilla.org/en-US/firefox/enterprise/ Or maybe it was just a typo, I don't know, sorry.


I just don't have the option to upgrade here. :-/


Nothing loaded for me in Chrome. Just a blank page and the following warning in the console:

"DevTools failed to load source map: Could not load content for chrome-extension://[removed]/sourceMap/chrome/scripts/iframe_form_check.map: System error: net::ERR_BLOCKED_BY_CLIENT"

Which might not be related, not sure, and not awake enough to dig into it further. Did it not load properly for anyone else?


That error indicates something weird is going on with one of your extensions. The GUID tells you which extension is generating the error so you can disable it.

I don't know why your browser is blocking a request to open the source code for one of your extensions. Maybe an extension trying to hide its own code?

To find out what's causing this, disable all extensions and enable them one by one until the problem occurs, I suppose. It'll be harder to find if the problem is caused by two different extensions, in which case you're going to need to test pairs...



