Hacker News

6) Sending data but staying on the current page.

You can make links or forms submit but not cause browser navigation by having your server respond with a 204 No Content.

I feel like this is a completely forgotten technique, but we used it a ton pre-AJAX for little one-off things: shooting a message to the server without interruption.




That's really cool! I didn't know about this. Thanks for sharing.

How do you handle reflecting the new state in the UI though? Do you just use JavaScript anyway?

I have a form which looks like a toggle, and interacting with the toggle initiates a POST request. If the server responds with a 204, the UI doesn't actually update to reflect the new toggled state. I'm not sure how to work around this.


Could you do something like "<form action="#sent">" and use the ":target" CSS pseudo-class on an element with a message?


I don't think so. I think the form still needs to point to the URI that is expecting the POST request. Maybe I'm not imagining hard enough, but I don't think this approach will work.

Furthermore, according to this article[0], the `:target` approach is viable when it's acceptable to change the browser history, whereas in my case that is exactly what I'm trying to avoid.

[0]: https://css-tricks.com/css-target/


> Maybe I'm not imagining hard enough, but I don't think this approach will work.

I've never heard this turn of phrase before, but I really like it.


In my general use, it was often cases where you either didn't need any feedback or some minor `:active` styling was enough.

We also did just use some JS to, for instance, hide the button. Our goal wasn't to avoid JS per se; AJAX just wasn't really a thing yet.


I can't think of a solution to this either, but there are other ways of sending non-refreshing requests: toggling background-images, loading=lazy <img> elements, and so on.


There is the risk however of getting a bit too clever with this, and violating the principle of least astonishment.


From memory, there was a time years ago when that's how voting on HN worked: the arrows were just links that returned a 204 No Content response. I can't remember exactly, but there may not even have been any feedback in the UI (the vote count increasing or the arrows disappearing), although maybe that was done with JS (you could do it with just CSS now). I have distinct memories of seeing the 204 responses when looking at how HN was made. It's no longer done like that, though.


I wonder if that's because of "Show HN: This up votes itself" https://news.ycombinator.com/item?id=3742902


That’s the one, I remember that!


While neat, browser navigation isn't a bad thing on forms considering it is good to notify the user that the form was submitted. Otherwise, they will likely just keep hitting submit.


I actually really like using forms to submit to an iframe on the page. You can set a form's target attribute to the name of an iframe, so on form submission only the iframe gets reloaded.
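For example (the element names here are invented), the form's target matches the iframe's name attribute:

```html
<!-- Sketch: the form's target matches the iframe's name attribute,
     so submitting reloads only the iframe, not the page. -->
<form action="/vote" method="post" target="result-frame">
  <input type="hidden" name="item" value="42">
  <button name="vote" value="up">Upvote</button>
</form>
<iframe name="result-frame" title="submission result" hidden></iframe>
```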


Top-level commenter here - I've actually got a little toy I wrote a while ago that uses motion-JPEG on an <input type="image"> to allow multiple users to draw on the video without using any JS.

It used to use HTTP 204s, but serving the 204 made Firefox stop streaming the motion JPEG!? When I showed it off on HN in 2020, a PR was submitted that switched it to your iframe method, which works smoothly in all browsers without JS.

https://github.com/donatj/imgboard


The problem with that is that the client isn't in control of the experience. What if there is a network error? What if there is a validation error? What if the request times out? What if the load balancer/proxy returns 5xx? In all these cases the browser will navigate away from the current page. It is trivial to handle all of this in a fetch() call.
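For instance, a sketch of that client-side control (the outcome strings and parameter names are invented; a real page would update the UI based on the returned outcome):

```javascript
// Sketch: submit form data with fetch() and classify every failure mode
// ourselves instead of letting the browser navigate to an error page.
async function submitInPlace(url, formData) {
  let res;
  try {
    res = await fetch(url, { method: "POST", body: formData });
  } catch (err) {
    return "network-error"; // offline, DNS failure, timed out, aborted…
  }
  if (res.status >= 500) return "server-error";  // 5xx from app or proxy
  if (res.status >= 400) return "invalid-input"; // e.g. validation failure
  return "success"; // 204 (or any 2xx): reflect the new state in the UI
}
```

Timeouts can be made explicit by passing an `AbortSignal.timeout(...)` as the `signal` option to fetch.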


Is that good, though? I can't count the number of times the failure mode of a JS submit was an endless spinner, a confusing or ambiguous message that doesn't delineate between "invalid input" and "unexpected server error", or worse, no indication of the failure at all.

As rough as the experience of being taken to an error page is, I think it's often both more obvious (for users) and easier to get right without thinking too hard about it (for developers).


When would you ever want this? There’s no feedback provided to the user that their submission was in progress or completed successfully.


You said it yourself. When you don't need feedback that their submission was in progress or completed successfully.

To be fair, on the second point: on an error you can return something other than a 204 and take them somewhere that tells them what went wrong.


This is useful! How does one learn "you can make links or forms submit but not cause browser navigation by having your server respond with a 204" without reading it in an offhand comment on HN?

I mean the status code can be looked up easily, but where would we have read about the "submit but don't cause nav" feature?


The HTTP spec outlines that this was the intended behavior, and MDN elaborates on implementation.

https://datatracker.ietf.org/doc/html/rfc7231#section-6.3.5

https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204


Wow! I like that.

What happens with a 4xx or 5xx? I guess with a 4xx there's potential to bring the user to a sign-in page if that's the problem, but if there's a server error then it seems like the user may lose their form data and end up on a useless error page.

Using 204 seems like a good default, though, because it means I can still use forms with JS turned off but I can choose for the site to be progressively enhanced by JS if I turn it on.


That may be a stupid question, but how would the user know that the form was actually submitted?


As a user, I block such requests.

Why would I have any interest in making requests for "No Content"? Perhaps this is why web developers use Javascript to trigger them, without any input from the user. How many users knowingly make such requests for no content, or otherwise send data, while expecting no visual acknowledgment/feedback that data is being or has been sent to a server?

There could be legitimate uses for this, perhaps, but the uses meant to "avoid detection by the user" outweigh the benefit of allowing it.


Wouldn't those requests be unblockable, as you don't know if it'll be a 204 until you get the response?


On the first time, I agree.

However once a 204 response is discovered in the logs the domain or URL can be blocked going forward.

I generally do not use a Javascript-capable browser nor enable Javascript when I am using a popular browser so JS-triggered requests for no content fail on account of no JS engine available. In cases outside the browser, e.g., checks to detect captive portals, I block them with a proxy.


How is this fooling the user? And why would you block something that's performing a desired action simply because of a certain response? That would be like saying that a server returning a 404 tricked you into thinking that a page existed, so the URL that returned it should be blocked.


It is not performing a desired action. For example, I have no need to make a request to https://www.youtube.com/generate_204 because I search, browse and download from YouTube via the command line, without using youtube_dl. Making that request, repeatedly, for no content does not benefit me in the slightest. Thus I do not make it. Why would I?

Other users may operate the computer using popular software running under default settings where, e.g., visiting a website automatically runs a number of Javascripts unseen by the user, and the user implicitly trusts that, whatever these scripts are doing, it is necessary and for the user's benefit. We know that is not always true. (This blog post shows us a number of instances where JS is used unnecessarily.)

There could certainly be legitimate uses for the non-JS 204 no content data sending technique. If I was using a website where this was useful, of course I would not block it. I am not yet aware of any such website among the ones I visit. Every use of requests for no content I have seen has been unnecessary. Usually it is some form of telemetry, sending data about user behaviour to a server without any prior user consent or affirmative action. I suffer no loss of benefits by not making or blocking these requests. For me, this is the most sensible approach. YMMV.


It sounds like your main problem is with telemetry, which you may be misidentifying solely based on the response, rather than with that particular response itself.


The indicated use case is for non-JavaScript pages...


You can still use a form with 204 and have some js change some UI state based on the 204 status code...


I assumed the goal is to avoid use of Javascript, as suggested by the title.


Right; not having to control your whole form with Javascript is already less JS.


As a user, you're a silly goose.

There are thousands of reasons things would return a 204. Blocking that is just being obtuse.

There are lots of requests that return a 200 with no body as well. Do you block all AJAX requests? The user doesn't see those happening, but it's not abuse.


I am using a text-only browser with no Javascript engine. I also make HTTP requests with non-browser clients. Without Javascript, most if not all of those "thousands of reasons" do not work and in a text-only environment they accomplish nothing for me.

I do not block all AJAX-triggered HTTP requests. For the ones I might need to make in order to retrieve content, I make them without using Javascript, outside the browser.

Being called names without repercussion always proves that HN policies do not apply to everyone. It also indicates there is no cogent counter argument to make. More name-calling please. :)


lol I called you a silly goose because it rhymed. Note the end of the three stanzas. Genuinely no insult intended.


None taken. I missed the poetic touch. Nice work. :)

If HN commenters want to protest over when and how one user uses her own computers and her own network, I welcome the entertainment. I'm bored today so foolishly responding to every comment, no matter how stupid.


You block requests based on their response code? How's that work?


Works well.


Does it? What problems do you solve by doing it?


Yes. For me, it solves the problem of more annoying, superfluous HTTP requests on the local network.

There is often a privacy benefit, too, if the requests are being used to send data to an entity that supports online advertising. In some cases, not making these requests avoids supporting the practice of unconsented user data collection, telemetry and online advertising.


I can but only imagine what kind of life derives annoyance from "annoying, superfluous HTTP requests on the local network".



