I have a lot of respect for intercooler and wish something like it had become the go-to, with more complicated things like React reserved for apps that really need it.
I'm curious why you are starting over. What is different from intercooler besides no need for jQuery? Why HTMX instead of Intercooler 2.0?
I wanted the freedom to completely reimplement things and drop ideas that didn't work out in intercooler
I also wanted to stress that htmx isn't just another javascript library, competing with react and the rest. It is focused on HTML and extending HTML to make it a powerful and complete hypertext. I think that the name htmx captures that idea pretty well.
The fact that intercooler didn't have an extension mechanism meant I had to heap a lot of stuff into the core (e.g. ic-action) that was interesting and useful, but made the code base messy and unfocused.
htmx has an extension mechanism, so you can pull in stuff like this (or write your own extension):
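Something along these lines (script paths and endpoint are illustrative, not exact):

```html
<!-- load htmx plus the json-enc extension (paths illustrative) -->
<script src="https://unpkg.com/htmx.org"></script>
<script src="https://unpkg.com/htmx.org/dist/ext/json-enc.js"></script>

<!-- hx-ext enables the extension for this element and its children -->
<form hx-post="/submit" hx-ext="json-enc">
  <input name="email">
  <button>Submit</button>
</form>
```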
I was a fan of intercooler and used it in production more than once but for some reason I could never get my head round the dependency functionality and ended up just ignoring it.
Intercooler works beautifully alongside Django and Django Rest Framework and I look forward to trying out htmx.
This paradigm revolves around sending validation requests to server-side endpoints, is there any reason to put that load on the server rather than validate it on the page prior to submit with JS? Or scalability issues?
Not being antagonistic, genuinely curious because I've not seen inline validation done this way.
I'm not dogmatic about it. Some validation has to be done on the server side (e.g. unique emails) and any validation has to be redone on the server anyway since the client side isn't a trusted computing environment, but I can see doing both.
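For reference, the inline-validation pattern being discussed looks roughly like this in htmx (the endpoint and ids are made up for illustration):

```html
<!-- re-validate the email field against the server as the user types -->
<form hx-post="/contact">
  <input name="email" type="email"
         hx-post="/contact/email"
         hx-trigger="keyup changed delay:500ms"
         hx-target="#email-error">
  <span id="email-error"></span>
  <button>Submit</button>
</form>
```

The server responds with a small HTML fragment (an error message, or nothing) that lands in the target span.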
Ahh got it, yeah in this case that makes a ton of sense. No way to know with emails about uniqueness constraints until it hits the backend anyways.
I've been playing around with local demo, is it possible through extensions to make a wrapper that can perhaps compose GraphQL query inputs from forms?
I should have rephrased it haha, I more meant the act of sending each set of keystrokes validation as a server-side request. No doubt that the backend is going to be the ultimate judge when "submit" is pressed.
Also this extension API is pretty straightforward, the element reference and parameters are right in the args so it's an easy mapping to GraphQL query strings from there. Awesome!
I was able to get a minimal version working using just this (with json-enc extension patch):
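The core of it is something like this (a sketch of the idea, not the exact code; the `data-gql` attribute and the extension name are made up):

```javascript
// Sketch: map form parameters plus a data-gql attribute into a GraphQL
// request body, then register it as an htmx extension when htmx is loaded.
// The attribute name and extension name here are hypothetical.
function toGraphQLBody(query, parameters) {
  return JSON.stringify({ query: query, variables: parameters });
}

if (typeof htmx !== 'undefined') {
  htmx.defineExtension('graphql', {
    encodeParameters: function (xhr, parameters, elt) {
      xhr.overrideMimeType('text/json');
      return toGraphQLBody(elt.getAttribute('data-gql'), parameters);
    }
  });
}
```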
The idea behind libs like htmx is that you get to have some of the niceties of a SPA, without writing any JS and requiring minimal computational power from the client.
What about accessibility? It'd be great to see the docs and examples put more focus on accessibility, both for non-JS clients and for clients that may be better served by not replacing elements, like screen readers.
And, based on some of the comments here, it seems like some people may have forgotten how to do anything that doesn't depend on AJAX. ;-)
htmx is a lot more tightly focused. With intercooler I was trying to replace all javascript everywhere, but with htmx I'm really trying to focus in on the request/response stuff and make it an extensible base for other things.
I am planning on supporting both intercooler and htmx for the foreseeable future. I have a large app written in intercooler and I'm not porting it over any time soon. If I were starting a new project I'd use htmx, but then I know the developer pretty well if there are problems. :)
Well, I'd like to buff the test suite out, and I really don't know if I got the web socket API right. Using HTML w/ ajax is natural enough, but using it with web sockets? We'll see.
I guess the hardest thing is getting the word out, but thankfully oftenwrong has done that for me. :)
There is a "Server Requests" slide-up at the bottom of the page. If you expand it it has a nice interface to show requests and responses that are "happening" behind the scenes.
intercooler looked cool. never got to try it due to jquery dependency. but will definitely give htmx a try. my current project will either be htmx or alpine.js.
Thank you for your work in preventing SPA bloat, i.e. providing an alternative.
htmx looks like the perfect thing between UnpolyJS and AlpineJS. In Unpoly I miss the optional client-side templating, which htmx seems to support as an extension. AlpineJS lacks the HTTP stuff (headers, requests). Really exciting times for using old school HTML without downloading 5000 JS files via npm.
This looks amazing -- seems like you can accomplish a significant amount of what constitutes front-end work these days with htmx. The examples are compelling:
There is a slight deal-breaker for me. Much of the functionality revolves around hx-swap'ing i.e. writing the contents of a response as HTML into/around tags. This requires the server-side to return HTML instead of JSON. From the docs:
"Note that when you are using htmx, on the server side you respond with HTML, not JSON. This keeps you firmly within the original web programming model, using Hypertext As The Engine Of Application State without even needing to really understand that concept."
I would love to use this with existing REST backends, but most are only able to return JSON. How does one use this for AJAX without rewriting existing backends?
This is great, I am definitely considering using this for some of my side projects so I don't spend hours with JS just to display some JSON output as HTML.
Yes, but you shouldn't use it as a simple drop-in. The minified version via CDN is like 1.7 MB ... at least the docs mention you should not use it that way and should instead configure it to your needs.
Indeed, using tailwind without some build pipeline is not recommended. You'd use something like PurgeCSS to remove all unused css classes. Which admittedly kind of defeats the whole idea of keeping it simple.
It defines css classes for most css properties. Their website [1] explains why you might want that pretty well. Regarding file size, the idea is that you use something like PurgeCSS in your build pipeline, which removes all unused css classes. A bit like tree shaking for css.
I'd imagine that you get a lot of rules when you have a whole bunch of different "margin-top-<x>" variants. The idea is probably to rely on some CSS tree-shaking/"dead CSS elimination" tool to only keep the ones you need.
On a related note, I recently discovered that when you use semantically styled HTML, the difference between a JSON payload and an HTML payload is almost negligible (basically, the close tags). Why not transmit in a format that the browser already understands natively?
yeah, the payload is almost zero difference, a round off error compared w/ connection latency
a lot of folks conflate AJAX w/ JSON apis, and can't easily imagine an endpoint returning partial bits of HTML that have nothing to do with a public JSON API
worse, people have been misled to believe that JSON APIs are REST-ful
Because oftentimes you’re retrieving data that will be stored in one central store and displayed / used in different ways in various components. Eg I might fetch user data and sometimes display their name, maybe another time compute their age from their birthdate, etc...
If you take a few minutes to build separate endpoints, you'd get better performance, both in latency/bandwidth and in system resources for your DB query.
What's the difference between building separate code paths on the client vs. 2 server endpoints though? Very few scenarios can't afford the behind-the-scenes extra HTTP round trip...
Because I don't want to build and maintain an API endpoint to give me an HTML blob for the users age, and another one for their full name, and another one just for their first name, and another one for figuring out what day of the year is their birthday. I want adding these features to my UI to be a client-side-only change.
Probably fine in some use cases. But another reason on top of what's already been said: Because you might be sending the data back to a non-browser client such as a mobile application.
Sending back HTML is kind of the point though. The idea is lots of sites built on frameworks (Rails, .NET, Java) already do HTML rendering server-side, so now you can get SPA-like UI without backend changes. If you've got a REST backend (which the former generally don't) I would use one of the many popular client side frameworks. Using HTMX or intercooler or unpoly, doesn't make a lot of sense in this scenario.
I think that even if a server returns HTML, on the client side we'd typically like to retain the flexibility to style/restyle/rearrange its elements to fit the current layout. The same response could be used in multiple places with different styles.
What other client-side UI frameworks do you recommend? I'm liking what I'm seeing with HTMX due to its small size and simplicity.
This is exactly the use-case for Content-Type and Accept headers. The client and server negotiates what type of content they can receive/send, so if you're sending application/json as the Accept header, you can only handle application/json, if you only send text/html in the header, then only HTML. If the client understands both, you do application/json,text/html. with the order signalling preference.
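As a sketch of the idea (q-values are omitted for brevity, so this is not a full implementation of the Accept header spec):

```javascript
// Pick a response content type from an Accept header, treating the order of
// the listed types as the client's preference (quality values ignored).
function pickContentType(acceptHeader, supported) {
  var preferred = acceptHeader.split(',').map(function (entry) {
    return entry.trim().split(';')[0];  // drop any ;q=... parameters
  });
  for (var i = 0; i < preferred.length; i++) {
    if (supported.indexOf(preferred[i]) !== -1) {
      return preferred[i];
    }
  }
  return null;  // nothing we support: respond 406 Not Acceptable
}
```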
Any serious framework would be able to handle this without any hiccups.
That's how you negotiate the content type; the original question was how to have a service returning JSON return HTML. I've never heard of a framework that can arbitrarily convert data into HTML - how would that even work?
I was part of a project where, just as I joined, RESTful APIs together with fat frontend clients were becoming the hot new thing. The application was originally returning text/html (templates) to be injected into various parts of the frontend at request time.
To migrate to our flashy new Angular V1 application, we added an application/json content type that, instead of returning the template, returned the data that the template used. So we could migrate endpoint-by-endpoint to the SPA we were building.
It was simply a matter of advertising the data required for the views a bit more internally, so the controllers could skip rendering the views when the content type was JSON.
You could dynamically create HTML that mirrored a JSON response. You'd just have to walk the JSON structure and create a structure for where the key and the value from the JSON end up. Each node could just be a <div>. The ID could be a composition of the key and the parent nodes/keys so that it'd be unique. You could also stuff both values in a data-* attribute.
It wouldn't be the prettiest HTML but you could work with it via CSS and JS.
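A minimal sketch of that walk (no HTML escaping, so not production-ready):

```javascript
// Walk a JSON structure and mirror it as nested <div>s, composing each id
// from the parent keys so it stays unique. Escaping is omitted for brevity.
function jsonToHtml(value, path) {
  path = path || 'root';
  if (value === null || typeof value !== 'object') {
    // leaf: keep the value both as text content and in a data-* attribute
    return '<div id="' + path + '" data-value="' + String(value) + '">' +
           String(value) + '</div>';
  }
  var children = Object.keys(value).map(function (key) {
    return jsonToHtml(value[key], path + '-' + key);
  }).join('');
  return '<div id="' + path + '">' + children + '</div>';
}
```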
For sure. Most of the backends we write only return JSON. If you request HTML or any other content-type, we return nothing. We just don't implement it.
If you write your backend from scratch and only intend for it to be used with your web front-end, you can make it return bits of HTML (I have several applications in jQuery that work like this -- the value of the response just gets inserted via $("#id").html(val)).
Our current backends are written to support multiple generic consumers of which a web front-end is only one.
Sure, but it's extra engineering time (tiny per instance, yes, but multiplied over many legacy apps with no value add -- not an easy business case to make just to use an HTML-based front end library).
Much easier to use client-side templating as suggested by the author.
Off topic, but that's a shame about them recommending not to prefix X- for custom headers.
Their entire reason (Appendix B) is that "well, if the header eventually becomes a standard header, then there will be old apps that will only work with X-, and we'll have to keep the X- version around forever!" That seems like a very weak reason to me. Most custom headers are not going to become standard headers, and those that do, well, apps can update nowadays and we deprecate old web features all the time.
I see a lot of value in distinguishing non-standard headers as a matter of principle. Also, there's a little bit of classic charm to X-, it's almost a shibboleth of HTTP. Removing it feels as wrong to me as removing the :// from the URI protocol and saying "we really just need :/"
I agree there. Browsers started to have this issue too, but to fix it they actually got more aggressive about deprecating prefixed stuff. Now if you use -moz-foo, they guarantee that it will break at some point in the future.
I'm not sure about changing the headers. X- is a widely accepted convention. The RFC also says this:
> SHOULD NOT prefix their parameter names with "X-" or similar constructs.
It doesn't spell out exactly what would be considered a "similar construct", but I think "HX-" might count as a similar construct.
There's no good way to implement the feature you want that would satisfy all reasonable interpretations of that RFC, so switching to "HX-" would be a half-measure.
I understood exactly what was meant by X-HX- and I think it's a good design.
Still, HX- is shorter and about as intuitive as X-HX- so I think it might still be the way to go, even though IMO that RFC isn't a good enough reason to change it.
I think the spirit of the RFC is mostly about complications arising if such headers ever became standardized, but also that the "X-" just doesn't add any value – it's a convention without a purpose and just makes headers longer. In fact, it arguably becomes less useful the more people use it.
So I think it's better for new projects to drop the "X-".
On the other hand, I think it's reasonable to prepend with "HX-" because these headers apply to functionality that is specifically related to the capabilities of this library (or similar libs). It is highly improbable that this syntax would ever become standardized in either the HTML or HTTP specs, but if it did, the HX in front of headers makes sense given that the tag attributes are also prepended with an HX.
From an entirely stylistic perspective, I also think that shoving an "X-" in front of all your headers is ugly (ugliness can of course be justified by usefulness, but as stated above I don't think that applies here).
But of course these are just my thoughts on the matter, others might like "X-", though I'll admit I don't know why you would.
Fair point – though some might say if you start over in a new repo then you have reset the clock ;). In all honesty though, I wasn't trying to imply that this project ripped off Unpoly or anything, just wanted to mention a similar project. Perhaps poor phrasing on my part.
As to a PR for the headers – I might just take you up on that!
I read nothing but good intent in your comment, so this is not me throwing stones, but one comment I would make is that I think using the term "prior art" - coming as it does from patent law - carries an overtone of "I am implying that you might have ripped off someone else's idea".
It's clear that this is not what you intended, but I think from the response from the creator, that was the tone that you accidentally conveyed.
it's all good, and please: if there is a better standard for me to follow, now is the time to implement it. You deserve the credit for bringing it to my attention, and the code base is pretty easy to navigate.
My upvote for Unpoly. Used it in several projects and it's awesome. The only drawback I've found so far is that the source is CoffeeScript, so it might be a bit harder to read the internals when necessary.
But in my opinion it gets a lot of things right, especially around form submissions, validation, error handling, modals, history, navigation, passive updates, etc. It is like Turbolinks++.
Unpoly examples wrecked my back button. Had to click back like 15 times to get back to HN... Is this a permanent issue because of the way it’s implemented?
I wish some of this (or all of it) were implemented inside the browser, so it no longer needed any JavaScript to function.
Web apps still have their place, but 90% of the web is web pages or interactive web pages, not apps. Every time I point this out someone cites Gmail as an example of a web app, but we will soon have Hey.com to prove it doesn't need to be that way.
However, browser vendors and standards bodies (which really are just browser vendors) have far too much interest in making the web browser into another OS / platform, rather than optimising for the 90% of our current use case.
I've been mentioning this together with a very slimmed down html5 for a while now:
- make something close to an html5 equivalent to asm.js: remove all ambiguous variants, everything we know is slow.
- "ban" all Javascript except some small pre-defined libraries like this.
- make a catchy name (html-core? web-core?) and a validator for it. Call it a standard.
It will be fast in all browsers, maybe really fast in browsers that care to optimize for it.
If it becomes a thing we can create new, simpler, pure web browsers (as opposed to today's application platforms), and we can start calling the rest old or legacy html :-)
Apart from Apple who stand to gain the most from killing off web apps. And Google who are busy trying to make the web redundant (with AMP and much of their other stack).
It's actually web app developers doing the most to push the web in the direction of web apps.
You are comparing apples to oranges :)
alpine and stimulus are frameworks (doing all sorts of stuff: css animation, data <-> objects, etc) while htmx is a library focusing solely on ajax.
From your list, the only similar project to htmx is intercooler-js, but... htmx is actually the new version of intercooler-js (both projects by the same author).
htmx supports animations too, and it has extensions to add further functionality. Of course those projects differ in their functional range, but they share the way they work; by adding attributes to HTML.
Yes, you are right. My response was based on the presentation from the first page: "htmx allows you to access AJAX, WebSockets and Server Sent Events directly in HTML, using attributes". In the meantime I read the documentation and, as you just said, it supports animations and much more.
The js via html tags is something I see also in Alpine.js, which has gotten a lot of press lately.
Is this an alternative, or is each really its own niche (and you might use both)? Is there a comparison?
Edit: It seems that htmx is almost entirely around AJAX, and Alpine around a) binding element data to objects and b) css & animations (such as hiding a popup). It would make a lot of sense to use them together (and the hx- even complements the x-). Is that correct?
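Something like this seems plausible, with Alpine handling the purely client-side state and htmx doing the fetch (the attribute values are made up for illustration):

```html
<div x-data="{ open: false }">
  <!-- Alpine toggles visibility; htmx fetches the fragment into #list -->
  <button x-on:click="open = !open"
          hx-get="/notifications"
          hx-target="#list">
    Notifications
  </button>
  <div id="list" x-show="open"></div>
</div>
```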
I wish Intercooler and htmx were more popular. But I think that practical examples, copy and pastable code, youtube tuts are keys for a library to succeed. Think about php manual, react... There's often no need for some websites to bring big frameworks but they're well documented. That's the new nobody ever got fired for buying IBM.
On the chance that the authors are here: on https://htmx.org/docs , there is a link "original web programming model" where the target is (for some reason) surrounded by parentheses, so that it points to the wrong location.
Very cool. This seems very useful for lightweight UIs that need a small amount of interactivity, but not enough that a more heavyweight framework like React would make sense. Am I on the right track?
I like it, though I guess the "implementation" for what JavaScript gets executed on a given custom HTML attribute set is "hardcoded" in the htmx lib. With SGML, OTOH, you can have your own replacement content (JavaScript, other HTML, or whatever; it's just syntactical replacement). Would be an interesting experiment to implement the htmx vocabulary on top of SGML.
The example on the front page already hints at the type of bad programming this may encourage. When clicked, the button sends a POST (this seems ok), and then the backend sends new html for the button. The idea of having random snippets of your frontend markup being returned from api servers seems questionable.
Yeah, this is something you are going to have to wrap your head around and chew on for a bit. htmx is an extension of HTML (a generalization of it) rather than something you need to shoehorn into a JSON API-oriented world view.
It's just a different way of building web applications, much closer to the original web model. You can do a lot with it, with a lot less complexity in many cases.
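For reference, the front-page pattern in question is roughly this (endpoint illustrative):

```html
<!-- the button issues a POST; the response HTML replaces the button itself -->
<button hx-post="/clicked" hx-swap="outerHTML">
  Click Me
</button>
```

The "API" here isn't a public JSON API at all, it's just another server-rendered view, only smaller.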
I wrote some blog posts about this stuff for intercooler back in the day:
Woah I think this is misinformed on a very fundamental level, and I really would like to clarify where this idea is coming from (is it from a "modern" React or other JS-first bootcamp line of thought by chance? Not trolling, but genuinely interested). The entire point of the original web is that you send HTML to a browser rather than JS snippets or JSON payloads, and that you compose your documents from HTML fragments. "API servers" and RPC were concepts from before HTML, and the web as an app platform was welcomed for leaving granular request/response roundtrips behind and compose accessible and searchable HTML pages with really simple means (like shell scripts) on the server in the first place!
I've built web apps that didn't need much interactivity just using a regular ol vanilla web server sending html and then bootstrap.js for the menus, modals, etc.
When a client wants the interactivity of a full-on application, I would use an application framework at that point.
Using a system where the server needs to send back snippets of html for transitory states seems like you'd need to build yourself kind of a framework to handle the mechanics of that on the backend anyway. I'm not sure what the point is then. Sure, there is an asset in your project that is "just html" (the first loaded page), but you're going to need snippets of the interface's various states in different files so that the server can send them back.
It's like this framework is pursuing an aesthetic goal of "just html" which breaks when you actually try to use it for the stuff that you'd use a framework for.
I think you're saying that having html returned from the backend "seems questionable" and is a "type of bad programming".
But that can't be right because that's mostly how the web works? Make a request to a server, server returns html.
Is your issue that it's partial "random" content? How does that make it worse? It's a tried and true solution. The hamburger menu in amazon.com does just that.
I agree that the objection doesn't hold up given how many websites are made, but I think they mean that a REST API shouldn't be returning HTML, which is true. Unfortunately using the replace feature would mean rewriting existing APIs.
I don't think the idea of sending 'random' snippets of HTML to a browser and injecting them in to the DOM is inherently bad, and loads of frameworks have done exactly that in the past, but I would say that it does mean you have to be extremely good at managing your CSS. If something can be dropped in to the DOM anywhere where there's an hx attribute that could make managing the cascade specificity pretty tricky.
There is an extension (which is part of the main repo) that supports using front-end templates to consume JSON from a back-end. Looks just as simple as the replace feature. See: https://htmx.org/extensions/client-side-templates/
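Usage looks roughly like this (endpoint and template contents are illustrative):

```html
<div hx-ext="client-side-templates">
  <!-- fetch JSON, render it through the mustache template, swap in the result -->
  <button hx-get="/api/user" mustache-template="user-tpl">Load user</button>
</div>

<template id="user-tpl">
  <p>{{name}} ({{email}})</p>
</template>
```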
How do you test applications written with Htmx? Does it all need to be ui test harness (running in a browser), or can you unit test components somehow?
I'm happy you noted the test coverage for the library itself is reaching your standards. That's a great sign.
Does it only send data? If yes, I don't really see a clear use-case for it, as you normally either want zero JS and just use plain HTML forms posting or you have a dynamic interface and you use a lot of JS to update data on the page.
I think they meant 'can you do stuff other than sending data' - other things that you'd use javascript for that is not sending data/ajax. Validation and stuff. I'm writing some extensions in what is essentially a 'local SPA' and I'd love to get rid of the inline onclick handlers and all their crufty syntax (but my last real experience working on the web was 20 years ago so my perspective is probably objectively primitive anyway).
Not to send data, but to receive data and update the UI accordingly. I looked a bit through their examples and it seems that they expose a way to update the UI with the response from the AJAX calls, but presumably the server has to return the entire HTML code?
One of the problems I see is that anyone who was using intercooler is now using an outdated library that is quite likely to become unmaintained, and will be forced to migrate/rewrite things.
Yes, I hope the author will offer a smooth upgrade path for the existing intercooler.js users, because my heart wants such a thing to be adopted by people. I like the concept very much!
Would have had the same problem if the rewrite was called 2.0 instead of htmx. Hmmm, reminds me of the Angular 1 vs 2+ conundrum (for which many said renaming would've been better).
It reminds me more of jQuery. And while I see how it's great for quick prototyping, it's going to be a nightmare in a large project. I really like React, where data and HTML are clearly separate. I don't want to send raw HTML back and forth, I want to send only data.
How much work / code is required in order to make this IE compatible? I ask, since the project is young, it would seem like a good time to make the hard choice to not support IE.
I'm a proponent of the natural web, where the focus of web development lies with W3C-compliant HTML & CSS sprinkled with very small amounts of JavaScript (if necessary). Any JS framework that requires the user to modify their HTML to include non-W3C-compliant HTML attributes is a very poor architectural decision and should be avoided. Such frameworks include Angular. Unfortunately, your framework also falls into this category.
"Can't be run against a real server" sounds like a straw man you just made up? I'm astonished at how much you believe you can ascertain based on this... It's a demo; mocking a server is completely fine and also allows them to show it working without relying on other things.
I mean, this is actually even better. You can expand the mock-server down below and see what's happening.
Would you buy a car if when you took it for a test drive, it had no engine but instead the salesman sat next to you and made "vrrm vrrm" noises for 20 minutes?
Using the library is an investment of time, and time is worth money, so yes.
Reducing server load makes sense. Use a CDN or proxy, and aggressively cache responses.
Adding an entirely faked backend response, so that the examples don't actually show the real traffic it would generate is both adding complexity to the demo, and potentially giving a misleading picture of how usable the library is: for all we know, it's too slow to use as the examples show, because a real HTTP request to a backend is going to be slower than a javascript function returning a pre-determined string.
If instead the salesman took you and a car that included an engine out on a test track, would you not buy the car because the test track was fake? You are not buying the test track so you shouldn't care. You are not buying htmx's demo backend either, so you shouldn't care.
In your analogy the 'test track' is still close enough to a real road - just as a functional backend providing fake data is real enough to test the functionality.
I just released 0.0.4, so htmx is still very young, but it's got a decent test suite: https://htmx.org/test/0.0.4/test/
there is a nice extension mechanism:
https://htmx.org/extensions/
and some very rough docs on how to pull off pure HTML animations:
https://htmx.org/examples/animations/
happy to answer questions