Wasm-service: Htmx, WebAssembly, Rust, ServiceWorker proof of concept (github.com/richardanaya)
216 points by richardanaya on Oct 17, 2022 | 81 comments



The demo also demonstrates why something like this is insufficient: you can’t rely on the service worker loading. Service workers must be optional. There’s a reason invocations always start by checking if navigator.serviceWorker even exists, and why navigator.serviceWorker.register() returns a Promise, and why the ServiceWorkerRegistration type is comparatively complicated: service workers aren’t permitted in all contexts, and even when they are, they’re installed asynchronously in the background after the page finishes loading.
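The registration dance reflects all of this; here's a minimal sketch (the '/sw.js' path is a placeholder):

  // Feature-detect first: service workers are not available in all contexts.
  if ('serviceWorker' in navigator) {
    // register() returns a Promise; installation happens asynchronously in
    // the background, so the first page load is typically NOT yet controlled.
    navigator.serviceWorker.register('/sw.js')
      .then(reg => console.log('registered, scope:', reg.scope))
      .catch(err => {
        // Registration can fail (private modes, storage pressure, etc.),
        // so the page has to keep working without it.
        console.warn('service worker unavailable:', err);
      });
  }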

To demonstrate this easily, open the demo in a Firefox Private Browsing window: service workers are disabled there, so the request goes straight through to the server, which responds with 405 Method Not Allowed.

So in order to make this approach reliable, you’ll need to run that code on your server, not just in the service worker. This is totally viable, but an approach that has seen surprisingly little experimentation. (I myself have a grand scheme that’s actually fairly fleshed out, but I’ve only touched it once in the last year and a half, so don’t hold your breath. My scheme is predicated upon aggressively not requiring client-side scripting, but using it only for completely optional enhancement; and upon running the same source code in a worker, in a service worker, at the edge or on the origin server, with only a teeny bit of JavaScript on the main thread.)


This is too negative: it's fine to require service workers if it's justified. Service workers work in private browsing mode in Safari, Chrome and Brave, and are only unimplemented in Firefox's private mode because it "hasn't been a priority yet":

https://bugzilla.mozilla.org/show_bug.cgi?id=1320796

Firefox will eventually catch up. The possibilities of this approach are actually quite interesting, and go far beyond the much less useful scheme you outlined, which relegates service workers strictly to being an optimization.


I used Firefox’s Private Browsing purely as an easy way of demonstrating the problem. A forced reload (e.g. Ctrl+Shift+R) is another way of showing it, and one that’s defined in the spec.

I’ll tweak my statement slightly: for regular site functionality, service workers must be optional. Depending on service workers for anything that does not inherently require them (like background sync or receiving push messages) is bad.


Again, this is much too sweeping and categorical. The forced reload problem with service workers is well known and easy to work around. The Firefox private browsing example is neither here nor there - an implementation weakness that has no good rationale and will eventually go away. If your functionality justifies it, you should feel free to use and depend on service workers. If service workers are not present, you should feel free to display an error explaining it to the user. Many things we commonly depend on are optional in corner cases, but that doesn't hold us back.


I think a better way to say it is that, for ordinary, general purpose websites, it's unlikely that such a complicated architecture would be in any way superior to a regular web page implemented using a more ordinary architecture, and there are downsides.

But that doesn't rule it out in some special case. Special cases are special. (I recently wrote a Chrome-only web app because it requires access to the serial port.)


I don't think this is significantly different from what I said ("if your functionality justifies it..."). That is, evaluate service workers in the same way that you would any other complicated additional requirement or feature.


I’m curious how you mean to work around the forced reload problem, because the way I’m looking at it, it’s impossible to work around, by design.

I say: feel free to use service workers, but do not feel free to depend on service workers. They must be optional in general-purpose websites.


Very few things are impossible to work around. In this case, you can detect that the service worker is no longer controlling the page, and do something about it: re-register the worker, soft reload the page, etc. I've also seen people do dodgier things like trying to intercept hard reloads, but that's not necessary.
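For example, a rough sketch of the soft-reload approach (not from any particular project):

  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.ready.then(() => {
      // ready resolved, so a worker is active; if controller is still null,
      // the page was loaded uncontrolled (e.g. via a forced reload).
      if (!navigator.serviceWorker.controller && !sessionStorage.getItem('sw-reload')) {
        sessionStorage.setItem('sw-reload', '1'); // guard against reload loops
        location.reload(); // a soft reload is intercepted by the worker again
      }
    });
  }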

At any rate, this is more of a StackOverflow question than an HN question, and the details of what works will differ from project to project.

https://stackoverflow.com/questions/51597231/register-servic...


> Depending on service workers for anything that does not inherently require them (like background sync or receiving push messages) is bad.

Or access to CacheStorage, and acting as an offline server when the network is unavailable. Perfectly reasonable to depend on if your use case requires it.
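Background Sync is a good example of something you simply cannot do without one (Chromium-only as of now); a hedged sketch:

  // Queue a one-off sync; the service worker's 'sync' event fires later,
  // even if the tab has been closed in the meantime.
  navigator.serviceWorker.ready
    .then(reg => reg.sync.register('flush-outbox'))
    .catch(() => { /* no SyncManager: fall back to sending immediately */ });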


it's fine to require service workers if it's justified

I don't agree with this because you're essentially defending writing websites and apps for specific browsers. "Oh, this doesn't work in your browser because it requires [specific tech|devs to test in more than one browser|whatever]" is a lame excuse in a world where progressive enhancement has been an accepted approach to developing web software for twenty years.

As a tech demo to show something is possible it's fair enough, but in the real world this shouldn't ever happen.


In this example, where Firefox is the outlier, if you write your web app differently than you otherwise would have, then you'd be writing websites and apps for a specific browser...


Cool demo. Like with any PWA, it would be really great if all that stuff were truly progressive: run in the browser for the 90% and render real HTML (that might even work with CSS) for the rest. IMHO frameworks that fall back to accessible HTML for server rendering in a deterministic fashion would be awesome. I personally find the argument that it works for most people not very ethically sustainable...


I RTFA first and couldn't get it working, came here and saw your comment and now know why it doesn't work.

I may be a minority weirdo, but I have set dom.serviceWorkers.enabled to false.

Why? Because legit sites aren't broken by this, and I always clear all browser caches and databases on close (several times per day). IMHO service workers are used mostly for long-lived tracking, to supersede the cookie, and I also dislike background processing for things I just don't care about.

So for me... service workers are disabled. They just don't work. And it turns out the web works just fine.


You read the article titled “… Service Workers…” and tried getting it working on a device where you had explicitly disabled Service Workers because of your strong opinion on them?

I’m… not sure what to say.


I've created an offline-first web app which is based on service workers. I've created another one that could be pushed to the back end (like on Node.js) that would be just a straight MPA app. I guess both of them could be pushed to the back end if needed. Since I just use them for myself I don't worry about them not working without JS enabled. But I created HTMF, similar to HTMX but made to be a progressive enhancement from the get-go.

https://github.com/jon49/Soccer

https://github.com/jon49/WeightTracker

https://github.com/jon49/MealPlanner

https://github.com/jon49/htmf


I'm more optimistic about using Service Workers in this way, but I agree this was definitely not their intended use case so there will be problems.

There's another flaw I see with using Service Workers in this way. After a period where the Service Worker is inactive (a minute or so on Firefox), the browser will shut down the Service Worker. When it shuts down, all variables in the Service Worker scope are freed, and I assume the WebAssembly server instance is as well. How would you maintain server state in this scenario?


I pretty much build my personal offline-first web apps this way, but with JS.

https://news.ycombinator.com/item?id=33319875


Cool! If I'm reading your code correctly, you store service worker state in the cache?


Yeah, I store everything in IndexedDB using a simple Key-Value abstraction library. I have a cache where I store any temporary state between pages.
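Conceptually the wrapper is tiny; something like this hedged sketch (not my actual library):

  // Minimal promise-based key-value store on top of IndexedDB.
  function withStore(mode, fn) {
    return new Promise((resolve, reject) => {
      const open = indexedDB.open('app-db', 1);
      open.onupgradeneeded = () => open.result.createObjectStore('kv');
      open.onerror = () => reject(open.error);
      open.onsuccess = () => {
        const req = fn(open.result.transaction('kv', mode).objectStore('kv'));
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      };
    });
  }
  const kvGet = key => withStore('readonly', s => s.get(key));
  const kvSet = (key, val) => withStore('readwrite', s => s.put(val, key));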


Commit it to browser persistent storage, or ship it off to a server.


That's what I'm thinking too. There's no way to keep the state in Rust, which means constantly updating browser persistent storage through JS or keeping state client side in the tab's environment. A pretty janky workaround.


Firefox is running behind on a lot of the latest features. And considering their market share, sometimes it's just less effort to ignore its limitations.


In what contexts are they not permitted? I know that not all browsers support them, but are there other limitations?


One particularly interesting context here is that force reloading deliberately bypasses service workers. Try pressing Shift+F5 or Ctrl+Shift+R, and all of a sudden the button stops working until you do a regular, non-forced reload.

Beyond that, I don’t know, but I would expect some lower-powered, storage-constrained devices to reject service workers, and maybe privacy modes too, though in that case they could possibly be allowed with just the background stuff neutered.
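You can see the bypass from the page itself (tiny sketch):

  // Logs null after Shift+F5 / Ctrl+Shift+R, a ServiceWorker object after
  // a normal reload: the forced load is simply not controlled.
  console.log(navigator.serviceWorker && navigator.serviceWorker.controller);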


Given that the spec requires Service Workers to be optional, future Web specs & implementations may build upon that requirement, so writing your software to assume they always load is both brittle today and subject to unexpected breakage in future.


De jure they’re optional, but if they are implemented everywhere or close enough to everywhere, they will de facto become mandatory, as has happened many times before. This is a large part of the reason why a large part of me wishes browsers wouldn’t provide service workers in Private Browsing/Incognito/whatever modes.


I really don't understand your point. Since service workers provide an offline experience and can be inspected from the client, they should be better than no service worker with respect to privacy.


The main purposes of service workers are offline support and persistent background stuff. These are typically irrelevant in Private Browsing windows, which are designed for short-term use, and so service workers are almost always pure waste. (Not always, but almost always, and in line with the intended purpose and semantics of Private Browsing windows, though you can certainly use it in somewhat different ways.)

It’s similar to a large part of the reason why Local Storage and Indexed DB were historically disabled in Private Browsing: why store stuff when it’s only going to be deleted immediately?

Browsers have mostly headed in the direction of removing distinguishing features from their Private Browsing modes, partly to thwart detection of that mode, and partly because many things started to carelessly depend on the gated functionality. Firefox still doesn’t allow writing to Indexed DB (and only made Local Storage work far later than Chromium), and I’ve definitely encountered sites that just assumed it would work, and so fell over in early initialisation.

And so for this specific subthread: it’s ossification at work. If something is supposed to be able to change but never actually changes, you find when you try to change it that you can’t do so any more because people have built too much stuff that assumed it wouldn’t change.


I don't agree that private browsing is for short-term use. You can use temporary storage for as long as it is needed (in my case until the operating system or the browser crashes, aka almost never).


It is designed for that.


Where can I find such a design specification? All I can find is https://w3ctag.github.io/private-mode/ and https://www.w3.org/2001/tag/doc/private-browsing-modes/, which say nothing about the short duration of a private session. Some emphasis is put on the necessity that private mode and normal mode be transparent to the user (i.e. not breaking any normal feature): it should not be possible to tell the two modes apart. If you break a feature like ServiceWorker you are providing a way to detect a private browsing session, and that is a bad thing, worse than anything a SW could enable, in my opinion.


Not sure i really get it.

So we used to have react/vue/whatever. You click a button, and the front end computes the new DOM. No server needed.

Then, we decided to use htmx, where the server actually computes the new DOM elements, and the client is just a dumb client that displays whatever the server gives it.

And now, we give the user's browser all of the information it needs to act like a server, intercept the call, and compute what the server would have responded with (saving us network round trips and lag).
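That is, something like this sketch, if I'm reading it right (handle_request stands in for whatever the WASM module actually exports):

  // sw.js: answer htmx's requests locally instead of hitting the network.
  self.addEventListener('fetch', event => {
    const url = new URL(event.request.url);
    if (url.pathname.startsWith('/app/')) { // routes the "server" owns
      event.respondWith((async () => {
        const body = await event.request.text();
        const html = handle_request(url.pathname, body); // placeholder call
        return new Response(html, { headers: { 'Content-Type': 'text/html' } });
      })());
    }
    // everything else falls through to the network as usual
  });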

Did I understand that correctly?

And yea, I also saw that talk about HTMX on HN yesterday... didn't really understand the advantage then either.


Man, in my 8 years of development I have seen the transition away from server side to SPA, and from SPA back to server side. Now people are doing server side in the browser. This industry will never become boring if we keep repeating these cycles.


Creating MPAs with HTMX is simpler than the SPA approach and makes it so you don't need a "monolith" on the front end.

For personal offline first web apps I build I do it this way, except with JS:

https://news.ycombinator.com/item?id=33319875


I had the same reaction; maybe this new method of updating the front end is more "accessibility friendly".


What a coincidence, I was just discussing on discord a similar approach for our Rust web framework submillisecond[0].

Submillisecond uses lunatic to run Rust code compiled to WebAssembly on the backend. We are working on a LiveView-like library now, and one thing I would love to give developers for free is an offline-first experience. You write everything in Rust, compile it to WebAssembly, run it as a regular backend on lunatic, but also allow for moving the whole server into the browser for an offline experience. If SQLite is used for the DB, it could also potentially run in the browser.

This doesn't need to move the whole app into the browser; it could do so just for the more latency-sensitive workloads that don't fit LiveView well, like form validation on every keypress, etc.

[0]: https://github.com/lunatic-solutions/submillisecond


I write my offline-first web apps like this. I thought about this problem for a while and think it would be cool to offer a totally progressively enhanced experience, from no JS all the way to being offline with no need for the back end at all (except for syncing data).

Another option would be, instead of putting SQLite on the front end, to use a repository pattern: if it's on the front end use IndexedDB, and if it's on the back end use SQLite or some other custom implementation for the repo.

https://news.ycombinator.com/item?id=33319875


Saw HTMX on hacker news the other day, thought I’d do this interesting experiment.


HTMX is very interesting, I saw it presented at DjangoCon.

Being involved in a lot of SPA's these days I can't help think that we spend way too much time building complex frontends and managing state.

Anyone feel the same way? HTMX feels fresh.


Quite a lot of potential. I hope it won't suffer from a performance penalty or something.


Any takeaways? Is it a promising direction? Would it be viable to create an “isomorphic rust” web app?


I’m very interested in isomorphic Rust apps. The widespread support for serialization makes Rust really nice for writing distributed systems that send arbitrarily complex data structures over the wire, which is where JS becomes a pain.


How would you maintain state in the service worker when it is shut down by the browser after being idle for a period of time? Or does Htmx not keep state on the server side in the first place?


What's the point, though? HTMX or any other SGMLish angle-bracket markup template engine is for content creators/authors, not necessarily developers. Sure you can play connect-the-dots and combine technologies picked up from the HN front page, but if you're using heavyweight developer pipelines such as Rust and WASM with service workers anyway, that's already way out of reach for non-developers, and comes with full developer responsibilities for testing, security, maintenance, build systems, versions, dependencies, and whatnot.


Do you know at all what htmx does? Your description of it as an "SGMLish angle-bracket markup template engine" suggests you have absolutely no idea what you're talking about.


Different tools aren't for certain types of people. They're for different purposes.


While this is the least important part of this project: Htmx is great! I've used it quite extensively lately for internal tools at work. AlpineJS has also been useful for things that need a little more JSON-API-driven oomph.


Interesting. How did you work around the issue that HTMX takes snapshots of the DOM as altered by Alpine.js for history navigation [0]? This is the biggest issue holding me back from using the two together.

[0] https://github.com/bigskysoftware/htmx/issues/1015


Basically by just being careful: because these are purely rough-and-ready internal tools, I can be a bit picky about what gets used where, and why. For the most part I don’t have much history navigation at all, which lets me cheat :)


What's the advantage of using hx-boost? Just slightly smoother transitions?


With `hx-boost`, you can progressively enhance your server-rendered website to an SPA, just like with hotwire/turbo [0] or swup [1] (which I'm using a lot these days). This technique can come with a few other benefits apart from nicer page transitions, like preloading links on mouseenter and caching of visited pages.

[0] https://turbo.hotwired.dev/ [1] https://swup.js.org/


So it's mainly used for optimizations, and it's not a critical architectural tool for building a system. I'm looking at using htmx and Alpine in a project so my takeaway is just to avoid hx-boost and rely on the built-in browser navigation.


Interesting. So a front end could utilize HTMX to do dynamic client side rendering without javascript, limiting what goes server-side to only the things the server needs. Sounds very promising.

The code looks a bit complicated though. Some explanation of what’s going on would be helpful.


There is no "without javascript" here. There is a javascript library linked to your html and it then modifies / supplements behavior of the DOM elements. Javascript / DOM / HTML / CSS is super dynamic and flexible combo that can do what looks like a miracle to unfamiliar.


It does, however, raise an interesting question: can we ship the "server" part of an SSR-rendered app as a WASM blob?

So you would have a client side app that is 100% static HTML plus a WASM service worker in the browser, making calls off to some external API as it needs data but otherwise generating static HTML entirely in the browser.

I wonder how much slower or faster this would be than a JavaScript-heavy UI with traditional server side rendering. You would at least remove the RTT latency.


> "can we ship the "server" part of a SSR rendered app as a WASM blob?"

What problem would that solve, and how is that any different than how websites already work today?

There's going to be server-side business logic that can't be "moved" to execute over on the client-side, for security/architectural reasons. On the client, websites already today use JS/WASM.


I mean eliminate client side JavaScript. Move all logic to wasm (making API calls to the backend server for raw data as needed) and have the WASM emit plain HTML and the absolute minimum of js to wire up the service worker and intercepts. No virtual dom, no components, no jsx etc - just plain HTML from the WASM blob.

I have no idea if this would be faster or not than the typical SPA type approach we have with react et al. For simple pages probably not much difference, but for large complex pages ...who knows.

E.g. imagine if you used Emscripten to compile PHP or RoR or something into a WASM blob, had a service worker intercept the calls, and had the wasm blob "serve" the static HTML without ever leaving the browser (apart from API calls to the server for e.g. database access). Replace PHP/RoR with your server-side thing of choice (homebrew or otherwise).

Bit of a thought experiment I suppose. Not a serious thing but kinda interesting.


> I mean eliminate client side JavaScript. Move all logic to wasm (making API calls to the backend server for raw data as needed) and have the WASM emit plain HTML and the absolute minimum of js to wire up the service worker and intercepts. No virtual dom, no components, no jsx etc - just plain HTML from the WASM blob.

How is this different from or better than current WASM-based frameworks?


There is already a nice Rust-only web framework: Yew.

One of the examples https://github.com/security-union/yew-beyond-hello-world



I was actually wondering when fake service worker servers would take web dev by storm and become the preferred way to build offline-first web apps. Go to /take-census-with-spotty-connection, a "web app" bundled into a service worker gives you a form, you submit it, and the web app saves it into an in-browser database. I imagined it as a pure JS solution, though.

After a Django+React project demanded 24-hour days from my team, I achieved the insight that with pure Django pages, you call (request) using strings containing the function name and parameters, like `/articles/page/2`, and get an output (response), and the runtime (browser) can memoize (cache) the result, since the whole process is... functional. Some of my React pages had a dozen ways of reaching illegal states, many caused by network calls. The former became the ideal to strive for. Hence why I think fake web servers via service workers will be popular for bug-free (heh) offline-first apps.



Haha this is fantastic! Turn something that is insistently server side into something client side. Plus it's using cool tech.


Is using a ServiceWorker required here? I'm not familiar with them yet; maybe someone can explain whether the same can be achieved with htmx's `beforeRequest` event (https://htmx.org/events/#htmx:beforeRequest), cancelling the request after running the wasm code?
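Something like this is what I have in mind, an untested sketch (field names per the htmx event docs; runWasm is a placeholder):

  document.body.addEventListener('htmx:beforeRequest', evt => {
    evt.preventDefault(); // cancel htmx's own AJAX request
    const html = runWasm(evt.detail.requestConfig.path); // placeholder call
    evt.detail.target.innerHTML = html; // naive swap, ignores hx-swap modes
  });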

I was considering using htmx + wasm for my website. Combined with my one-sqlite-db-per-user infrastructure it may enable me to do some fancy edge computing :)

Glad to see a PoC showing that the htmx + webassembly part of it is possible!


I’ve done similar experiments compiling a Rust server to WASM and running it in the browser. I didn’t use HTMX, but it could easily be added.

https://github.com/logankeenan/notes-demo-spa

https://github.com/logankeenan/axum-browser


This is clever.

Can we render on the server, then progressively upgrade to rendering on the client?

Also, can we load other wasm modules using server push in the background? You'd get runtime plugins then.


Now that we’re heading back to dumping html over the wire I look forward to the reinvigoration of the XSS attack.


I didn't read the article (I know, I know), but I think it would be really cool to get to run other languages in the frontend. Javascript just plain sucks. Having something like Kotlin or golang (in a reduced form) would be really neat and could professionalize the frontend even more. I think the frontend is always a bit wild west, compared to most backend implementations. That's not only due to more testing but also stronger language guarantees.


Kotlin transpiles to JavaScript natively already, and you can run Go in the browser by targeting WASM

- https://kotlinlang.org/docs/js-overview.html

- https://github.com/vugu/vugu


I know that you can transpile, but in my experience with other languages that transpile (Clojure + Python), the compiler's guarantees were kinda weak. Do you have experience with either of the two?


Kotlin JS transpilation is very solid

It has tools both to emit TypeScript definitions from Kotlin, and to generate Kotlin types from TS libraries ("Dukat" tool)

Go I do not have much experience with unfortunately


If you read the article, you might learn something about WebAssembly, which does the things you're talking about.


> Javascript just plain sucks.

Can you say why? I don't agree; blanket statements like this seem sorta silly.


Do we really have to have this discussion after it has been had a thousand times? That you need something like TypeScript is a telltale sign. You can also do weird shit with it that doesn't make any sense; that's documented well enough, imho. JavaScript gives almost no language-enforced guarantees, which causes bugs. Having compiler guarantees is such a simple thing, but it has so much effect, for example when switching from Java to Kotlin.

PS: I also mentioned language guarantees in my parent post ;)


Which is faster, v8 javascript or webassembly + rust?


So, for your webpage to start, it first has to download a 1.74 MB component?

Just because "it is cool"?


It's a proof of concept, and yes, it is cool. HN can be so critical sometimes, sheesh.


Imagine a line-of-business web app where, once you are logged in and using the app, it downloads this. How is that much different from a desktop app that needs to download a few megabytes of libraries?

It definitely is not needed for blogs, wikis, news sites etc. This would be for web applications.


Not because "it is cool" but because it needs that code to operate. Consider this scenario, even for a classic website/blog: you can eagerly download all of the textual content at first load as markdown, cache it in the ServiceWorker together with a markdown parser, and go offline. You can then ask the network, if available, only for downloading images that then get added to the offline cache. Even with a spotty and slow mobile connection, after the initial load you are done with the network and you can read all the content in an efficient, environmental friendly and fast way. When the network is available, you can check if there are updates by sending a small request to the server, and update the cache.


lol, I thought since it was Rust that it would be like 100k or something. I think C# has its WASM package down to 1 MB or so. The implementation I did with just JS is about 50k or so, total, for the app and everything. That is a bit disappointing.


Have you seen how much you need to download to make Figma work? Not saying that every page should be like that but...



