Either way, I'm receptive to the "LiveView" model, but until such time as a server can deliver both HTML and server-driven native UI for mobile interchangeably, I prefer the RPC approach.
I don't want to:
1. Develop both an API for mobile and LiveView for web,
OR
2. Create my own server-driven UI paradigm for mobile.
I find maintaining one architectural pattern simpler. I do want to support users with the best technology fit, but I don't see why I have to make this tradeoff. The platform holders are (and always have been) jerking us around.
I want to see a LiveView that can deliver both HTML and equivalent native UI markup. This is needed to sell the vision end to end - the world is not just web.
I feel like this would enable a more sensible choice:
Offline (first)? Use RPC with sync.
Online-only? Use the LiveView paradigm and it'll work with native or web.
> until such time as a server can deliver both HTML and server-driven native UI for mobile interchangeably, I prefer the RPC approach.
I really think this is such a key point and is the main blocker to this sort of architecture for cases where you need to support all the platforms.
That said, there are a lot of web applications that just need some form of UI and don't need full multi-platform support, and LiveView-type systems involve far fewer pieces to get going with. I'm thinking here more about company-internal tooling for whatever purpose, rather than web services provided to customers, which are more likely to need mobile apps.
Also, let's say you've already created your backend in Elixir (e.g. using Phoenix for GraphQL or JSON), and have built your mobile app against it. To implement your web frontend, is it easier and better to roll a whole JS app from scratch or just interface with the APIs that already exist locally to produce a LiveView app? Obviously there are some app characteristics that dictate this, but for a lot of things LiveView is still going to be easier. But then, maybe I'm biased because I dislike frontend programming!
I'll also say, one of the key lifts for LiveView is specifically (unsurprisingly) live updates. That is, server pushes.
Websockets don't really make sense as a broad bidirectional communication tool; they're overly complicated for situations where you can just open a plain TCP socket to communicate over, and making them a requirement for clients that have no need of them is a poor ask. So as soon as you're supporting a client other than a browser, while still supporting a browser, you're already likely going to want to support two APIs.
Good design will allow you to share your model for bidirectional communication regardless of the connection type, and you can then bake in any updating JS into the Websocket connection via LiveView or similar, while exposing a more normalized socket endpoint for other servers, mobile clients, etc. Even if you end up having to support multiple connection assumptions at a later point (i.e., a Websocket connection for your web client that contains JS, a Websocket connection for a web client being developed by a different team that should just return data, and a plain socket connection for non-browser based connections), the lift should be pretty small.
I am currently developing sites with Django and DRF, where the client requests what format it wants and the server sends it in the format of the client's choice: a browser would request HTML, in which case the server sends HTML (using htmx). A mobile app might request JSON, or XML, or CSV, or any other supported format. No need to develop a separate API for web and mobile; it's all the same API. DRF makes it very easy.
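For anyone curious what that looks like, here's a minimal sketch of DRF content negotiation (the view, field, and template names are hypothetical, not from the commenter's project): one endpoint that renders HTML for a browser and JSON for an API client, chosen from the Accept header (or a ?format= parameter).

    # Hypothetical DRF view: one endpoint, multiple representations.
    from rest_framework.views import APIView
    from rest_framework.response import Response
    from rest_framework.renderers import TemplateHTMLRenderer, JSONRenderer

    class OrderListView(APIView):
        renderer_classes = [TemplateHTMLRenderer, JSONRenderer]

        def get(self, request):
            orders = [{"id": 1, "total": "9.99"}]  # stand-in for a real queryset
            # A browser asking for text/html gets orders.html rendered
            # server-side; a client asking for application/json gets JSON.
            return Response({"orders": orders}, template_name="orders.html")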
Now, I haven't dug into Apple iOS dev yet. I know they have some artificial limitations on what an app can receive, but that's an Apple limitation, not a technical one. I'm pretty sure Apple allows an iOS app to receive data over the Internet and render that data within the app? I just need to figure out what format it can be sent in for the app to render it.
Your app can certainly request whatever data in whatever format from whatever server it wants. Making substantial changes to the app's functionality using this channel is disallowed.
In the other/current model you're building three components in total: the web frontend, the mobile frontend, and the API. In the LiveView model you're again building three components in total: LiveView for the web front/backend, plus the mobile frontend and the API for the mobile backend.
If done properly, the LiveView backend can share a lot of code with the API backend. Moreover, an added advantage is that the Web codebase and Mobile codebase can be different depending on the requirements of the two platforms, except where, as noted above, it makes sense to abstract it out and share. So, IMvHO better software engineering in general.
It's not a hard problem, but I consider it to be a dumb problem.
Let's say I have absolutely zero business purpose for an API - like zero. Neither general purpose nor single purpose. I now have to make one to serve a native/mobile app. I have to pick a language, design an API surface, pick an RPC framework (REST or GraphQL), adopt a different testing strategy for it, etc.
When all I really wanted to say was: "Hey server... can you serve a slightly different template connected to all the logic I've already written that a mobile app can understand as long as the app contains a component library?"
As for the opposite model where you are API first, at least you have the language, API contract / technology, testing strategy in place. And it will work regardless of what your clients are! The clients then implement their own UI, ideally on top of some shared, headless client (maybe a native module?) and you have maximized code re-use. Only the UI tech is different (and that's a maybe, since you could use flutter or RN), and is tailored to each device nicely.
So I do think the second is VERY desirable because it's single paradigm and you can go pretty far to reduce duplication of effort, almost entirely.
There is no such possibility in the server-driven paradigm, and I'd like to see it because it would give me all the re-usability benefits of the second approach, with a huge advantage of making the clients leaner.
It's a personal thing, but I personally don't want to smash my web-app backend into the same service as a general purpose API. It requires a lot of discipline to keep the concepts separate, and it fails more often than I've seen it work. I do appreciate your comment here of "better engineering" because I so very much wish for that. I wish that the average Django/Rails/Phoenix/whatever framework would not turn into a swampy piece of junk when you keep both responsibilities in the same codebase, but they often do.
The options we have are pretty decent - I just think something like LiveView could be better. Its only promise is an (admittedly slicker) take on SSR for the web. That doesn't move the needle far enough to be revolutionary to me, and doesn't solve a problem that most people have. The problem we have is that the server paradigm is way, way behind where it should be when it comes to serving different types of clients.
The application developers publish native Swift applications, where Swift has full access to the WebKit/Blink API the same way JavaScript does, and even more via the patterns from the Chromium renderer (for instance, access to lifetime events: OnDocumentLoaded, OnVisible, OnFirstPaint, etc.).
The offline-first aspect comes from the fact that every application first runs as a daemon which provides a gRPC API exposing the services it offers; the web-UI application process might also consume this same API from its daemon/process manager, or from other applications.
Note that the daemon process, which is always running, gives access to "push messages", so even if no UI app is running things can still work, like syncing into the DB, or even launching a new UI application in response to some event.
This service process is also pinged back by the web-UI process for every event, so the daemon process can act when there's a need (when everything is loaded on the UI, do this...).
Also, about the article: note that with this solution you code in pure Swift (not WASM), much as the article points out for the web applications that can be built without any JavaScript.
Other languages like Rust, C++, or Python can be added, given that the Swift applications talk to the runtime through a C API. (And I could use some help from other developers who also want this in other languages.)
If you want to ship your applications with this, you get a much better environment than Electron, using far fewer resources, as things are shared across every application. The cool part is that applications can consume one another's services (for instance a Mailer application), forming a network and depending on one another (the magic here is that they are all centrally managed by a core process that is always running).
As always, this depends on the use case. For in-house applications it is often far simpler to keep a lightweight frontend and mobile app is often of no concern. For these kinds of web application a server-heavy architecture was always a good choice.
Of course, this is true. But I prefer architectural patterns that are flexible for future use cases, especially when there's no logical reason why it shouldn't be supported.
You can start this way, but if you have a new requirement, you might reach a technological crossroad and ask yourself "why do I have to add a new paradigm? If the thing I'm already using could just do X, I'd save a shit load of time and/or money."
You can always add, yes. But why do I have to?
The world has treated web and native as separate for far too long. We're really just squabbling over UI toolkits, so why can't we come up with something that just says "Fuck it, we're supporting both native and web as first class in every server framework because that's how it should be. And we're also going to let you use any language in the browser because that's how it should be"
Yes, but as engineers we should strive for the simplest possible approach that solves the problem in a maintainable way. Always choosing the most flexible / the most scalable / the most modern approach is often a waste of resources.
> why do I have to add a new paradigm?
The answer to that question also depends on which paradigm you are using currently, not only on which you are choosing for the future.
I hope one day we can. Though the incredibly tightly coupled server-side-logic nature of LiveView is, I think, a bad fit for the intermittent network architecture of mobile devices.
But in theory I see no reason why you couldn’t deliver LiveView to some mobile runtime that controls its native UI via some abstraction layer. Hell, some of the React Native stuff I’ve seen gets close already.
I'm happily using Phoenix+LiveView for a website that works as a PWA. This also includes offline mode with sync-when-internet-is-available functionality (thanks PouchDB!). This covers online/offline/mobile usecases, with no explicit RPC, and no requirements for the end device that the app would run on, other than a _capable_ browser.
(Admittedly, browsers on iOS are gimped, but that's part of the Apple tax).
I don’t understand. How is the app working in offline mode when the Phoenix server is not reachable, since the interactions are managed server-side with LiveView?
Sorry, my statement indeed got confusing. I'm using LiveView for online interactions. I'm using PouchDB + JS for data synchronization and enabling offline work. You are right that LiveView itself does need to be online, since its whole point is to move (back) the UI logic to the server side.
I think this is just poor phrasing on my part since mobile was top of mind. I'd like to see this work with any native UI that supports an online feature set.
I'd like to see the technology work with HTML or server driven native UI, and let developers decide which UI toolkit is best for their app market.
Since it would theoretically support both, you could incrementally transition as well.
What's incredibly frustrating about the knocks on JavaScript is that it's not a shortcoming of the language that creates the problems hinted here, but the poorly-considered engineering choices of the popular frameworks.
After building my own framework [1] last year, I realized just how disconnected JavaScript developers—yes, the "big boys" are especially off in CS la-la land—were from web development fundamentals. HTTP distilled down to its absolute essence is brilliant, but for some reason gets absolutely ignored in the JS world. When you couple that with websockets for incremental updates (similar to what Chris McCord is doing in Phoenix), it's insanely powerful. It can be reasonably lightweight, too, if you're only shipping the JS you need to render the page.
That rendering some HTML, CSS, and interactive JS in a browser has been turned into what it has is staggering. Though, not surprising, when you realize a lot of the momentum in JavaScript the last decade or so was perpetuated by venture capital (and the inevitable fast and cheap nature of that world) being pumped into inexperienced teams.
Most languages are quite tough to get into. And they require some real learning if you want to do anything trivial. JS is easy to get started with (you only need the browser you already have) and you can do quite a lot without understanding anything about computer science. But it catches up with you anyway. I've seen on Twitter someone repelled by the MDN documentation, describing it as not intuitive. I realized that the documentation assumes you're familiar with the basic tenets of programming (data types and structures, function signatures, ...). Which bootcamps often conveniently ignore.
That seems crazy. I feel like MDN documentation is done so well. It balances well between giving good examples, and yet still remaining exhaustive in what it describes. Most extensive documentation ends up being less than useful and I have to fall back to just search results.
W3Schools doesn't have much in terms of English explanation, has fewer examples, and its playground forwards you to another page, as opposed to MDN, where it's embedded.
MDN documentation and Sqlite documentation stand out to me as some of the best around. The Cloudflare how-tos would be up there, if you stretch the definition of documentation
Tailwind CSS also has super nice documentation. Really quick omnisearch, and covers basically everything in-depth with good examples, and what CSS is generated from each class.
For real? MDN's documentation is some of the best I've ever seen... Complete, frequently-updated documentation of all Web APIs including comprehensive examples of typical usages with clearly-worded explanations! It also links directly to the actual ECMAScript specs that define what the documentation is describing. I mean, how could it get any better?
JavaScript used to be easy to get into (and I suppose technically still is) but the old days of open a console and type in some code or put a src file at the top of the HTML page seem to be gone for most practical and production purposes. Now it's typescript and webpack and a 200 hour course to get started with React. And big teams tripping over each other making components.
The funny thing is, I don't know that the resulting product is really much better.
Yes. What you describe is where I started out and I watched it devolve over the last 15 or so years (why I decided to take a swing at building Joystick—it was getting too ridiculous to keep entertaining). I think so many started out in the chaos that they don’t know what the “before time” was like and don’t have perspective on how complex it’s gotten (it’s just normal to them).
The parent seems to imply that, having learned to program at a bootcamp, you will be unfamiliar with the basic tenets of programming and as such more likely to find MDN documentation unintuitive.
Well, that is my experience too. My background is game programming school: after first learning Flash, the second trimester was about making a game using HTML/CSS/JS (no PixiJS or Phaser here). I did find MDN hard to read and preferred W3Schools. Today, however, I tend to prefer MDN.
MDN is richer, more exact, more complete. But being minimal is better when you're still unfamiliar with the basics.
TL;DR: MDN pairs well with W3Schools; imo both are fine as they are.
How does your framework handle re-rendering large components? E.g. a table with 1000 rows. I can't see any obvious mention of virtual DOMs so I assume you're not using them.
You might say virtual DOMs are exactly what you're talking about when you talk about the design choices of popular frameworks.
It fundamentally comes down to the idea of writing functional components. I declare exactly what I want my UI to look like as if render() is called every frame. But I don't care _how_ it gets there.
The complexity comes from taking this declarative UI and making it performant.
I don't doubt that 90% of web pages could get away with not using React or Vue or Angular. But as someone who loathes writing complex applications with pure HTML and JavaScript, they do serve a purpose.
I've written web apps that are basically just one large render() function with pure JS/HTML and it gets the job done. But at some point you do need to optimise and then it all falls apart.
Virtual doms are a cute idea, but virtual dom diffing will always leave a lot of performance on the table. It’s complex, slow, and entirely unnecessary - especially now we have frameworks like svelte.
It can still work well enough when developers don’t go crazy with divs, and when they use shouldComponentUpdate. But nobody uses that stuff; and modern pages are bloated like crazy.
For my money I think the problem is cultural. There's a community and culture around frontend engineering now which seems to entirely disregard performance in favor of closing tickets as fast as possible. Lots of frontend devs I meet at conferences and meetups have no idea how the browser works, and no real desire to learn. The result is disasters like the new Reddit homepage - which needs more horsepower to render smoothly than AAA video games did a few years ago.
The GP is right. There’s an insane amount of performance being left on the table. I don’t think it’s a technical problem. It’s a cultural problem.
I love how the new Reddit gets frequently used as a straw man in arguments that front-end developers are bad, the state of front end is a mess, etc. If you care to look at the actual reasons that Reddit and so many modern websites really are terrible, you'll see that it comes from the top and in many cases is on purpose.
Reddit (and many others) don't want you to use their website. Especially on a mobile browser. They want you using their app where they have access to more of your data and generally keep you longer. And also where the initial payload of advertising and analytics cruft feels faster to you because you probably are more inclined to be patient with the mobile install/update lifecycle than you'd be with a browser. The GUI changes too were much more likely to come from product management than your lowly front end bootcamp graduate.
I bet a huge percentage of Reddit’s traffic comes from their website. And if they didn’t want people using the website, they wouldn’t have bothered putting so much effort and attention into the redesign. Why bother redesigning something you want nobody to use anyway? And if your argument is that they made it bad on purpose, they may as well have just made their old site worse.
Also, if their business model is analytics and ads, why do they allow 3rd party reddit clients like Apollo?
I hear what you’re saying, but I stand by my perception of what’s going on.
My focus for v1 is on developer ergonomics and solidifying an API (this will be frozen after v1 with the only changes being additions if absolutely necessary—a component you write today will look identical to one you write in 10 years).
All future major versions will be solely focused on performance and security (my way of saying: I haven't stress-tested renders, but for the time being it will work well for the majority of use cases).
Edit: the lack of mention of how I do it is intentional. The last ten years saw brilliant inventions turned into mush because all of the leading project devs got into an unacknowledged dick measuring contest with one another (i.e., passive-aggressive jockeying for dominance that was ultimately a distraction from building software that was usable).
> Though, not surprising, when you realize a lot of the momentum in JavaScript the last decade or so was perpetuated by venture capital (and the inevitable fast and cheap nature of that world) being pumped into inexperienced teams.
Yep. Frontend frameworks get you a slick looking responsive GUI with not much effort (you are outsourcing most of the design work). This wows the average VC. The VC funds you rather than the team with an architecturally simple frontend, and the framework flywheel gains momentum.
> Yep. Frontend frameworks get you a slick looking responsive GUI with not much effort (you are outsourcing most of the design work). This wows the average VC. The VC funds you rather than the team with an architecturally simple frontend, and the framework flywheel gains momentum.
And it will stay working as long as you never need to update any of the packages. And then the rapidly changing, highly co-dependent nature of the FE npm ecosystem will bite you in the ass unless you were really careful with your choices - which you weren't because giant megacorps said it was 'best practice'.
In less than 10 years, being able to recompile a modern project will be a new kind of job: working around a legacy version of webpack with a legacy version of Babel that supports some syntax that would never have been made a standard, plus a spaghetti of deprecated libraries like axios, which only became popular because its users were too lazy to look at the standard.
That will all become very interesting if we start to find some XSS issues in today's major frontend frameworks.
> I realized just how disconnected JavaScript developers—yes, the "big boys" are especially off in CS la-la land—were from web development fundamentals. HTTP distilled down to its absolute essence is brilliant, but for some reason gets absolutely ignored in the JS world.
Can you please elaborate on what these areas of disconnect are? I looked at the Joystick documentation hoping to find more of your thoughts on this topic, but could not.
I've tried writing a response to you a few times, but I keep realizing how complex it is—the real answer to your question is worthy of a long blog post.
The core problem is the blurring of the lines between a framework and the underlying technologies of the web (HTML, CSS, and JavaScript). To me, it seemed like a lot of unnecessary complexity was being added just to achieve the common goal of rendering some HTML, styling it with CSS, and making it interactive with JavaScript (what I refer to in conversation as a "need to look smart" or "justify having a CS degree").
For example, introducing a new syntax/language like JSX (and the compiler/interpretation code) when plain HTML is simple and obvious. The drive to make everything happen on the client leading to vendor lock-in by third-party providers to fill the gaps of having a database (e.g., Firebase/Supabase). Making routing seem like some complex thing instead of a URL being typed into a browser, a server mapping it to some HTML, and then returning that HTML.
Where this really became clear to me was in the resulting developer experience. I've been teaching devs 1-on-1 since ~2015 and had a bird's eye view of what it was like using these tools across the spectrum of competency (beginner, intermediate, advanced). The one consistent theme was confusion about how X framework mapped back to HTML in the browser.
That confusion was handled in one of two ways: making messes in the codebase to solve a simple problem (e.g., lord help me with the nested routing in React) or giving up entirely. As an educator, the latter was especially bothersome because the transition from basic web dev to these JS frameworks was so jarring for some, they just didn't want to bother. That shouldn't happen.
---
I'd like to elaborate on these things a bit more. Can you send an email to ryan.glover@cheatcode.co so I can remember to follow up once I've organized the post I've hinted at above?
Every framework has its learning curve, tradeoffs and frustrations. JSX isn’t that far off from HTML and can be insanely productive for generating reactive UIs compared to the old jQuery days. I’m not sure there’s a one true framework that is easy to learn, sticks to idealistic CS principles and is productive.
> I’m not sure there’s a one true framework that is easy to learn, sticks to idealistic CS principles and is productive.
My goal is to make Joystick that. It's less about sticking to hard principles and more about keeping abstractions thin and APIs clear. I don't expect everyone to agree with me, but I'd bet money if you plopped a newer developer in front of a Joystick component vs. a React component they'd pick up the former much faster (my personal standard of whether I'm "getting it right" or not).
You say that JSX adds complexity and plain HTML is simple and obvious, and yet, when I look at Joystick, I see render functions and non-obvious helpers like `each`, `when`, `component`, etc. JSX is really simple to understand for basic use cases and it provides so many benefits:
- support for Javascript expressions
- transpilation
- better error reporting
I'd also argue that JSX is overall a LOT more readable than some of the more advanced examples you provide in the Joystick docs, especially because you don't get syntax highlighting in javascript template literals.
> The one consistent theme was confusion about how X framework mapped back to HTML in the browser.
I'm not saying you're lying, but in the case of JSX, I find this very hard to believe.
I do 100% empathize with a new developer approaching React who starts with `create-react-app` and has to figure out which routing package to use, etc. But projects like `next.js` reduce the complexity of the mental model significantly IMO (file-system routing, sane defaults for compilation, great local dev experience, etc).
Each renders each item in the array you pass, When renders the html you pass when the condition is met, and Component renders another component. Those are all used inside of a plain string, leveraging JavaScript interpolation (core language feature, no hacks). Again, I want you to be able to go from learning pure JavaScript to Joystick. Assuming you learned about stuff like interpolation and template literals, you’d have an existing mental model for understanding what’s happening. Coupled with writing plain HTML, it’s as close to the metal as you can get.
As for Next, it’s certainly a step forward, but the folder-based routing is another abstraction that obscures the concept of a URL and uses hacks (like weird folder names to support params) to get what is a one line string in something like Express (the router in Joystick). Nothing inherently wrong with that, but it breaks affordances around how the web works. Check out the Next boilerplate on Github on the CheatCode account. Working on that furthered my motivation to build Joystick.
As for syntax highlighting, you can get this by prefixing an html`` before the backticks for your string (I need to spend some time automating this so IDE’s respect the HTML in the string w/o a helper).
To be frank, based on your general tone and username, you sound like someone from one of these projects (or your livelihood depends on them in some way) and got your feelings hurt.
>The core problem is the blurring of the lines between a framework and the underlying technologies of the web (HTML, CSS, and JavaScript).
I believe the core problem is that we've switched from a static document model to a dynamic application model, yet kept at the center of our focus the idioms and technologies meant for the former.
> Can you please elaborate what these areas of disconnect are?
I'm not the OP, but something I have noticed is that the JavaScript wunderkinder seem to want to completely reinvent web browsers, but in JavaScript. They seem to not actually know HTML, CSS fundamentals, or even HTTP. So you end up with frameworks that reinvent everything a browser already gives you (for free) but entirely inside their JavaScript monstrosities.
I might just be grouchy and uncharitable but it's certainly the feeling I get looking at any of the big JavaScript frameworks. It's infuriating to see frameworks reinvent something like a button with a bunch of anchors, divs, and spans when HTML gives them a perfectly functional button with a bunch of built-in events including touch events.
It’s worse than that, not just web browsers, they’re also forcing these frameworks into desktop apps, mobile and embedded because they refuse to learn any other platform or language.
> That rendering some HTML, CSS, and interactive JS in a browser has been turned into what it has is staggering.
I think you are probably ignoring the fact that we have basically converged back to a web monoculture again.
A lot of the current frameworks were born out of "Chrome is shitty this way. Firefox is shitty that way. Internet Explorer is shitty this other way. This framework elides over all that."
Not at all. I cut my teeth debugging CSS in IE6 (front-end Vietnam). I’m an old timer. And based on what I’ve learned, what I’ve done in Joystick could have been done in some capacity pre-ES6 and easily in 2014-2015 when most of these frameworks were born.
I was going to say that I was surprised htmx wasn't mentioned in the article. It's backend-agnostic and extremely easy to use. Drop in Alpine.js and I think you have a really powerful setup without writing any JS. I've been using this with Go[0] and enjoying it.
So, I have only been working in htmx for about a week on a multi-step checkout app, and while it's been mostly awesome, I think I ran into the Achilles' heel of htmx: form state and the back button.
In short, if you enter data into a form and then htmx push-navigates away, when you click the browser back button you get the DOM as it was originally delivered from the server, without any data the user might have entered into text boxes. This is a show-stopper problem and we've been working on workarounds; I'm not sure any of them is good. Basically we have resorted to plain old full page reloads with client-side redirects to resolve this.
The thing about HTMX is that the mental model about state should tend towards "the server is aware of everything". With the added benefit that you can achieve better RESTful URLs that respect the navigation actions. Without having tried this (but having used HTMX quite a lot), I would try the following (let me know if I didn't understand the problem correctly):
Checkout has the following URLs, one for each step: page.com/checkout?step=1, page.com/checkout?step=2, and so on.
1) Making sure the server knows how to render all those pages independently (like if the user does a hard-refresh or if they open that URL directly, without navigating to it through a link). Note: I believe this should be the default in any website if you respect the concept of a URL (whether client or server-side rendered).
2) If there's a form input at each step, the server needs to store that info and be aware that the user has an incomplete checkout.
Now the user is at step=2 and presses the back button, or clicks on the step=1 link. In that case, the server should know the information stored in the point no. 2 above, and return an HTML form with the data pre-filled based on the last state. E.g:
1. You need different URLs for each step.
2. Each URL should work both with HTMX (maybe using hx-push-url or the HX-Push header) and *without*. That is, any navigation to that page should also render the same HTML.
3. The server needs to be aware of the state. When a user requests page.com/checkout?step=2, the server should know if the HTML form requires pre-filled values.
This increases the complexity on the server a bit, but I believe it reduces the client-side complexity a lot more.
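To make the idea concrete, here is a minimal sketch of that server-owned state in Python/Flask (the route, template, and session key are all made up for illustration, not taken from anyone's actual app): the server stores whatever was entered at each step and pre-fills the form whenever a step is rendered again, however the user got there.

    # Hypothetical Flask sketch: the server owns the checkout state.
    from flask import Flask, request, session, render_template

    app = Flask(__name__)
    app.secret_key = "dev-only"  # needed for the session in this sketch

    @app.route("/checkout", methods=["GET", "POST"])
    def checkout():
        step = int(request.args.get("step", 1))
        data = session.setdefault("checkout", {})
        if request.method == "POST":
            # Save whatever the user typed at this step, then move on.
            data[str(step)] = request.form.to_dict()
            session.modified = True
            step += 1
        # Re-rendering any step (back button, direct URL, hx-push-url)
        # pre-fills the form from what the server already knows.
        values = data.get(str(step), {})
        return render_template("checkout_step.html", step=step, values=values)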
> 2) If there's a form input at each step, the server needs to store that info and be aware that the user has an incomplete checkout. Now the user is at step=2 and presses the back button, or clicks on the step=1 link. In that case, the server should know the information stored in the point no. 2 above, and return an HTML form with the data pre-filled based on the last state.
The problem is that when you press back button in the browser, it doesn't make another request to the server. It reloads the page from browser memory. A link back can be made to work, but that's not what I was speaking to.
I tested the most trivial case, when you return a response, and push navigate to next step, then press browser back button, form state is not retained. Both chrome and firefox do not restore the form state on navigation back after a url push navigation.
I don't think that without some client side JS that saves/reloads prior form state this is a solvable problem because of browser behavior. Either that, or perhaps use different divs for each step then have js hide/show them which is what I will try next but involves writing js to do so.
EDIT: tried multiple divs on same page, same behavior so that will not work.
It's pretty clear that htmx hasn't considered the back button much at all; it also clobbers the page titles in history (https://github.com/bigskysoftware/htmx/issues/746), but that's a fixable problem.
What about using hx-trigger="load" for the form? Maybe that makes the browser reload it even after hitting the back button? (Sorry I can't check this right now, just a random idea).
Edit: I quickly tried it (mixing hx-from="#some-other-element" and hx-trigger="revealed", and it seems to be doing the request, but I haven't looked a lot.
Ok, so it turns out default Chrome/Firefox on Windows/Android were not issuing the history refresh request on back-button click. Turns out you need to add some additional config to make that happen. It's now working, and it's excellent.
Set htmx to not cache prior pages by setting `htmx.config.historyCacheSize = 0` in window.onload.
Also set the http caching header `cache-control: no-cache`
One approach you can try is using the localStorage API to temporarily store the user's form data. I have done something similar in the past. You can use a light library like Alpine.js to make this work nicely. When the form renders, check if there is any temporary form data in localStorage; if so, initialize the form with that. When the user saves the form, remember to clear localStorage. This works nicely because even if the user closes the browser or something by mistake it is all still there, and you don't need to keep temporary data around in your database.
> What is old is new again I suppose: we used to do this with PHP and jQuery once upon a time too — though LiveView and similar are far nicer of course.
I used to do the same with PHP + Prototype.js back in 2006-2007 before jQuery existed, including pretty weird hacks for non-AJAX supported browsers (using JS to append <script></script> from server-side to the DOM).
PHP, Django, Rails, whatever generates HTML, basically. I also slightly prefer Unpoly.com, because it takes whole pages instead of fragments. This means that if JS is disabled the site just keeps working, though with reloads.
You can achieve full support with JS disabled using HTMX as well. It takes a little more work but HTMX provides headers[0] which you can evaluate on the backend to determine if you should return a partial or not. If JS is disabled, the HTMX headers will be missing and you know it's not an HTMX request.
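As a rough illustration (a Python/Flask sketch with made-up route and template names, since the commenter doesn't specify a backend), the check boils down to looking for the HX-Request header that htmx adds to its requests and returning either a fragment or the full page:

    # Hypothetical sketch: one route serves htmx fragments and full pages.
    from flask import Flask, request, render_template

    app = Flask(__name__)

    @app.route("/articles")
    def articles():
        items = ["first", "second"]  # stand-in for real data
        if request.headers.get("HX-Request"):
            # htmx made the request: return only the partial it will swap in.
            return render_template("_articles_list.html", items=items)
        # Header missing: JS is disabled or this is a direct visit,
        # so return the complete page.
        return render_template("articles_page.html", items=items)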
I think my next site is going to be php+Laravel. I had a lot of fun doing web dev in php. Nothing has matched that since. I never tried rails though. If I see another bumblefucked SPA I'm gonna cry.
If someone took blazor away from me tomorrow, I'd probably consider PHP for a while. The most important part of the programming model is very similar in my experience. I could take a Razor component and convert it directly to a PHP partial pretty quickly.
Around 15 years ago I needed to build an internal app for my company that would aggregate and display a lot of data. Our backends were all in Java and I had a sour taste for JavaScript from a previous project, so I looked around for an AJAX framework that would allow me to avoid writing any JavaScript. Lo and behold, there were actually 2 viable ones on the market: Google GWT and Echo2. I played with both of them and Echo2 blew me away. It looked and acted better than anything else on the market at the time. Rich widgets, translucency, works in (almost) any browser, etc. I rolled an app with it.

The mechanism of operation was pretty similar to LiveView, except WebSockets didn't exist at the time, so it worked by having a super thin JavaScript client that polled the web server on a regular basis. So yeah, a bit chatty, but for an intranet app I didn't care much. Everything worked and felt like it was a Java Swing app, but in a browser. The app became popular and we started writing extensions to it and adding modules.

Then the inevitable customization came. "Can we create a widget that does X?" "Ughh, let me take a look." This is when my nightmare began. Writing a new widget for this thing required putting together some utterly nightmarish JavaScript code and then compiling it together with an equally nightmarish piece of Java code. If you had any custom stylesheet you were basically screwed. So roughly 2 years down the road from starting the project I started rewriting it in pure JavaScript and ExtJS (now called Sencha), which was one of the first few juggernaut frameworks for writing SPAs circa 2007-2008. That's why when things like Blazor and LiveView roll around I get flashbacks akin to post-traumatic stress disorder.
I remember when Sencha came on the scene :) WebSockets help with more efficient bidirectional comms and guarantee load-balanced process placement, but LiveView can also be used with long polling if folks have a hard requirement. We also have much better DOM APIs for things like efficient diffing/patching that I'm sure you lacked back then. Were you keeping stateful "widgets" on the Java side or hydrating from client state for interactions?
> We also have much better DOM APIs for things like efficient diffing/patching that I'm sure you lacked back then
Oh I hope so. I marvel at React virtual DOM magic on daily basis so I'm sure it can be all done a lot more efficiently these days.
> Were you keeping stateful "widgets" on the Java side or hydrating from client state for interactions?
IIRC it was 90% on the Java side. The default polling rate was something along the lines of 200 ms. So if you clicked a checkbox on the front end, it would perform the animation for clicking it on the front end but it wasn't considered checked until the backend was informed of it and signed off on it. In Java code if you had a listener for onClick you would write a normal Java method and it would get invoked normally with access to all your server stuff. As you can imagine, Echo2 server session objects could get pretty heavy. Reminded me a bit of mainframes and terminal clients lol :)
Chris McCord, quoted in the article, explains extremely well the absurd state of stateless http requests, from a perspective that is not appreciated in the article (queued to 40m50s):
The programming model (in liveview, don't know about blazor or hotwire or livewire) really lets you get better performance by doing less. A part of me sarcastically thinks wow, damn, deleting data is irreversible data transformation and therefore increases entropy, every stateless http request is inching us closer to the heat death of the universe.
He just glosses over the "you can use a websocket API", which would solve most of the issues he's describing. There are a ton of websocket libraries that are easy to use. Do all the auth and session establishment once, and then communicate over websocket if you hate sending cookies and session data back and forth.
No, you missed the point. Liveview works because there’s a stateful Elixir process on the backend for each open ws connection. This model doesn’t work in nearly any other backend option in any other language. If he glossed over the web socket option you’re glossing over the uselessness of the ws if the backend has the memory of a goldfish.
I think it is catchy, and saying "I tap danced my frontend" seems to work for me and reflects the speed of development that CLOG and Common Lisp offer. https://en.wikipedia.org/wiki/Clog_dancing
I just shipped my fourth large project based on Phoenix LiveView and I think it is the best thing that happened to web dev since forever.
The JavaScript is minimal (e.g. to manage selection in a text field or to handle copy/paste). Everything else is in Elixir. Tailwind is used for CSS.
Having the full client state available on the backend is really incredible. It is so much easier, no more ajax/graphql... You just have all your backend data "under your fingers".
I can only recommend giving it a try.
Also, if your app is offline first, phoenix channels are great for sync. It is not live view, but it is easier to use than ajax calls.
> Having the full client state available on the backend is really incredible.
Got the exact same feeling writing my latest app with Laravel Livewire. The ability to use backend state seamlessly on the frontend without explicitly passing state back and forth reduces complexity so much it's like writing 1 application instead of 2. I was able to build rich, fully interactive apps with minimal JS -- I'll never go back to React again.
A question from a newcomer to the ecosystem: do you have any recommendations for an admin system like Django's? On a related note, how do you usually determine whether a package is abandoned vs stable?
Out of curiosity, how difficult is it to support a large project in Elixir, especially refactoring, given there is no static typing? I know about pattern matching, which helps, but I'm interested to hear practical experience.
The first thing is counterintuitive to many programmers: basically, you use pattern matching aggressively, like this:
def full_name(%{first_name: <<_::utf8, _::binary>> = first, last_name: <<_::utf8, _::binary>> = last}) do
  "#{first} #{last}"
end
This code will work only if a map/struct with the correct non-null fields is passed. The pattern matches a UTF-8 string of at least one valid UTF-8 character (and binds the whole string). Any other argument will crash.
Crashing in elixir is the way to handle unexpected things. For example, you will do:
{:ok, myobject} = DB.get....
And if the DB fails (something unexpected), it will crash, the process will be restarted, the connected LiveView will be remounted, and it will start again from a clean state.
Of course, errors that are expected should be handled.
The second thing with Elixir is that functions are decoupled from data. You can move them around easily. For example, the above function will work on any struct with the correct fields, or on a plain map. But you can pattern match on a specific struct too, which makes the code more like OOP, though that is more rarely done.
The key idea is to restrict code paths to something you expect, and always be explicit about what to expect.
When it comes to refactoring, the fact that Elixir is compiled helps a lot - there are a set of mistakes you make during a refactor which the compiler will just catch for you.
Initially, I thought this was going to be about WASM, based on the title.
//
The thing about rendering and processing things server-side and relying on very little, if any, JavaScript is that it makes it possible to send very small amounts of data to the client but still handle demanding tasks.
There's something to be said for making a website usable for even the most anemic of client hardware.
I remember JSF (Java Server Faces) and GWT (Google Web Toolkit). Both burned in flames.
JSF: main idea is to abstract away boundary between server and client. Turns out -- you really want to know where that thing is running -- on the server or on the client. So it turned into a fight against that main idea of JSF.
GWT: main idea is to forget JavaScript/DOM and write pure Java. Turns out -- you really need to know JavaScript and your DOM to write GWT. So it turned into a fight against that main idea of GWT.
I am frequently surprised by folks that are active fans of both. So, I don't think they are dead dead. But GWT, in particular, really helped cement a ton of distrust for any framework coming out of Google.
JSF was trying to make handy components for JSPs. And made them unusable in the process.
Also Vaadin which sort of combines two approaches and these days even allows you to go Java-only or Javascript-only. The company is still alive and well so I assume it's popular for internal enterprise apps.
JSF and GWT were rightly abandoned, but the new frameworks are different in ways that might be relevant. One of the main problems with JSF is the mismatch between its stateful programming model and HTTP's statelessness; as I understand it, LiveView uses a persistent websocket connection to a stateful server process, so that mismatch doesn't apply. Another JSF problem is that it involves a component-based templating system that abstracts quite a long way from HTML, making it hard to figure out what's actually going on. LiveView, Hotwire, and htmx all seem to be much closer to HTML, which might make them easier to work with.
I admit I'm still too scarred by JSF to be keen to try out these new frameworks and find out if they've actually solved these problems, but I'd love to read an evaluation of the new server-side frameworks from someone who experienced the problems with the old ones.
I've worked quite a bit with JSF and although I was never a fan it does have some points. It allows for focusing more on content than visuals, assuming the visuals won't be that fancy. There's plenty of apps that don't really require going all-out on eye candy. Note that it was always a problem when people did want eye candy. I also rarely had issues with performance, but perf. is always an issue regardless of whether you render server- or client-side.
I pray for WASM to eventually replace JS. NodeJS ecosystem with all the experimental features, job ads in update notes, shitload of dependencies, one-function packages, legacy JS is a special kind of pain that I avoid whenever I can.
There is no BE or FE lang. The browser (the universal VM) only takes JS and WASM at the moment. So they can be considered browser NATIVE.
TypeScript is not native. Is it then also a BE lang?
What is happening here is a big rift in the programmers community between the "I'm productive in it so it is great" and the "I prefer to use strong typing and proofs to ensure it does not break at runtime".
And the second group has managed to compile more and more of its langs to JS and WASM.
I actually appreciate domain-specific programming languages. It's fine (for me) that JS is the native front-end language. It's fine that Rust targets systems and embedded. I love that Python is a middle ground, and really great backends are built on it. I'm great with C being a really low level language that forces me to think about the machine.
Taking "I'm productive in it so it is great" to an extreme, we end up seeing precisely the kind of domain crossover TFA hints at. Somehow, since Rust is cool, it should be the future of web / in-browser[1]. Somehow, since JS is popular and easy to use, it should be on the james web space telescope[2]. Somehow, since browsers are well accepted, node.js is a reasonable benchmark for embedded systems performance [3]
GTFO with that. That is how we get language bloat and the many overlapping frameworks that creep into the language and make it unrecognizable. I'm looking at you, Tokio and D3.
> Somehow, since browsers are well accepted, node.js is a reasonable benchmark for embedded systems performance [3]
NodeJS relies on V8, which is perhaps the world's most highly optimised interpreter (/JIT compiler/runner), and a spectacular piece of software. The v8.dev blog alone is fantastic reading for anyone who works on compilers, as I do.
I don't like the 'dev' tendency in the programming community, to just hack stuff together and say "well, it works!", but I'm also not impressed by people trashing any high-level languages just because it's a meme they heard.
V8 is phenomenally performant and is a hard benchmark to beat for anyone and anything, even at the compiled end of the spectrum. (Yes, it's beaten by C/C++/Rust/Go, but it's not to be sneered at, at all.) I'm struggling to equal it with the highly optimised Python JIT-assisted interpreter I'm writing for my work - even after relaxing the ABI and writing sophisticated instruction-set-specific optimisations. (And I'm writing it in Rust, just to score at least one point off the cargo-cult bingo card.)
That is impressive, and not unexpected given the (likely) billions of dollars of investment in things like nodejs. I did not mean to disparage it in itself just because it was developed for a particular domain.
However, if "An embedded system uses the internet to communicate" has become synonymous with "It runs linux and uses web services and nodejs" then that would be a perfect example of contorting the application to match the technology that more programmers are most productive with.
> I did not mean to disparage it just because it was developed for a particular domain.
Fair enough, that's all I was saying. I'd agree with you that the increased use of Node everywhere to build applications, with tools like Electron, is an utterly stultifying trend.
It's fantastic for I/O-heavy 'server'-type code, and for using JS as an at least surprisingly performant interpreted language, but it's disappointing to see it replacing languages like Swift or C++ in computation-heavy systems programming - or domains which should be the preserve of systems programmers.
> However, if "An embedded system uses the internet to communicate" has become synonymous with "It runs linux and uses web services and nodejs" then that would be a perfect example of contorting the application to match the technology that more programmers are most productive with.
I absolutely agree. No quibble with you there! I'm a fan of using the best tool for the job. And, much as it's a meme, I absolutely adore Rust for lots of systems programming applications.
PS: If what you love about Rust is its safety even more than its speed, then I'd recommend looking into Ada, which could be called the precursor of Rust. It has none of Rust's modern trendiness, but it's a mindblowing accomplishment in TypeScript/Haskell-grade type expressivity (with Rust-grade safety) while still retaining I-believe-greater-than-Rust speed.
I have found Node.js frustrating for CRUD APIs; the 'fantastic for heavy I/O' bit, while true, ends up guiding you toward complex microservices (or trying to use external services for everything), because you don't want to slow down the event loop as you add more and more CPU processing to your app.
Interesting! I'm not a Node expert specifically, just a general systems programmer, but I might be able to give you some pointers.
Is your application I/O-bound, or CPU-bound? i.e. what's the bottleneck, which you would have to increase in order to speed it up? Your comment is a bit ambivalent given your remark about adding "more and more CPU processing".
If you're I/O-bound, then you're free to do more CPU processing. If/once you're CPU-bound, there are a few questions:
- Are you using all your cores? Node is single-core, so you may need to run one Node process per core. This obviously depends on how parallelisable your program is. Edit: u/eyelidlessness has given some more Node-specific suggestions that may allow true shared-memory concurrency, or even shared-memory parallelism across several cores.
- Are you able to increase your CPU's clock speed? (This obviously assumes a cloud environment or something similar, where you can easily swap out CPUs. I'm not talking about overclocking.)
- Have you profiled exactly what is using so much CPU? Is there some wasteful computation you can remove? Try `node --cpu-prof` to generate a profile. If you're unfamiliar with analysing profiles, Brendan Gregg's blog is the place to go. This article from the guy who wrote Sled is also a very good longread: https://sled.rs/perf.html
I'm surprised if you're really using so much CPU in a Node application, at least if it's a typical CRUD one. I'd strongly suppose that you're doing some wasteful computation, either in an algorithm in your business logic, or else in inefficient JSON parsing. Let me know if you can give any more info :)
Edit: It looks like u/eyelidlessness has given some more Node-specific tips for improving CPU saturation. I'd definitely check out the pointers that he/she gave.
Maybe so many as to cause choice/research paralysis. If you’re primarily interested in writing JS/TS, worker threads are a great option. And for most use cases you don’t need to worry about shared memory/atomics, postMessage has surprisingly good performance characteristics.
Worker threads spawn a new JS VM (which implies a new GC) for each worker! We tried it and the gains from parallelism stopped when there were half as many workers as CPUs.
They do create a new VM isolate. That’s a cost your workload will need to exceed before you get much if any benefit.
As far as thread count, my default guidance is 50% of cores if your CPU has SMT/hyper threading, coreCount - 2 otherwise.
Those numbers can go higher depending on how much of your workload is CPU-heavy. If you have an even mix of compute and IO, for example, your threads will frequently be idle/less contentious (same principle as the single threaded event loop).
And if your workload is a queue of short lived, isolated steps, I also recommend pre-spawning idle threads (potentially well beyond that active maximum). Pre-warming those isolates can help as well, or isolating with vm APIs (eg SourceTextModule) instead.
As with, well, everything: your mileage may vary, it depends on your actual bottlenecks and constraints, as well as your tolerance for tuning.
A CRUD API doesn't mean "just talks to the database and nothing more"; a simple example (one of many) is how PDF generation can f*ck up your event loop.
My CS advisor in undergrad stirred a love of safety in me with Ada. I've always wanted to get back into it. Recent headlines have made me think that Rust will move in that domain. In 10 years, probably a lot of safety-critical systems will use a lot of Rust. Ada was (to my knowledge) never particularly well accepted by "the masses", whereas Rust is. It was just too early.
Oh, nice! No, Ada doesn't seem ever to have achieved the same mainstream adoption, or at least awareness, that Rust has. Which is startling to me, because it's comparable to Rust in performance while being far far greater in safety, including inbuilt support for design-by-contract (honestly I feel like I'm shouting at a brick wall trying to get people to understand the benefits of DbC; Rust's type system is only a first-order approximation).
I agree that the problem seems to be that most 'practical programmers' don't understand the benefits of PL-research-y features, until they're forced to use it and then suddenly it takes off (cf the growth of ADTs or dependent typing after TypeScript introduced people to them).
The other problem is the perennial 'trendyism' in programming, which I hate. People won't investigate interesting languages from the 80s with unusual features; only once something's added to the JS framework du jour does it achieve wide adoption.
There was a thread a while back which covered the various variants of Ada and how their memory management contrasts with that of Rust: https://news.ycombinator.com/item?id=14210258
Basically: no, it likely hasn't changed much since you learned it, unless that was genuinely decades ago. But there are various solutions (or, at least, ideas which are considered solutions, depending on what you consider the problem to be) that you might have missed!
Sorry, I should have been clearer: I meant that it's had the most manpower dedicated to optimising it, not that it's necessarily the most optimal. It's hard to make apples-to-apples comparisons in that respect, given the significant differences between different language specifications. (For instance, it's hard for anyone to write a reference-compliant interpreter for Python which is as fast as the interpreter they could write for, say, Ada or Haskell.)
Well you did say: "NodeJS relies on V8, which is perhaps the world's most highly optimised interpreter (/JIT compiler/runner)"
But even in manpower this is pretty hyperbolic compared to Java and .NET. There was probably more manpower in even some of the third party Java runtimes like Azul.
> Well you did say: "NodeJS relies on V8, which is perhaps the world's most highly optimised interpreter (/JIT compiler/runner)"
Yes, and that's exactly what I meant. It's undergone the most optimisation. That doesn't mean the result will necessarily be more performant than some other interpreter for a different language which is simpler to interpret. By that logic, I could write an 'x86 interpreter' which would blow LuaJIT out of the water...
> But even in manpower this is pretty hyperbolic compared to Java and .NET.
I wasn't including Java because it's not strictly an interpreted language, though I admit you could write a whole book about the philosophical differences between V8's abstract machine vs Java's bytecode VM. .NET I don't know so much about. As far as I'm aware, .NET hasn't had as much manpower invested, though Java has & would certainly beat V8 if you included it.
That is precisely the point jvanderbot made: nodejs has less-than-optimal performance, and not for lack of trying to improve it. It's hard to write fast implementations of js or python, so they should not be used where speed matters.
> nodejs has less-than-optimal performance, and not for lack of trying to improve it
I think you mean "less-than-C" performance, no? I'd say it's pretty close to optimal performance - it's just that the optimum is not that high compared with a stricter language, or indeed a compiled language, which are able to exploit far more invariants (especially in the latter case).
Anyway, I think his point was fundamentally correct, but it just seemed like an odd choice to single out one of the world's best interpreters - and one which is fairly competitive even with fast compiled languages - as your example. If he'd chosen CPython, I'd have been with him all the way, haha. (As someone who's written a faster Python interpreter, ...don't even get me started on that one.)
To be fair, there's a bound on how much the JIT/interpreter can do for a given language. Lua is a lot simpler, and a lot easier to optimize as a result, compared to JS.
If you squint a bit, Lua and Javascript are basically the same language. Lua shares many of javascript's peculiarities and warts (strings you can do arithmetic on, all numbers are doubles (originally), arrays are objects are hashtables of sorts, global scope by default etc). Now it's of course true that Lua is less of a mess than javascript, but a) the complexity explosion of javascript postdates V8 quite a bit b) lua has plenty of optimization challenges of its own that javascript lacks (e.g. metatables) c) the relative amounts of resources poured into the two projects differ by orders of magnitude (3?). I'm pretty doubtful anyone tasked to build the fastest JIT in the world from scratch upon being offered a choice between targeting lua (with luajit's total budget) and javascript (with v8's total budget) would pick lua.
No offence, but have you written any compilers or interpreters? The points that you discuss (all numbers are doubles, strings have arithmetic methods) may be performance concerns for application developers, but they have very little to do with the optimisations you can make as a compiler/interpreter writer.
The only one that's somewhat relevant is 'global scope by default', but this doesn't touch the surface of the issues that make JS hard to optimise, such as the fact that your, say, memoisation of an object property or method may be broken by an `eval` call of an arbitrary runtime value somewhere else in the code (which, due to asynchronicity, could take place at more or less any time from the point of view of your given 'peephole').
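To make that concrete, here's a contrived sketch of the kind of thing I mean (the names are made up for illustration):

```ts
// Contrived sketch: the engine would like to cache the obj.area lookup inside
// hot(), but a direct eval of an arbitrary runtime string can redefine it,
// so any cached assumption has to be invalidated.
const obj: { area: () => unknown } = { area: () => 42 };

function hot(): unknown {
  return obj.area(); // looks monomorphic... until it isn't
}

function runPlugin(src: string): void {
  // Arbitrary code supplied at runtime; direct eval can reach obj through the enclosing scope.
  eval(src);
}

console.log(hot());                       // 42
runPlugin("obj.area = () => 'patched'");  // invalidates the cached call target
console.log(hot());                       // "patched"
```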
> No offence, but have you written any compilers or interpreters?
I have, but nothing sophisticated.
> The points that you discuss [...] may be performance concerns for application developers [...] but they have very little to do with the optimisations you can make as a compiler/interpreter writer. [...] The only one that's somewhat relevant is 'global scope by default'
I didn't mean to imply that these were the three common traits that make both Javascript and Lua particularly hard to optimize; I just picked them as examples of how Javascript and Lua are closer to each other than most other dynamic languages.
But let's dig in a bit on your claim that things like all numbers being doubles, or having an array-cum-map-cum-record type, have very little to do with the optimizations you can make as a compiler/interpreter writer, because it sure seems to me that LuaJIT and V8 do a bunch of optimizations around these things. Both have dual number representations under the hood and will try to avoid representing numbers that remain in the domain of 32 bit integers as double values internally when that gives performance gains. The logic for figuring out if that's the case doesn't seem to be super-straightforward or target-architecture independent, judging from the comments in <https://github.com/LuaDist/luajit/blob/master/src/lj_opt_nar...>.
LuaJIT furthermore uses NaN tagging (as do some JS engines, although not V8), which looks less attractive to me as a representation strategy if your numbers are not all/mostly notional doubles (as is indeed the case in newer versions of Lua, where 64bit integers are the dominant number type). Do you disagree?
Also, as far as the super-flexible lua tables are concerned, I'm pretty sure LuaJIT goes through some amount of trouble to specialize various common use cases of tables, e.g. as arrays without holes, and surprise, so does V8 (https://v8.dev/blog/fast-properties#elements-or-array-indexe...). I don't think you'd find something equivalent in a high performance scheme implementation.
> but this doesn't touch the surface of the issues that make JS hard to optimise, such as the fact that your, say, memoisation of an object property or method may be broken by an `eval` call of an arbitrary runtime value somewhere else in the code (which, due to asynchronicity, could take place at more or less any time from the point of view of your given 'peephole').
Eval belongs to a core set of features that basically all popular dynamic languages share that presents headaches for high performance implementations. How is Javascript's eval particularly problematic in this regard, and specifically much more so than Lua's loadstring/load?
More generally what do you think makes (pre-ES6) javascript significantly harder to optimize than lua 5.0?
It's my understanding that LuaJIT still beats V8 Javascript by a considerable margin. And I would say that Lua and Javascript are roughly comparable languages so it's an apples-to-apples comparison.
> V8, which is perhaps the world's most highly optimised interpreter (/JIT compiler/runner)
I don't think this is true. It's very popular but LuaJIT is still more performant.
Sorry, you're the second person to say this, so clearly I didn't phrase my comment very well. I meant that it's the interpreter which has undergone the most optimisation, or alternatively which has had the most brainpower dedicated to optimising it.
I don't mean that the resulting interpreter is the most performant interpreter in some given benchmark. Like I said in my other reply, I don't think that can possibly be a meaningful benchmark, given different languages have different grammars and semantics which have a huge effect on the possibility of writing a performant interpreter (see: Python).
I can see that you preempt this point in your comment, but, I mean, what on earth does "roughly comparable languages" mean? Comparable how? Even superficially similar languages can have semantic differences which have a deep impact on optimisability. (For instance Python's C ABI, invisible to most end users, which notoriously limits interpreter implementations. Or its bytecode dispatch. Or its attribute dynamism, precluding memoisation of lookups. The possibility of `eval` is an enormous one too, which also applies to JS.)
JavaScript itself is very powerful in its own domain (the DOM interactions in the browser), but most of the bloat comes from developers wanting to write JS in a different way than originally designed. 20 years of JS frameworks and at the end of the day all they do is arrange HTML boxes through slightly different code.
But why so many frameworks for the exact same output? It usually comes down to code management and making the code writing closer to the mental models of the problem they intend to solve.
I find it very unproductive to try to force JS to fit into every domain possible because there are already very mature languages that were designed with that domain in mind. That said, JavaScript has the power of being able to run everywhere, which gives it a unique advantage in actually learning and running the code written in JS.
I don't have a conclusion for this. I know JS and it feels great to be able to run it everywhere; especially quick and dirty solutions are almost as nice as working with PHP. However, it easily frustrates me when I try to do something better than quick and dirty and start appreciating the domain-specific toolsets. I guess the moral of the story is that JS is great, but let's not try to solve every problem with it.
> JavaScript itself is very powerful in its own domain (the DOM interactions in the browser)
Imo, most of this benefit comes from being effectively the only way to interact with the DOM up until WASM (and I think I read that WASM still calls out to JS for DOM manipulation). If Python had a way to manipulate the DOM via shared libraries or something, I'd wager there would be equally powerful Python libraries (ignoring security concerns).
Or phrased differently, I don't think anything about the design of Javascript is uniquely well suited to manipulating DOMs, it just has a monopoly on manipulating the DOM.
> most of the bloat comes from developers wanting to write JS in different way than originally designed
I think a big part of this is that Javascript has become a much more integral part of websites than it was originally intended to be. My impression is that it was designed to add a little bit of interactivity, not for people to ship a nearly blank HTML file and then fill in the entire page using Javascript.
That's not to say it's a bad idea, I just don't think it was designed for that and we have some warts and a lot of changes to allow us to implement that modern paradigm. I strongly suspect some form of parallelism would have been included if we had known what modern frameworks would look like.
I don't consider myself very opinionated on tech. But if there is one thing I am adamant about, it's that strongly typed languages are a must in any project that's meant to grow and last. They're a must for exploring, understanding, and refactoring the codebase while maintaining your sanity. The tooling for dynamic languages is just not there yet.
So it's not a rift as much as the fact that one camp is just plain wrong.
I specified what I am talking about. Weekend projects are not even a contention point. People code them in Brainfuck and we all have a laugh about it. It's the stuff that ends up at the job that frustrates us, where you end up assigned to 100k (or even just 10k) lines of javascript inferno hell. Because some dev thought he could develop "faster" in Javascript.
This is wrong; tests and type systems have different purposes. One helps prevent logical bugs and regressions in your code, while the other enforces the correct use of code and data structures. They overlap a bit, but all the unit tests in the world can’t give you the guarantees that a type system does. And that’s without even mentioning the benefits to discoverability and documentation that a static type system gives you.
> This is wrong; tests and type systems have different purposes.
For decades, companies have prospered without a strong type system, using tests for type assurance ad hoc, usually when it's worrisome or has been problematic. You can say it's wrong (conceptually, theoretically, etc.) and yet people continue to do it. That's interesting to think about.
> The side that insists on using strongly typed languages for a weekend throwaway project?
yes? it's gonna take me 30min in a dynamic language because i keep running into "undefined" errors, or 20min in a strongly typed language because if the IDE is happy, i know it will work.
the advantage of strongly typed languages is linear: from something that takes 10 minutes to something that takes 10 years, coding in a strongly typed language is always X% faster.
put the other way: why would I want to code in a language that allows me to write impossible code? what is the point?
the only selling point of python is the large corpus of third party packages (numpy etc), and that the syntax "looks simple" and is thus beginner-friendly.
there is literally no other reason to use it apart from the above two reasons.
And don't forget the 15mins you spent choosing how to handle multiple types and future proofing by adding IOC and a DI framework that works like magic. Oh and the myriad activations and config that library devs added to their frameworks using a Fluent syntax because it looks better. No thanks.
I'll politely disagree with you and point to the many, many, many unicorns that were brought into existence with dynamic languages. And places such as Shopify, which are adding typing after the fact to previously untyped languages.
Every single bug that has ever existed in static languages passed the type-checker. There are benefits in types to be sure but cornering the market on "growing and lasting" is definitely not one of them. If anything I'd say types let you refactor quicker - dynamic languages let you launch faster.
I should also note that I don't mean dependently typed languages, which I haven't shipped anything with professionally but am extremely curious about.
So you launch and then you add (dependent) types. That works and is quite a good way imho: when you start writing you miss a lot of details, so you want to freely experiment, and when parts get clearer and become stable you can rewrite, refactor, and add types and proofs.
My 2c. Typed languages justify themselves being typed by virtue of the complexity they themselves impose.
We say they help us with solving a complex problem but I'd say they are the ones that make the problem more complicated than it needs to be.
Yes they have their uses, but those are few and far between. The rest is just devs solving how to do what they want whilst keeping the compiler happy with whatever IBaseAbstract virtual template method thingamabob the compiler needs.
Agreed. Javascript is increasingly something that is optional. You can use it but you no longer have to use it. And even if you are using it, you are more than likely transpiling it. Browser side Javascript is just a compiler target at this point. There are a minority of people not doing any transpilation of course. But at this point that really is a minority (and by no small margin I'm guessing). Nothing against that, but "native" javascript just stopped being a mainstream thing quite a few years ago.
That doesn't mean Javascript will go away but it does mean it now has to compete on merit rather than relying on its status as the only thing that you can actually use in a browser. Many projects have already shifted to using Typescript for example. And from there to other transpiled languages is not that big of a leap. We use Kotlin-js for example. It's great. Other people use clojure, vue, elm or other languages that basically aren't javascript but that just happen to transpile to it.
Long term WASM is a more efficient compiler target. Smaller, faster, easier to deal with for compiler builders, etc. The only remaining reasons to target browser js with your compiler, is that it still has better bindings to the browser APIs. This is something that is being worked on and of course that is more than offset by being able to compile existing libraries to WASM.
WASM has a few other things as well that are still being worked on (threading, memory management/garbage collections, etc.). But long term, anything that javascript can do in a browser would be something you could do from a WASM compiled program as well. Once that works well enough, a lot more languages will start targeting browser applications just because they can. And a lot of developers that don't necessarily care about downgrading to Javascript might have a go at doing some applications that run in a browser. I predict a little renaissance in frameworks and tooling for this.
And of course with WebGL and Canvas, using the DOM and CSS is entirely optional. For example Figma has a nice UI that looks like it would be hard to match with anything DOM based. They have a bit of react on the side for the bits that don't really matter but most of the app is C++. People also run games, virtual machines, etc. in browsers. There might still be valid reasons to use DOM + CSS like we have been for the last two decades. But it won't be the only choice for doing UIs. And being limited by it like we have been is increasingly no longer necessary either.
> Browser side Javascript is just a compiler target at this point
you're a decade late to this observation. CoffeeScript came out around 2009. The world has largely been transpiling to ES5 for at least eight years now.
The fact of the matter is, JavaScript post-ES6 is a solid language. Many of those people that were using ClojureScript/CoffeeScript/etc. have moved back to JS.
2007 JavaScript has almost nothing in common with 2022 JavaScript. I think this needs shouting from the moon, based on the comments I keep seeing on HN. We used to have to use Firebug. On IE we were totally blind. JavaScript was designed to crash silently while still allowing a web page to function. Today, JS is a critical part of most pages and has sophisticated tooling.
> There might still be valid reasons to use DOM + CSS like we have been for the last two decades.
If you care at all about accessibility and general browser standards, then you're stuck with the DOM. You have to reinvent the world each time you decide to make the canvas your entire UI. It's hard enough getting browser history and URLs working and creating the illusion of "pages" in a SPA. I can't imagine the pain in the ass this would be in a WASM blob.
> But it won't be the only choice for doing UIs.
It was never the only choice. Not even in 1996. Macromedia Flash, Java applets, etc. For niche applications, the canvas is great. But the majority of the world will continue using the DOM.
Accessible UIs are possible outside the browser as well. Lots of native applications exist that are accessible by e.g. blind people. Actually a lot of that stuff existed before the web was even a thing. All that stuff can now be compiled to run in a browser.
The majority will continue to follow the easiest path. Which is not necessarily JS+DOM+CSS anymore. And contrary to your point, there always was a lot of friction with flash and Java or other plugins like silverlight. You had to install them, you had to keep them updated, they had security and performance issues, etc.
Design-by-committee browser standards are very limiting, however. They don't even come close to what was possible 15 years ago with flash and shockwave. That's why games don't have browser based UIs for example. They suck. It's just not good enough if you are trying to immerse the user in a futuristic and playful game world. That's also why mobile phones have native UIs. We can do better than dom+css. Way better. And now that we no longer have to stick to just css+dom in a browser, inevitably people are going to make use of that.
Mostly good. We use the Fritz2 framework. It's a reactive UI framework that has a nice component library and supports styled components using a Kotlin DSL for CSS. It heavily leans on things like Flows (from Kotlin's co-routines library). We have a multiplatform client library that uses kotlinx-serialization, ktor, kotlinx-date, and a few other things.
Interoperability with javascript is possible but a bit cumbersome due to the need to map types. A typescript mapping conversion exists but it does not really work that well. We use e.g. the maplibre framework to display maps and it has issues converting the type mappings for those. We ended up doing this manually for the bits of API we needed.
Main downsides are the relatively immature tooling (though that has rapidly improved over the last year) and slowish build speeds. The Kotlin compiler has some performance issues.
I'd say obfuscated JavaScript is more obfuscated than stripped Wasm. In cases where you don't have meaningful symbol names either way, wasm's code flow is required to be significantly more structured than what you can get a JavaScript VM to accept and run. Those invariants that help the VM make sure the code is safe to run at close to native speed also help automated tooling deobfuscate it.
> What is happening here is a big rift in the programmers community between the "I'm productive in it so it is great" and the "I prefer to use strong typing and proofs to ensure it does not break at runtime".
I think the opposite, the rift has been healing. It seemed that it was pretty big in the 2010's (Python/Ruby/Perl vs Java/C#/C++) but these days we have TypeScript, Rust, Kotlin; Java and C# are getting better too (type inference), Python and Ruby are getting typechecking.
> these days we have TypeScript, Rust, Kotlin; Java and C# are getting better too (type inference), Python and Ruby are getting typechecking
everyone caves to the types.
i think the strongest holdout is probably the LISP/Clojure corner. they have had optional typing for a long time, but it is not used that much afaik
many have voiced their reasons for liking dynamic typing. i only have one: faster reload cycles while developing. the other pros of dynamic typing are all overshadowed by their inherent downsides.
It's a false dichotomy. If you defer type-checking to runtime, you have a traditional dynamically typed language. Haskell has this in the form of -fdefer-type-errors, but I'd like to see this feature given more emphasis and used in other languages.
I've not seen it work (not in Haskell, not in other strong typed langs). Java with IntelliJ does quite well, but then it is VM based and not very strongly typed. Kotlin has a much better type system, but then compile times start to become much more noticeable as well.
I've heard OCaml has quick reloads in some scenarios, but I never tried it.
You are right. I have no experience with Go, but that's a fast compiler for sure.
Though it misses out heavily on typing strength. If only it had Result/Either/Maybe types in the std lib from day 1, and proper sum types with some pattern matching switch statement, that would have made the language soooooo much better.
I find Elm also has good compile times. Maybe polymorphism and/or type-classes are what is hard to optimize.
In a way, but static languages have also heavily invested in ergonomics. Type inference is the best example I think. On the other hand, there's also other stuff like immutability that's getting popular in both camps. It seems like 10 years ago you had to choose between developing software fast and developing reliable software; these days it's easier to have both.
It seems like a silly point to make considering the actual content of the article.
It's talking about server-side rendering. While the title might be a little provocative, it's certainly talking about rendering UI (the front end) using whatever language you used on the back end, even if it was JS.
> the second group has managed to compile more-and-more of it's langs to JS and WASM
The second group is incentivized to do so, because they still have value to bring.
JavaScript 2022 has sucked all the air out of the room in terms of dynamic dispatch, untyped PLs. Compared to Python, Ruby, and Lua, it has better startup time and throughput, and at the end of seven years of pilfering every good idea any of those languages ever had* (and a few bad ones**), it is by now equally or more expressive by every metric. It's also the most popular PL in history, with a variety of maintained and opinionated libraries for every conceivable task.
Given that JavaScript is already the ultimate flavor of Objective-Blub, it makes no sense to write a JS-targeting transpiler for your favorite minor flavor of Blub. And since you're already targeting a browser, Lua's embeddability and Python's C FFI are no advantages. The only remaining language in this family with interesting features that JavaScript hasn't (and may never) co-opt is Racket, with its powerful hygienic macros.
To a lesser extent, this is also becoming true for the statically typed languages, where TypeScript*** offers a better developer experience than, and has made significant inroads against the faster but similarly expressive SML/OCaml/F#/Bucklescript/Reason/ReScript family (to say nothing of Java!).
On the other hand, the camp of "I prefer to use strong typing and proofs to ensure it does not break at runtime" still has concrete advantages to bring to the world of FE dev, namely:
*** While TypeScript is technically a compile-to-JS language, it is such a thin layer on top of JS that there is essentially no impedance mismatch between TS in an IDE and interactive debugging, which makes it unique within that category.
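To illustrate that "thin layer" point, the compiled output is essentially the source minus the annotations (shown from memory; the exact emit depends on tsc settings):

```ts
// TypeScript source: types are annotations only; there is no runtime layer on top.
function total(prices: number[], taxRate: number): number {
  return prices.reduce((sum, p) => sum + p * (1 + taxRate), 0);
}

// Roughly what tsc emits for a modern target -- the same code with types erased --
// which is why stepping through it in a debugger feels like debugging the source:
//
// function total(prices, taxRate) {
//   return prices.reduce((sum, p) => sum + p * (1 + taxRate), 0);
// }
```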
Isn't front end software almost defined by latency sensitivity, though? It seems that after logical correctness, there's nothing more important for user experience than bounded and low latency.
120Hz+ screens are becoming common place. For the first time in decades, this guidance is no longer true -- it's reasonable to design for a frame or so of fixed latency, but to avoid jitter the latency should really be bounded within 8 ms these days.
> "I prefer to use strong typing and proofs to ensure it does not break at runtime"
This has not been my experience with C++ (segfaults etc.) or Java (runtime introspection and exceptions). In ye olde languages compilation seems to be mostly (more than 50%) motivated by performance rather than correctness. Compiled languages have been doing steadily more compile-time work and improving ergonomics lately.
There are some programmers who don't really care what tools they use as long as they feel productive. They shy away from things like types because it feels like it slows them down.
Then there are programmers who care a lot about tools because it feels like the wrong tools hold them back and cause undue stress in the long-term.
Hot take: The first type are less mature programmers :)
> There are some programmers who don't really care what tools they use as long as they feel productive. They shy away from things like types because it feels like it slows them down.
Right. The group that came up with the concept of a REPL and live in Emacs doesn't care about tools.
I do think some people feel productive writing more code, while others feel productive by finding ways to write less code (with the compiler mostly inferring and generating the error-prone and uncreative work).
* smart front end, dumb back end. This is the SPA model
* dumb front end, smart back end. This is the old school model
* smart front end, smart back end. This is the new school model
If you can get away with the SPA model, by all means do it. Liveview is not going to challenge that. What it is doing is giving new life to the old school model, so you can avoid the complexity of having both a smart front end and a smart back end for a large percentage of applications. A smart/smart system is an asymmetric distributed system, a beast to handle for gurus and novices and everyone in between.
It uses C++ to program a back-end application kind of like you would with Qt, and then it does all the required Javascript/HTML to render your app in the browser. It is kind of like Qt for the web (painting with a very, very broad brush).
I have also tried wasm with various Rust frameworks (Seed and Yew).
For my latest project (triviarex.com), though, I ended up abandoning those in favor of React/Javascript, albeit with a non-traditional architecture.
The downside of those frameworks for me was tooling, turnaround time, and integration. React has great tooling, and it is easy to do live coding. In addition, there are a lot of pre-built components for Javascript frameworks and a ton of documentation.
While there can be live coding with the backend, I guess because of my background, I like to use strongly typed languages in the backend to help catch logical errors earlier, and that requires a compilation step.
So this is the architecture pattern that I am using.
My backend is written in Rust with Actix, and each session is represented by an Actor. The React front-end establishes a web sockets connection to the actor and sends commands to the actor, which are then parsed by the backend using Serde JSON and handled using pattern matching. All the state and state transitions are handled on the backend, which then sends the frontend a JSON serialization of a Rust enum that describes what the state is and what data is needed to render. The front-end basically has a series of if-else statements that match against the state and render it. Most logical processing lives on the backend, and most of the buttons simply send a web socket message to the backend.
For me, this is the best of both worlds. I get the strong typing and correctness of Rust in the backend to manage the complexities of state management, and I get the flexibility and live coding of Javascript and React on the front-end to quickly and interactively develop the UI. Many times, I will be testing and see that the formatting or placement looks off, and I just quickly change the html/css/javascript and have it instantly appear.
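Roughly, the frontend half of that pattern looks like this (a simplified sketch with made-up state names and a placeholder URL, not my actual code):

```ts
// Sketch of the frontend side: the backend owns all state transitions and
// pushes a tagged state object; the client only branches on the tag and renders.
type SessionState =
  | { state: "lobby"; players: string[] }
  | { state: "question"; prompt: string; choices: string[] }
  | { state: "results"; scores: Record<string, number> };

// Placeholder render functions standing in for the real React components.
const renderLobby = (players: string[]) => console.log("lobby", players);
const renderQuestion = (prompt: string, choices: string[]) => console.log(prompt, choices);
const renderResults = (scores: Record<string, number>) => console.log(scores);

const ws = new WebSocket("wss://example.invalid/session"); // placeholder endpoint

ws.onmessage = (e) => {
  const s: SessionState = JSON.parse(e.data); // serde-serialized enum from the backend
  if (s.state === "lobby") renderLobby(s.players);
  else if (s.state === "question") renderQuestion(s.prompt, s.choices);
  else renderResults(s.scores);
};

// Buttons don't mutate local state; they just forward a command to the backend actor.
function answer(choice: string): void {
  ws.send(JSON.stringify({ cmd: "answer", choice }));
}
```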
After reading the article, Clojure would probably be better suited to be mentioned than ClojureScript. The point of it all looked to be speaking to the new paradigm that Phoenix LiveView brings, which is server side rendering of subsections of a page. ClojureScript, while being Clojure in a Javascript uniform, doesn't look to be used in that manner.
It's all good. Everything being said, I appreciate efforts like ClojureScript. There is just so much power in being able to use the same language and share the same models on the front and back end, but not all of us want to use JavaScript/Typescript.
It's almost like what some of us did 15 years ago, use just enough JS to load things into the DOM fetched from the backend via XMLHttpRequest, is news now. Having a Websocket open to the backend and using a front end that's aware of that instead of using HTTPS seems to be the major difference here. That difference is more about WebSockets existing now than about the choice of backend language or the server-rendered, frontend-refreshed display model.
We wrote a framework 15 years ago (almost exactly) that took rendered snippets from the servers and plugged them into the DOM; https://flexlists.com is 15 years old and uses that. It's really fast and was far easier to build with than all the manual jquery stuff. Fun times :) I should rewrite it in Liveview, seems a good fit.
The biggest difference (at least for Phoenix LiveView) is having a stateful backend represent the UI and we do away entirely with HTTP APIs/serializers/resolvers. So you get both interaction and push updates from the server at any time, without HTTP glue layers. And it actually scales. So think React's functional-reactive-templates on the server, pushing minimal diffs over the wire better than if you'd carefully written an efficient JSON API or GraphQL endpoint. Elixir is also a distributed programming language so you also get things for free like sending an update to everyone's UI across the entire cluster.
I think you're the only one to mention JSON or serialization at all. I do believe I already said websockets were the innovation, but they really literally did not exist 15 years ago for us to use. Elixir being distributed is nice. You still need the logic to get the change into everyone's session. You're just not using an external DB, cache layer, or work queue to do it.
It's amazing, really, how people feel comfortable condescending to others who explain how we did things before the current technologies existed as if we could never grasp how the new technologies improve things. Yes, this is nice, but it's evolutionary, not revolutionary.
I do dearly hope Blazor works out - I would be so happy to never have to deal with the Javascript/Typescript tooling hell ever again - but I'm not holding my breath. Political shifts inside Microsoft could leave it dead and unsupported like Silverlight tomorrow...
Compared to the Blazor tooling, the Typescript tooling is miles ahead imo. We did a Blazor app during a hackathon and I was pretty frustrated with how inconsistent the build errors were with blazor components.
What makes me most skeptical about Blazor, though, is that it's shifting a huge burden to the client just to make developers happy. Even if the runtime is stripped down and compiled to wasm, it seems kinda wild to send a whole runtime just so you can run C# code, especially when there are alternatives like Rust that require no runtime.
What's funnier too, is that it's been possible to write F# on the frontend for a while now, compiling down to javascript (like clojurescript). It makes me wonder why this approach was never done with C# too, even if there are pitfalls in compiling to javascript.
I'm already seeing a lot of enterprise companies gearing-up to go all-in on Blazor. Having a single language (C#) for web, mobile and all other back-end services must be huge draw for them.
> What's funnier too, is that it's been possible to write F# on the frontend for a while now, compiling down to javascript (like clojurescript). It makes me wonder why this approach was never done with C# too, even if there are pitfalls in compiling to javascript.
There have been several initiatives which allowed you to do that. None of them really successful. I've worked with one specifically in the past (can't remember the project's name though), which even had C# types/bindings for Knockout.js.
There were quite a few independent efforts to bring C#/.NET to the web the proper way (JSIL, SharpKit, WootzJz) but Microsoft was never interested in them and for some reason chose to invest in the worst and most wasteful technical solution.
I've seen the cycle too many times over the past decade or so. It's really bad with the various desktop UI frameworks, but it happens with most everything they put out. If something doesn't get immediate traction, and doesn't have strong champions inside of Microsoft, things all too often wither on the vine and go by the board. Big bang initial rollout coinciding with Build, showing off some fancy capabilities that never quite actually are realized in the version released to general availability.
Experience: many of us have been on the PC platform since the MS-DOS days, and have seen how Microsoft politics play out, killing cool technologies because of politics.
Even if it doesn't work out because of Microsoft politics, I think using one language for both will be the future. In my opinion, every language will have something like that.
> Political shifts inside Microsoft could leave it dead and unsupported like Silverlight tomorrow...
I honestly don't see this happening. Beyond the open source arguments, blazor server-side mode is uniquely compelling specifically for organizations like Microsoft where there are thousands (tens of thousands?) of internal business systems that need some way to interact with, but don't necessarily need to serve 4k video traffic to the entire planet.
What does that bring to the table though? You'll still need to write html, css and javascript (or some dsl).
As a hobbyist Python dev who doesn't want to deal with frontend bs more than absolutely needed, I found my perfect stack - fastapi, svelte and tailwindcss.
With a similar outlook to yours, I landed on django, htmx and tailwindcss.
I had started down the path of fastapi, svelte and tailwindcss, but when I figured out that htmx let me use server-rendered templates, getting rid of the api and the packaging toolchain felt more ergonomic.
And if you're doing server-rendered things instead of APIs, django has a few more batteries included. (But I really like fastapi for APIs.)
I couldn’t care less about the actual language. What I do care about is not having to maintain two separate tool chains and package managers, one for backend and one for front end.
Javascript on the server side has gone from something people advised never to use in production to something so ubiquitous that people are now trying to replace it with something newer, stricter, and less freewheeling... I am a fan of modern javascript; I don't complain or get mad about the fast rate of change and improvement in the language. I kept up with all the latest javascript tools and can now build full-stack applications on mobile and web with very similar codebases, faster than ever.
I am surprised that there are no browsers that can support other languages. My ideal architecture is to have a browser where you can select your front-end language interpreter, as in a Chromium + V8 + CPython + Whatever front-end processor you might want (Brython[0] achieves this, but transpiling to JavaScript).
What doesn't make sense to me is that JavaScript has genuinely been the only language for the front-end, and it has been a monopoly for many years. Of course, there are other great languages like TypeScript, but these end up transpiled to JavaScript anyway, which to me feels like building your skyscraper on dunes. Not to hate on JavaScript, but JavaScript has grown too quirky for my tastes and that's why I've largely stayed away from front-end development.
There are efforts to fix it, with the new ECMA standards, but I don't feel it's going anywhere unless breaking changes are introduced to modernize the language. The fact that you have to "patch" your scripts with 'strict mode' at the top of the file says a lot about how defensive you have to be when programming it.
WASM is a solution to this, except you're not supposed to write WASM yourself. I want web development to be more straightforward, like the old days, where you didn't have to "compile" or "package" anything, and you just did your thing, and that worked.
In a period where most languages are experiencing a "rebirth" as a well thought out modern language, Dart feels like the "before" waiting for an "after".
I'd be deeply disappointed if we had progressed from Js to Dart, and it's why I'm not a fan of Flutter
Groovy was never really in the building and is just awful.
Scala is like a reimagining of the building that happened to find the old foundation useful.
Clojure is like a skybridge from the Lisp skyscraper across the way.
Kotlin is the only JVM language that's gained traction while aiming to be Java but better, the rest just happen to use the JVM, but calling them "evolutions" of Java would be deeply misleading.
It gained traction on Android thanks to Google pushing it over their decrepit Java implementation, while pretending anything beyond Java 8 never happened.
It will follow the path of every other JVM guest language.
I will care when the JVM gets a single line of Kotlin commit.
Google should just buy JetBrains and rename ART into KVM.
This is being too charitable to Java imo, after all we had stuff like Retrolambda at the time.
Kotlin exists because Java moved like molasses for years
If Java had moved like C# it'd be a much nicer language today, and Kotlin wouldn't be needed.
Even comparing Kotlin 1.0 to today's Java nearly 6 years later would favor Kotlin for ergonomics with stdlib, nullability, reified generics, and syntax
Retrolambda, yet another kludge, instead of invokedynamic calls.
Kotlin exists because JetBrains wants your money, that is all.
> The next thing is also fairly straightforward: we expect Kotlin to drive the sales of IntelliJ IDEA. We’re working on a new language, but we do not plan to replace the entire ecosystem of libraries that have been built for the JVM. So you’re likely to keep using Spring and Hibernate, or other similar frameworks, in your projects built with Kotlin. And while the development tools for Kotlin itself are going to be free and open-source, the support for the enterprise development frameworks and tools will remain part of IntelliJ IDEA Ultimate, the commercial version of the IDE. And of course the framework support will be fully integrated with Kotlin.
Outside Android, I will care about Kotlin when KVM becomes an unavoidable reality; until then, it can party with Beanshell, jTCL, Jython, JRuby, Scala, Clojure, Groovy, Frege, and plenty of other ones.
To elaborate a bit more, and in my opinion, a web browser provides an essential service, much like the water from the mains or the electricity that powers your home.
Without the web browser engine, no matter which HTML, CSS and JS you write, the fact is it cannot run without it.
What would be great to have is choosing which programming language runs in your browser. It'd be great for me to move away from JS engines like V8 or SpiderMonkey, and be able to run CPython (directly, not transpiled to JS) as my interpreted language. The stack could be HTML, CSS and Python, for example. WASM came to solve part of this, fortunately.
We are missing an excellent opportunity for more powerful web technologies if we don't embrace WASM.
Agreed, no reason <script type="text/javascript"></script> couldn't be <script type="text/python"></script>.
And then all the native APIs would be exposed in a lowest common denominator kind of way.
That's actually not what the article is about. It's more about the backend server managing all or most of the application state, and sending slices of JSON or even fully-rendered HTML to the browser, which is responsible only for swapping in these new slices of UI with a thin layer of JavaScript.
In other words, it's about creating a SPA-like experience with little or no custom-written JavaScript.
We use Liveview exclusively for our app (https://www.grilla.gg/) and I'm happy every day we can build stuff like this with very minimal javascript. Happy to answer any questions about what it's like to run a startup using liveview.
When webapp logic happens on the client side, that results in slow applications (n.b. for normal users on low-end hardware, not developers on high-end hardware) due to CPU costs.
When webapp logic happens on the server side, that results in slow applications (n.b. for users on unreliable residential, mobile cellular, or rural connections, not those living with fiber) due to network latency.
Doesn't Liveview (and related techs) simply make the slowness a stronger function of internet quality (as opposed to available processing power), instead of actually making the application less dependent on either of those factors for performance?
> When webapp logic happens on the client side, that results in slow applications (n.b. for normal users on low-end hardware, not developers on high-end hardware) due to CPU costs.
It's not CPU. It's bad software written by people who are poorly trained and have no leadership.
Do you really need 10mb of JavaScript and 10 seconds of load time to dynamically put a few lines of text on the screen? Yes, absolutely you do, because people don't know how to do it efficiently. This is a people problem, and not a technology problem. Hardware does not solve that problem. The actual technology is actually insane fast. As a counterpoint my personal app loads an OS GUI with state restoration using 2mb of JS code (unminified) in about 150ms.
> because people don't know how to do it efficiently
Of course we do.
We just choose not to bother because delivering features efficiently is often more valuable than shaving off a few seconds of download time. Especially when it's typically once off and then cached.
Apparently not. A few seconds is a really big deal, but that is just download time while you are also clearly not accounting for execution time. Caching code will not save on execution time.
I'd love to see this approach make more headway in the Django community. Based on the last DjangoCon it seems like the community is coalescing around HTMX.
This tool does play very nicely with Django's templating engine; you can just have HTMX re-render a particular template block on the server, and send down that updated block. The migration path is quite clean; you just wrap your "HTMX-updated" template block in a `hx-post` div.
Having not gone too deep on HTMX, I'm interested in folks' thoughts on where it's lacking vs. LiveView and Hotwire. One area I can see is performance: Elixir is going to be faster than Django, which matters if you're trying to handle high session counts over websockets. But the impression I get is that HTMX is a bit more lightweight, so I'm wondering if there are use cases that can't be met with it vs. LiveView.
Other Django libraries that haven't quite seen as much uptake:
I don't think this is really about programming languages. The world of software (and specifically everything network based) is oscillating between fat servers and fat client every few years or so. To me the innovation seems to be orthogonal to this cycle and may actually be accelerated by this continuous change in perspective. Which is good!
for liveview, it's not at all about "fat servers"; the overhead of maintaining a socket over erlang is not that much (maybe a few K -- the base erlang process overhead is ~400 words), so you're going to almost certainly be doing better than, say ruby (via hotwire) which isn't really designed from the ground up to hold onto the websocket. By persisting the connection with a stateful system, you're going to be avoiding a whole ton of computation that just gets thrown away with stateless HTTP requests. So it's probably in the end a lighter server than most web backends.
This is the irony: Erlang/Elixir, despite being a functional language with limited and highly restricted access to stateful side effects, is really FANTASTIC at safely holding onto state and persisting it for the user, and at making that model digestible for the developer.
if the hardware landscape settles for a while (in terms of the capabilities of devices and networks - which is not a crazy precondition given Moore's law has run its course) then it may be that these server/client pendulum oscillations get damped around an "optimum" of sorts.
An optimum defined both in terms of what it enables users to do with software but also how easy it is for developers to deliver it. Since simple is better than complex it feels that any architecture that does not require two distinct ecosystems might have an advantage (all else being equal).
Can anyone clarify what this quote refers to: "what really sets Erlang apart, for McCord, is its ability to preschedule processes so that the CPU doesn't get hung up on any single thread."
What's this concept called? Is it simply a matter of setting a priority level on a certain task, that way the scheduler can make sure it doesn't block?
Preemptive scheduling, as opposed to cooperative scheduling.
Basically: the scheduler can interrupt an Erlang thread at any time, instead of depending on threads to cooperate with the scheduler to see if they should stop. In Go, for example, goroutines will only check with the scheduler at function calls, selects, and a few other things like that.
I believe the BEAM only interrupts after a "reduction" limit is reached, and that happens when function calls end. The big difference is that everything is a function call, so there's frequent opportunity to be interrupted. There's no function-less for-loop iteration, for example. I think having that would be a big sticking point for scheduling.
There's a really neat demo of a machine at 100% CPU that's still responsive because of Erlang's preemptive scheduling. There's a ton of other goodies in that video if you're interested.
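For contrast, here's the cooperative side of the coin in Node (illustrative only): nothing can interrupt a synchronous loop, which is exactly the situation the BEAM's reduction counting avoids.

```ts
// Cooperative scheduling in Node: the event loop only runs other work when
// the current callback returns, so a synchronous busy loop starves timers and IO.
setInterval(() => console.log("tick", new Date().toISOString()), 100);

function busyFor(ms: number): void {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // never yields; on the BEAM this work would be preempted after N reductions
  }
}

setTimeout(() => busyFor(2000), 500); // the ticks go silent for ~2 seconds
```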
I keep waiting for something better than js/ts to do frontends so I don't have to learn it. I'm still waiting... I thought wasm would have gotten further along by now.
Pretty much any wasm language will require a JS runtime somewhere if you’re going to do substantial things with it, so might as well learn it either way.
I’m not an expert, but DOM APIs seem particularly suited to JavaScript. There is probably a way to handle it via Wasm, but this is a bit like avoiding learning C when writing native code.
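For example, the usual glue (sketched from memory; "app.wasm" and the import names are made up) has the wasm module call back into JS for any DOM touch:

```ts
// Sketch of typical wasm<->DOM glue: the module has no DOM access of its own,
// so it imports a JS function and passes a pointer/length into linear memory.
const memory = new WebAssembly.Memory({ initial: 1 });

const importObject = {
  env: {
    memory,
    // Called from inside the wasm module whenever it wants to update the page.
    set_text: (ptr: number, len: number) => {
      const bytes = new Uint8Array(memory.buffer, ptr, len);
      document.querySelector("#out")!.textContent = new TextDecoder().decode(bytes);
    },
  },
};

// "app.wasm" is a placeholder; a real module would come from your toolchain.
const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"), importObject);
(instance.exports.run as () => void)(); // the compiled code runs, but JS still does the DOM work
```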
Ha. WASM with QtCreator and Qt w/ C++ would have put a decade of web developers out of a job had it launched in 2010. Most amusing about front end is that stuff like flexbox, nice animations and GPU-accelerated graphics based on web tech get praised and hailed as innovation yet we had that stuff on the desktop for years and years already way before the web bros rediscovered everything 2010-2020.
C++ is not nearly as productive a language for most programmers in the world as JavaScript. Qt is not a panacea, even for desktop apps. The condescending tone to web developers is very strange and not very nice.
My point is: all achievements in web dev modularity and features have been present in Visual Basic, Delphi, TurboC++, Java and Qt since the early 2000s. Fully hardware accelerated UIs to boot. Well, we decided to move it all into the browser sandbox and start from scratch. Tools like QtCreator or the VisualBasic form editor got replaced by an expensive toolchain ranging from Adobe products to 1000 npm packages.
Funny to see these new incarnations of the old ASP.NET UpdatePanel. I despised WebForms back in the day, but always had a soft spot for the UpdatePanel, seemed like a great idea. Especially when the alternative was manually building and managing UI on the client using ASP.NET AJAX.
I am not a fan of these things. Much like big JavaScript frameworks whenever you need to do something outside of the mechanisms they provide things become very difficult very quickly and you still need to use JavaScript, HTML and CSS anyway as others have pointed out.
That's why I like StimulusReflex (and Hotwire). Stimulus offers a very nice pattern for adding the small bits of additional JS you need without ad-hoc JS spaghetti, while letting you push the vast majority of the rest to the backend.
A small bit of additional JS defeats the point of doing it at all. The moment you end up having to go outside one of these things you end up with one problem or another. The fact of the matter is that you will always have to deal with HTML, CSS and JS somewhere; you can try to abstract it out, but it always breaks down near the edges.
Also, as a bit of an aside (and it is a bit of a moan), coming from someone that can write everything from scratch and doesn't need a framework: all of these things are horrible to work with (I've had to work with a bit of Blazor), and they make it incredibly difficult to actually find out what is going on (especially when stuff doesn't work as advertised) because they just obfuscate what is going on.
Every job requires you to know some horrendous framework these days that has about 10 layers of Rube Goldberg madness in there for one reason or another. People will be surprised what can be achieved with `document.createElement()`, a few classes, and a pub/sub class.
In an abstract sense, yes, but we don't actually keep a vdom in Phoenix LiveView. You might want to reconsider this hot take. We actually send less data on the wire than the best hand-written JSON payload you could come up with in many cases. If you're curious how that works, I have a pretty detailed write up on it that may change your mind:
https://fly.io/blog/how-we-got-to-liveview/
I don't really get the terminology of "backend" and "frontend" languages. To me, JavaScript is just a dialect of C anyway and the browser is just an OS. It doesn't make any sense to specify any particular usage except to acknowledge that for frontend development, JavaScript is the most popular choice. And so what? Most frameworks (like Angular) are heavily-based in patterns that came from other languages.
Modern X applications render bitmaps which get shipped over the compositor and the X server to the graphics card driver. That is not a good model for the web, because it would mean the web server has to produce rendered bitmaps, specific for the fonts, window size, and display panel led configuration over the web. The browser cannot even select text or provide a search dialog. That is almost VNC.
But originally, X applications didn't render bitmaps but submitted drawing calls. Sun used to use Postscript for that. So using that approach, it would be possible for the browser to select text, to scroll, to copy text, to search text etc. A clear improvement. But resizing the browser window would still need a complete re-transmission from the server.
But if the web server sends static html, css, and a fixed JS library, which is used to replace parts of the dom, the browser can do a lot more locally. And still the whole application logic resides on the server.
> because it would mean the web server has to produce rendered bitmaps, specific for the fonts, window size, and display panel led configuration over the web. The browser cannot even select text or provide a search dialog. That is almost VNC.
So not much different than "modern" <canvas> based apps.
The gist I was able to glean, specifically about Phoenix LiveView, is that all updates on the Frontend and Backend are pushed to the other side via websockets. For the Frontend, this means sending some JSON payload to the Backend. For the Backend this means either responding to a request or sending out a new update, both of which involve sending out a snippet of pre-rendered HTML to replace/update an existing element that was already on the page.
The benefit here is that we can get to a place that is almost a SPA without needing to do a whole lot of JavaScript for the Frontend. It also helps that Elixir is built on top of Erlang which gives it a boon to be able to handle a lot of users on one server.
There is also a bundle of clever implementation which means the templates are aware of what parts are static and what parts are dynamic. So they can send optimized changes, essentially small diffs, for the dynamic parts as they change. Often this means only sending the values for your text input field, not any of the markup for the field.
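A loose sketch of the client half of that idea (not LiveView's actual wire format or protocol, just the general shape, with placeholder names and URL):

```ts
// Illustrative only: a thin client that sends user events up as small JSON
// messages and swaps server-rendered fragments into the DOM as they arrive.
const socket = new WebSocket("wss://example.invalid/live"); // placeholder endpoint

socket.addEventListener("message", (event) => {
  // Assume the server sends { id, html }: which node to patch and its new markup.
  const { id, html } = JSON.parse(event.data) as { id: string; html: string };
  const target = document.getElementById(id);
  if (target) target.innerHTML = html; // the real thing sends finer-grained diffs
});

// Interactions become events, not HTTP requests; the server holds the state.
document.getElementById("add-to-cart")?.addEventListener("click", () => {
  socket.send(JSON.stringify({ event: "add_to_cart", sku: "demo-sku" }));
});
```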
When is the world of software going to wake up? All programming languages suck, in one way or another. All frontend/backend paradigms suck, in one way or another. This constant churn of new, new, new is a waste. It gets replaced every few years, meaning all the time, effort and money sucked into it is gone. And engineers re-learn the same lessons over, and over, and over again. All so software engineers can be happy, because they just like doing new things in newly complicated ways. It's like changing how we build houses every few years just for fun, with no benefit to the people eventually living in that home. We're definitely not saving them money or building them any faster.
After having been in this industry for 20+ years, I can pretty confidently say the world of software will never wake up. It's just been an endless cycle of adding abstractions, removing them again, using back end languages on the front end, using front end languages on the back end, using relational databases instead of a key value store, using key value stores instead of a relational database, unit tests over integration tests, integration tests over unit tests, silver bullet framework after silver bullet framework.
My feeling now is that in the greater scheme of things none of the technology choices really matter all that much, the main thing is if the software is written in a clear, documented and maintainable fashion. Unfortunately people will still throw that away and pointlessly rewrite it in a few years, so making the choice to contribute in that fashion is more a matter of professional pride than practical utility. Arguably it's better for your career to let go of that pride and just embrace the constant unnecessary rewrite process.
I do wonder if there are still some corners of software development which aren't like this. Perhaps in public universities or government?
This I think shows how "clear, documented, and maintainable" can mean different things to different people.
I specifically left Rails and Ruby because I failed to see any production Rails code actually meet any of those criteria. All of the metaprogramming, OOP, inheritance, and DSLs just made the code more confusing than it needs to be. Ruby is a cult of "the code documents itself", but code can never document itself, because good documentation includes the how and the why and some examples. Code is horrible at describing how and why it exists, and no, unit tests are often not sufficient examples. And then there's the issue of Rails apps taking way too long to boot up despite their only job being to serve webpages. Debugging serious issues is a pain when the Ruby code takes minutes to actually run, and having neither clear nor documented code doesn't help. It's always abstractions upon abstractions upon abstractions.
But some people like Ruby because it's a beautiful language and Rails gives them an opinionated structure, and maybe that matters more to them.
I'll take simple functions and primitive data structures with detailed comments any day over design patterns with a bunch of classes to describe abstract ideas that inherit from one another and fail to self-describe.
Unfortunately in practice, there is so much ruby/rails magic going on that most Rails projects end up in terrible shape.
It's kind of a catch-22, and tons of companies have overcome it, but Rails by far has the "least long-term maintainable" defaults. Good for quick prototypes/small teams but bad for large and scaling teams.
This was my experience with rails (to be fair, some years ago). Each new version reinvented some huge chunk of the previous version with magic all the way down. Basically why I stopped using it despite having invested lots of time.
I think it's a maturity thing. I've found myself going from wanting to use cool technology to build whatever to wanting to use boring technology to build cool software.
A few decades of programming has taught me that cool technology inevitably turns out to be janky and annoying and half-finished.
I agree it's a maturity thing, but I wouldn't say the cool technology 'inevitably turns out to be janky and annoying and half-finished.' I think it's more that the cool technology is developed for a certain use-case and, developers being the way we are, we pick up our new hammer and proceed to test if every object around us is a nail.
We also won't accept other people's test results. Sure, 80 other devs have said this isn't a nail, but are they SURE?
Maybe not inevitably, but the vast majority of yesteryear's cool technology is obsolete and forgotten today. The rest became mature "old" technology.
There was a time Java used to be cool new technology.
20+ years here too, and I agree to a large extent.
The conclusion I've come to is that we're making expensive sandcastles. I don't know when I start a project how big the sandcastle will need to get, or exactly what it will end up looking like. I also don't know when the tide is coming in. Some of them were pretty good, others were disasters, but they've all washed away now.
I do quite like building sandcastles though, so I don't worry too much about it.
I think no one is going to necessarily disagree with what you’re saying, but I’d also ask what is the point? It’s almost tautological.
Of course this is the case, so does that mean new things shouldn’t be made because it’s been pre-decided it would be a waste? I can’t say I agree with that, even though I agree with the premise that all PLs suck.
Maybe this stuff is just hard, and also maybe part of that is because as humans we’re pretty limited. So I think we just have to deal with it and try. I don’t love front end stuff, for example, and it kind of annoys me at times that everything gets reinvented over and over, to great fanfare no less. However at the end of the day, I realize it’s because we’re limited and one of those limitations is not being smart enough to do everything right the first time. That makes sense and isn’t like a knock on people, just the truth.
And these new frameworks and paradigms that keep getting made indicate to me that things are reaching some stability, but aren't there yet. Like, a lot of native stuff has been literally miles ahead for years, which is why you don't really see a ton of innovation in that space, aside from adding new frameworks to support new sensors. But you still see stuff getting made to support more privacy-aware stuff, which is a shame because it could have been that way from the beginning.
Anyway, I think it’s fine for both to be true: PLs suck because tech is hard and people are limited, and we keep trying new stuff because it’s just not there yet.
(Note: wrote this on my phone so maybe it rambles and is incoherent or has mistakes)
Maybe once we lift the abstractions from the clutches of the Von Neumann architecture and reintroduce spatial and sensory mechanisms to our programming environments? Especially with server-client applications, the underlying topologies are often ad-hoc and brittle, which contributes to the complexity in sneaky ways. Jaron Lanier, a pioneer of VR, has a concept of "phenotropic programming" which is almost like OOP taken to the extreme - objects can only sense and act on each other in a virtual world akin to the real one [0].
If you want to really fall down a rabbit hole, Alan Kay has been espousing similar themes for years. He of course was the driving force behind Smalltalk, which did things in the 70s that still seem hard to do. Look at the examples built by middle schoolers in [1] - super impressive stuff! I personally believe OOP got a really bad rap from the bastardizations of C++ and Java, and a principled reframing of it may be what takes us to the next level of programming.
It's better for new people joining; each cycle typically brings some improvement over the last.
It also equalizes the playing field every few years: you become an expert in the latest new thing, as there isn't an old guard. Some new web framework comes along, you can invest in that and become leet, whereas it's very difficult to start new in React today and achieve that.
This isn't to say past experience isn't valuable or that it doesn't help you adopt things more quickly, but it can make you cynical about the new shiny and cause you to sometimes miss out on the actual improvements, even if small.
1. Software is a VERY new discipline. It's also one of the most malleable. I doubt we'll ever stop seeing churn in this space.
2. We don't reinvent how we build houses each year. You can safely ignore all these newfangled web things if you want.
3. Churn is also domain specific. We've more or less stopped inventing drastically new APIs at the OS/system level.
4. Software for UIs has never been great. We are still learning how to build them. Most of the major players in UI software still exist: Windows, Java, macOS (Cocoa?), Qt, GTK, Enlightenment, and HTML/CSS/JavaScript.
Finally, software will continue to evolve as base layers add the ability for higher layers to do things differently. For example, as CPUs get more and more vector operations, that can dramatically change how we write code.
This churn is a positive, not a negative, for the industry.
This. So much this. Furthermore - does it all really matter that much when most of us are stuck with delivery managers screaming FEATURE, FEATURE, FEATURE in our faces, such that there is never enough time to use any toolset in an optimal way?
I don't think this is the case at all, maaaaybe it's a tad cynical in tone, but I think overall it's a pragmatic and realistic view of software engineering, ESPECIALLY web development.
I say that as someone who has been in the industry since just before the first .com bust and has seen a lot of this cycle.
No offense, but you have probably not been in software development for 25+ years.
I've been doing web dev for 30 years and it's kind of funny to see everyone inventing server-side rendering, plain old HTML and PHP again. It's a spiral, not a circle, and we're a bit wiser and more performant this time, but much of it has been done and gone in some way or another.
That would be impressive, since Mosaic is only 29 years old at this point :)
I don't think people are reinventing server-side rendering. Rediscovering it, perhaps. It has always been here, along with plain old HTML and PHP. The modern collection of JS-based toolboxes exists for a reason: all the things that you can't do (or can only do very clunkily) with server-side implementations. That hasn't changed.
What people wanted from a web page in 1994 is so utterly different than what many web pages are expected to do now. If it's a spiral, it's one that is spiralling out, not in, as the scope expands significantly.
> No offense, but you have probably not been in software development for 25+ years.
You're wrong about this.
You and I see things differently.
You see things as never being new and "plain old X and Y again".
I try to see things with fresh eyes, yet still with the benefit of my 25+ years in the industry. I try not to make assumptions that limit my thinking about something. Some might call it a beginner's mindset, and it has served me well.