I love modern frontend development. I can build apps that scale easily to hundreds of thousands of users. They are fast where they need to be fast, and building components means complexity lives only where it's needed. Static parts are rendered statically, dynamic parts are rendered dynamically. I can write all the code for the entire stack in JavaScript. The entire workflow is streamlined in a simple way (webpack really isn't that hard to grok). There are templates for every project imaginable. I don't need an entire VPS for my web apps, just a simple static hosting service.

My apps run on every system and any browser back to IE9 without any issues whatsoever. No complex build tools, linking process, or debugging of weird architectures needed. Compatibility issues are solved automatically in any modern build pipeline by adding vendor prefixes. New code is transpiled to old code for older browsers. Debugging modern front-end code is a breeze; there are an infinite amount of tools both for styling and the code itself. My modern frontend app takes seconds to build; Qt apps I built in the past needed up to twenty minutes just to check a simple change. No need for that with hot reloading.

My users are happy too: they don't have to download an app which might not work on their specific system. Linux, Windows and Mac users alike can use it on their computers, laptops, tablets and phones without any issue. They can use the default environment a browser provides for an even better user experience (zooming, screen readers, upscaling fonts, copying images and text, preferences for dark mode, theming their browsers, saving credentials and auto-filling them, sharing stuff, etc.).

Integrating stuff from other companies has never been easier: PayPal, Stripe and Apple products all provide a well-tested and documented library for every framework out there. There are open source packages for anything; it's a FOSS dream. Building prototypes for anything is insanely fast, thanks to modern debugging tools in Chrome and Firefox.
> build apps that scale easily to hundreds of thousands of users
I don't understand this.
Frontend dev is about writing portable software to run on as many runtimes as there are users. There is literally nothing to prevent any frontend software, good or bad, from "scaling", because scaling in terms of users is nonsensical for frontend dev (unless you consider browser compatibility to be scaling, a statement to which I'm orthogonal).
Note: meanwhile, backend dev is about writing a program that accepts as many users as possible, which makes front and back nicely yin-yang together. But maybe I'm overstating things here :shrug:
Doing most of the work on the client instead of the server can often make scaling much easier.
I read your comment as assuming that the division between the server's job and the client's job is fixed, but as the browser has become more powerful many things have moved to the client.
Doing the work on the client can sometimes be the right choice, but I'll share a converse point. I was tasked with implementing a bulk record upload feature. The UI was to accept a CSV file with up to 1 million records. The API provided by the backend team accepted a maximum of 1,000 records in JSON.
So now I have to convert CSV, for which there's no official specification, to JSON, and then make (up to) 1,000 HTTP requests via HTTP 1.1 -- and a different backend team owns rate limiting for all ingress. I ended up using a package called 'promise-limit' to ensure only N requests are open at a time; firing 1,000 simultaneous XMLHttpRequests is a bad idea. I ended up fielding bug reports concerning the translation to JSON for edge cases. And collating the 1,000 network responses to find any individual record that failed to persist was something we never got around to building. So users were uploading 1M records, had some proportion fail, and had zero visibility into what data made it into the system and what didn't.
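For context, the limiting looked roughly like the sketch below. This is a hand-rolled version of the idea rather than the actual promise-limit API, and the /api/records endpoint and batch shape are made up for illustration:

```js
// Minimal sketch of capping concurrent uploads (roughly what a package
// like promise-limit provides). Endpoint and data shapes are placeholders.
function createLimiter(max) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= max || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task()
      .then(resolve, reject)
      .finally(() => { active--; next(); });
  };
  return (task) => new Promise((resolve, reject) => {
    queue.push({ task, resolve, reject });
    next();
  });
}

const limit = createLimiter(5); // at most 5 requests in flight

// `batches` is assumed to be an array of arrays of up to 1,000 records each
const results = await Promise.allSettled(
  batches.map((batch) =>
    limit(() =>
      fetch('/api/records', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(batch),
      })
    )
  )
);
```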
Eventually, we gutted all of that and the backend team added a file upload endpoint. But explaining why this shouldn't be a front-end feature to folks who primarily or exclusively do backend work is tough: you have to communicate (1) all the pain points of dealing with the complexity on the front end and (2) the scalability issues. "Can't you just do it on the front end" is such a frequent retort I receive for things we truthfully should do on the backend.
The argument to make to the backend team is a bit nuanced but it’s this: Generally each PUT/POST sent to the backend should have atomic semantics. If there’s some user action (like uploading a CSV) that should entirely succeed or entirely fail, you want to send it in a single HTTP request. If it matters that it happens exactly once (like submitting a forum post), then adding a nonce of some sort is a good idea too - just in case the request succeeds but the response gets dropped and the user retries the action.
It’s often tempting to model URLs like database entries and send multiple requests to the server to model updating many records based on one action. That’s a mistake - have the client name the action, send one http request, and execute it in a single database transaction in the backend if you need to. This approach is way more reliable, debuggable and consistent. This sort of thing is absolutely the backend’s responsibility.
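In practice that looks something like the sketch below. The /api/bulk-import endpoint and the Idempotency-Key header name are assumptions for illustration, not a specific API:

```js
// Sketch: one atomic request for the whole user action, plus a nonce so
// a retry after a dropped response doesn't apply the action twice.
// Endpoint and header names are hypothetical.
const idempotencyKey = crypto.randomUUID();

const response = await fetch('/api/bulk-import', {
  method: 'POST',
  headers: {
    'Content-Type': 'text/csv',
    'Idempotency-Key': idempotencyKey,
  },
  body: csvFile, // the raw File object from <input type="file">
});

// The backend parses the CSV, runs one database transaction, and reports
// per-row failures in a single response instead of across 1,000 responses.
if (!response.ok) {
  // safe to retry with the same idempotencyKey
}
```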
If your backend team doesn't hear "I might have to make 1,000 requests" and immediately think "oh, that probably times out" and "oh, do we have to re-batch that data" and immediately make a file upload endpoint, they're committing malpractice. Good grief.
That's interesting. From my experience, the backend guys are usually more down-to-earth and often save the frontend guys from doing overcomplicated/bad stuff.
Not wanting to implement a file upload endpoint for that task sounds suspiciously like some unmotivated guy who doesn't give a sheit about the app.
> Doing most of the work on the client instead of the server can often make scaling much easier.
I've also found that it results in a non-responsive webpage. For instance, text taking multiple seconds to render in an input box (Slack, Facebook, amongst others), and times where the browser tab process ends up allocating a significant percentage of system resident memory and/or CPU resources, which makes the computer itself slow to respond.
I think GP meant that you don't have to send a server request to handle every user interaction. The server has to handle far fewer user requests, and only returns data (instead of generating the template as well).
> Partner meetings are illuminating. We get a strong sense for how bad site performance is going to be based on the percentage of engineering leads, PMs, and decision makers carrying high-end phones which they primarily use in urban areas.
This entire read seems to be that things are much better for YOU, the front-end developer.
And yes, that makes a difference, you can deliver more features, faster, more reliably.
But if those sites just bog down and take double-digit seconds to load, even when properly deployed on scalable delivery architectures, with fiber-optic speeds and CAD/gamer-level power machines, they are junk.
And I've increasingly seen exactly this junk over the last few years. Even (and often especially) the major sites are worse than ever. For example, on the above setup, I've reverted Gmail to the HTML-only version to get some semblance of performance.
Sure, some of this could be related to Firefox's work to isolate tabs and not reuse code across local containers and sessions, but expecting to get away with shipping steaming piles of cruft because you expect most of it to be pre-cached is no excuse.
Your site might have the look and features of a Ferrari, but if it has the weight of a loaded dump truck, it will still suck. If you are not testing and requiring good performance on a rural DSL line and mid-level laptop (or similar example of constrained performance), you are doing it wrong.
I can't agree more. I could cry. I've been on a crusade for over a year to scale back diddly JS frameworks adopted because of 'scale'. I've heard many bullshit excuses before, but sacrificing performance and reliability for being "scalable" is beyond mind-boggling. It gives me a panic attack when Medium articles are the chief architects and purveyors of software development truths. I live and work in developing countries and could never understand why sites kept getting slower around 2017, before I stumbled onto Medium with its React and K8s articles ad nauseam. I realised the junior and intermediate devs are reading these things as truth and applying them as gospel.
Yup, all these rapid-dev frameworks and tools may be fantastic for rapidly developing and refining a prototype.
But that is just the beginning of the process, the job is barely half done.
The devs must then focus on exactly what is needed in the code to refactor and strip out every unnecessary instruction and byte of download.
Page load times and refresh times beyond 3-digit milliseconds rapidly become unusable. When it is 5-digit milliseconds on a modest machine, you are shipping junk and should not consider yourself a developer. Amateurs can do better.
Wow, do I miss the days of the 5KB web page competitions...
Your sarcasm is well and truly appreciated but misplaced.
The idea is that these libs and frameworks make an honest attempt at making web apps. Also well intentioned, but wide of the mark. Most people access the internet through mobile phones[1] with dodgy broadband connectivity, hence don't need an app running inside another app - do you understand the redundancy, and the pitfalls of janky, horrible UIs, for a marginal performance improvement? Or the famous "it doesn't reload the whole page" that very few people actually care about?
If you are building the next Instagram, go right ahead and use bleeding-edge JS frameworks, but not every site/blog needs to be an app.
Except that I'm not even sure that the next Instagram or whatever should be a web app - it should be a native app.
I saw the beginning of the "write once - run anywhere" concept decades ago. Sounded fantastic. Yet here we are and it is more like "write once - stumble and crawl someplace".
It is most discouraging that many major houses do it so badly, and even their native apps are similar bloatware...
It's still hit or miss every once in a while, depending on the use case. For example, debugging service workers is still a bit of a nightmare. And I've also had some issues debugging a Chrome extension written with the Vue CLI plus the browser extension plugin.
This is sadly true, but service workers are themselves hard to get right, and they tend to create subtle cache issues that are hard to notice until the deployment after the one that shipped them to production.
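One concrete example of how those issues creep in is an old cache that the new worker never cleans up. A minimal sketch, with a placeholder cache name:

```js
// A common source of "the deployment after the deployment" cache bugs:
// forgetting to delete stale caches when the new worker activates.
// The cache name here is a placeholder.
const CACHE = 'app-shell-v2';

self.addEventListener('activate', (event) => {
  event.waitUntil(
    caches.keys().then((keys) =>
      Promise.all(keys.filter((k) => k !== CACHE).map((k) => caches.delete(k)))
    )
  );
});
```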
I guess you are a talented developer working in the right environment. But the vast majority of sites out there (those not controlled by some tech behemoth) are usually not optimized, only work on Chrome, are usable only by able-bodied people, and a long etcetera.
This. People see one shitty implementation of SPA and put every new tech stack in the same bucket. We live in a golden age of tooling and the front end is right at the front (pun intended).
Whenever this topic comes up I only hear praise from devs. Devs only care about what they can do, and never think from the user's side. That's why we have shitty JS-infested sites: people who think everyone has the latest gadgets, sits in air-conditioned rooms, and has unlimited fibre internet just like them. Devs live in a bubble. It has failed users.
There are many reasons why having an SPA and rendering your site on the client is a bad idea.
First, you are basically breaking the concept of the web, a collection of documents, not a collection of code that must be executed to get a document. That has many bad effects.
Browsing is slower: you have to download the code of the whole application and wait for it to execute and make further API calls to the server before the page is usable. That can take a couple of seconds, or even more on slower connections. With the old server-side rendered pages, not only did you not have this effect, but the browser could start to render the page even before it was fully received (since HTML can be parsed as a stream). Not everyone has a fast connection available at all times, and it's frustrating when you have only a 2G network available and you can't do basically anything on the modern internet.
It's less secure, since you are forcing the user to execute code on their machine just to look at an article on a blog. And JavaScript engines in browsers are one of the most common sources of exploits, given their complexity. Also, JavaScript can access information that can fingerprint your browser to track you, without any particular permissions. Ideally, in a sane world most websites wouldn't require JavaScript, and the browser would show you a popup to allow the website to execute code (just as it asks before granting access to the camera).
It breaks any tool that is not a browser. In the old days you could download entire sites to your hard drive with wget to consult them offline; with a "modern" SPA that's impossible. Of course that also breaks things like the Wayback Machine, so history is not preserved. And search engines penalize sites that are not clean static HTML.
It's also less accessible, since most SPAs don't respect the semantics of HTML: everything is a div in a div in a div. And of course you still need a browser, while with HTML documents you could process them with any software (why does a blind person need to render the page in a browser, when a screen reader could have simply parsed the HTML of the page without rendering it on screen?). It breaks navigation in the browser: the back button no longer works as you expect, reloading a page can have strange effects, and so on. I can't reliably look at the address bar to know which page I'm on.
Finally, it's less robust. SPAs are a single point of failure: if anything goes wrong, the whole site stops working, while a bug on a page of a classical server-side rendered website breaks only that particular page. Also, error handling is absent from most SPAs; for example, what happens if an HTTP request fails? Who knows. Look at submitting a form: most of the time there is no feedback. With a classical server-side rendered application, if I submit a form I either get a response from the server, or the browser informs me that the request failed and I can submit the form again.
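To be fair, that kind of feedback isn't hard to add in an SPA; it's just routinely skipped. A minimal sketch, with showError/showSuccess standing in for whatever UI helpers the app actually has:

```js
// Minimal sketch of the feedback a classical form submit gives you for free:
// tell the user about network failures and non-2xx responses.
// showError/showSuccess are placeholder UI helpers, and /api/form is made up.
async function submitForm(formData) {
  try {
    const response = await fetch('/api/form', {
      method: 'POST',
      body: formData,
    });
    if (!response.ok) {
      showError(`Submission failed (${response.status}), please try again.`);
      return;
    }
    showSuccess('Submitted.');
  } catch (err) {
    // fetch rejects on network errors, not on HTTP error statuses
    showError('Network error, your form was not submitted.');
  }
}
```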
>First, you are basically breaking the concept of the web, a collection of documents, not a collection of code that must be executed to get a document
I think that ship sailed a long time ago. Nevertheless, I still think that current SPA frameworks are overblown. I am partial to JavaScript/HTML + Web Components. However, I put the blame on the influx of "coderz" who have arrived to do front-end development in the last 10 years.
It is very similar to the use of thick frameworks on the backend (LoopBack, Ruby on Rails, etc.): there are people who do not have a clue about what they are doing, but they still build SPA websites by changing some code here and there after an "npm create-react-app-shopping-cart" script.
I've interviewed hundreds of "Frontend Developers", and for the majority of them, once you start getting into concepts like HTTP headers, why we have CSS, why we use font icons and image sprites, CDNs, JavaScript closures, JavaScript this/that issues, let/var differences, etc., they stare at me with blank eyes.
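(For anyone wondering what the let/var question is usually getting at, it's the classic loop-closure behaviour:)

```js
// var is function-scoped, so every callback closes over the same i.
for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log(i)); // logs 3, 3, 3
}

// let is block-scoped, so each iteration gets its own binding.
for (let j = 0; j < 3; j++) {
  setTimeout(() => console.log(j)); // logs 0, 1, 2
}
```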
Some of these stated issues aren't universal. There are frameworks that pair a server-side rendered page with a switchover to an SPA: they provide the user with a pre-rendered page plus a follow-on JS bundle that loads the SPA, and the switch is seamless. It's a bit of extra work, so I don't always use it, but it exists. This also removes the incompatibilities with non-browser tooling/bots/etc. (Example: quasar.dev)
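A rough sketch of that SSR-then-hydrate pattern, shown with plain Vue 3 APIs since Quasar sits on Vue; the component is a trivial placeholder, not how any particular framework wires it up:

```js
// server.js -- render the first paint as plain HTML
import { createSSRApp, h, ref } from 'vue';
import { renderToString } from 'vue/server-renderer';

const Counter = {
  setup() {
    const n = ref(0);
    return () => h('button', { onClick: () => n.value++ }, String(n.value));
  },
};

const html = await renderToString(createSSRApp(Counter));
// embed `html` in the page template sent to the browser

// client.js -- the JS bundle takes over the server-rendered markup
createSSRApp(Counter).mount('#app'); // hydrates instead of re-rendering
```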
2G is being phased out (at least where I live); it no longer works here. Chromium browsers have a slow-3G network throttling setting to test your code/sites, and I use it frequently. Sometimes it reveals problems, other times it's a shrug. Some features just require bandwidth and there is no reasonable way to avoid it. I do try to reduce page weight as much as I reasonably can.
Lots of features of the modern browser will leak fingerprintable information; the JS engine is hardly the only problem piece. I guess your general complaint here is that the world is insane. I tend to agree, but I don't think this is the principal reason why :)
Screen readers are notoriously difficult to make work correctly. In my experience, this is a universal issue with the web generally and isn't unique to SPAs. If we're serious about audio accessibility we need an actual standard instead of this guesswork.
Breaking navigation is a pet peeve of mine too; poorly behaved front-end routers are terrible. I always make URLs to any part of the app work, since refreshing the page is a common operation. And I must have properly functioning back/forward for my own sanity in development, if nothing else.
I will beg to differ on the point of failure. SPAs can also be apps: paired with a service worker and a local data set, they can continue to function even if you are completely offline, or the server is completely offline. Obviously if your web app is just broken, yeah, nothing's going to work. But in the case of SSR, that's also true if your network is down (locally or remotely), or the server is down/broken.
A well-built offline-first SPA will reduce the total number of single-point failures that kill all functionality. Offline-first SPAs also have the side effect of masking some of the latency of a slow 3G link, which can improve the user experience significantly.
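For the curious, the offline-first part is a small amount of service worker code. A minimal sketch, with a placeholder cache name and shell file list:

```js
// sw.js -- minimal offline-first sketch: precache the app shell, serve
// from cache, fall back to the network. The cache name and file list
// are placeholders.
const CACHE = 'app-shell-v1';
const SHELL = ['/', '/app.js', '/app.css'];

self.addEventListener('install', (event) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```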
Interesting take, and I agree with it more than I disagree. However, I think
"there are an infinite amount of tools both for styling and the code itself"
is actually one of the symptoms of the issue at large.
I understand some sites depend on more complex APIs and such - certainly Facebook uses the complex stuff that these frameworks do.
However, many people are using WordPress and similar tools simply to have a few basic static pages of info display decently on various devices. I too have been guilty of such a thing on more than one occasion.
For these cases I am moving towards converting the designs to static HTML and using a PHP-based script from CoffeeCup to handle the contact form part, which in the past I've lazily handled by adding a two-click plugin from WordPress.
I feel that the CSS standards group really dropped the ball by not having better responsive menus 'built in'; that's a part of the problem that could be solved in the future. Now that grids are built in, the menus that Bootstrap does auto-magically for most devices are the missing piece that keeps many sites from being just HTML/CSS.
I'd love to go back to NetObjects Fusion for design, but it has not kept up with the responsive needs of the web. I tried CoffeeCup's Site Designer in beta and it wasn't for me. I've built dozens of sites using Notepad++ and occasionally load up Sublime Text for find/replace - but I still feel that more visual/WYSIWYG tooling is greatly desired by much of the world.
WordPress is going more in that design direction as the Gutenberg blocks start to work with full-site design and not just page content. And I still keep meaning to take the time to try out the Pinegrow site builder, as that might be the replacement for NetObjects that I've longed for.
But it's not just me - there are plenty of people who could or would make a site and find things too complex today. About 7 years ago I found someone in the top 3 search results for a home service and inquired about their designer/SEO. The guy who was there doing the work told me he made the page in Microsoft Publisher.
While I'm not advocating for the bloat that Microsoft's FrontPage pushed into the world, and I know the time of clear-pixel alignment is a distant memory, even though we have an infinite amount of tools, it still seems the front-end world is more complex than it needs to be.
It is better in some ways and worse in others. I hope CSS gets better menu options so that fewer pieces are needed for a decent puzzle. I like non-JS sites just fine, and building with fewer tools is okay too.
I agree entirely. The worst sites are imo the ones not using modern frontend technologies.
I think people are forgetting how bad it used to be: loads of jQuery spaghetti code everywhere, re-rendering the page 1,000 times with a nested for loop.
Also, web applications have become so much more complex than they were (but still work!). Things like Figma would have been unthinkable even a few years ago. And even though it's running in a browser, Figma feels far more responsive than Illustrator or Fireworks (RIP), plus it has a load of collaboration tools which those desktop apps don't have.
We have to distinguish websites from web applications.
Websites have no reason to use JavaScript. They should be a collection of documents, possibly generated dynamically on the server side, but to the user just HTML+CSS that can be viewed in a browser, downloaded with curl, viewed as source code, etc.
Web applications have no reason to exist. Basically we are turning the browser into a JVM, a way to run code portably. But this has nothing to do with the web! It's just bytecode loaded from a remote server. What do Figma, or Gmail, or Google Docs have to do with a website? Nothing.
These things shouldn't be implemented in a browser; it's not the right place. These problems should have been solved at the operating system level, but it was too difficult for Microsoft to agree with Apple and the open source world on a portable operating system interface that would make universal binaries possible across platforms, maybe with a bytecode, maybe with a way to load code from a URL.
No, they couldn't agree, so they took the only really open standard, the web, and bloated it, basically making it an operating system (well, Google even literally made it one!). What's the point? Now most people run Windows or macOS with a browser (probably Chrome) on top, in which they run 95% of the software they use. Does this make sense? To me it is a huge inefficiency that only makes things slower - slower than they were in the 90s: even with orders of magnitude more powerful hardware, using the Gmail web interface feels slower than using Outlook on Windows 98 did.
Put another way: "websites have no reason to execute code". Is a document with an embedded form a website or a web app? What if one of the inputs is custom and requires some custom validation logic - is the expectation from your train of thought that this logic must only execute on the backend?
Websites are interactive documents and they have been since the inception of the hyperlink. I struggle to draw a line where a website becomes a web app, outside of the obvious examples at the far ends of the spectrum.
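A concrete illustration of why the line is blurry: the same validation rule often runs in both places, on the client for instant feedback and on the server because the client can't be trusted. All names below are made up for the example:

```js
// Hypothetical rule shared between client and server.
function isValidQuantity(value) {
  const n = Number(value);
  return Number.isInteger(n) && n > 0 && n <= 100;
}

// Client side: instant feedback as the user types.
const input = document.querySelector('#quantity'); // placeholder field
input.addEventListener('input', () => {
  input.setCustomValidity(
    isValidQuantity(input.value) ? '' : 'Enter a quantity from 1 to 100'
  );
});

// Server side (Express-style handler sketch): the check that actually
// counts, because anything the browser sends can be forged.
// app.post('/order', (req, res) => {
//   if (!isValidQuantity(req.body.quantity)) {
//     return res.status(400).json({ error: 'invalid quantity' });
//   }
//   // ...persist the order
// });
```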
As someone who does neither front-end nor back-end development, I think this comment does the best job in this thread of summarizing the pitfalls I've observed as a user.
Although you probably (?) want to add that OS maintainers are slowly turning their repositories of OS-level applications into walled gardens, which I could see pushing some developers to currently prefer web application implementations.
The web is the closest we have to an open universal platform. If you want the most bang for the buck as a business in developing widely accessible software, you build a website/webapp first.
That might change the limit on how big the UI can be, but it changes nothing fundamental. As far as I'm aware of what Figma can do (we use it at work, but I seldom use it directly), you could implement the same thing with DOM elements or simply with SVG. It's not fundamentally different from the Pipes UI I implemented with Raphael.
> you could implement the same thing with DOM elements or simply with SVG
In the same way that writing a REST API is "connecting to some DB and simply exposing data via some endpoints" or building a machine learning model is "simply throwing some data at some algorithm and getting a bunch of analysis out" or devops is "simply starting some remote servers" or creating an OS is "simply writing a bootloader and then some stuff on top."
No. In the sense that, having written UIs that go in that direction, I know that adding refinements and performance improvements to them does not change the base characteristics of what has to be written.
Remember, my comment was a response to the claim that something like Figma could not have been done before the current frontend development complexity mania. But there is nothing in that collection of bad tools that is necessary to write a canvas-based UI like that. It could have been done before, and it most likely has been done before.
It's much better than people make it out to be.