The “Web Application” Myth (medium.com/codepo8)
133 points by johanbrook on Feb 9, 2015 | 87 comments



I guess I'll never really understand articles like this, or I just have poor reading comprehension. I don't see what the point of this type of thing is, it feels like a rant without any particular recommendations. It can be summed up as "don't use technology without understanding it, and don't blindly follow the crowd." But then again, how are you supposed to learn when a certain piece of technology should be applied, other than apply it and see how it goes? I certainly am not going to just take someone's advice and opinions at face value, unless they can provide hard evidence of why a certain technique results in failure. It requires a certain amount of cognitive dissonance to point to Etsy's 2000 files as a failure while at the same time they are a successful company. Perhaps these bits of incidental complexity we like to obsess over aren't, in the grand scheme of things, all that important?

To me, the notion of having strong opinions on particular techniques for web application (or web site) development strikes me as an example of a subfield having an over-inflated sense of self. Web app development and javascript frameworks are a tiny, tiny niche of software engineering which is an even smaller niche of the entire area of the medium of computing. And it's not a particularly interesting one, imho. I'm not sure why so much ink is spilled on it other than it's easy to form opinions around.


> I'm not sure why so much ink is spilled on it other than it's easy to form opinions around.

Because people read it. And people read it because social cues are important.

When you're going to commit hundreds (or thousands) of man hours to building a system you may have to maintain for years based on very, very partial knowledge, you want to know what other people think about the tools you plan to use.


> It can be summed up as "don't use technology without understanding it, and don't blindly follow the crowd." But then again, how are you supposed to learn when a certain piece of technology should be applied, other than apply it and see how it goes?

That's called research. You do it on your own time, while using more mature & proven solutions for daily work. Play with it, review all the source code. Like it or not, it's going to be your code, once you start using it in earnest. Don't treat it like a friendly black box.

Is that too much to do? Then you're going to have a lot of surprises. Maybe that's fun for you, but I have to say it gets really old over the years.

> It requires a certain amount of cognitive dissonance to point to Etsy's 2000 files as a failure while at the same time they are a successful company.

Er, no, it doesn't require cognitive dissonance at all. In fact, the folks at Etsy blogged about the issues themselves.

It's a common thing for a fast moving, highly successful project to accumulate gunk under the excuse of "it works for now, we'll clean it up later". But, if the clean up never comes, that gunk eventually becomes a noticeable drag on daily development.

> And it's not a particularly interesting one, imho. I'm not sure why so much ink is spilled on it other than it's easy to form opinions around.

If it's not particularly interesting, then why are you reading about it and responding to it? :)


Maybe I can fill in the gap?

If you are a developer, you shouldn't rely on Javascript. Yes, feel free to use Javascript to make things sexier, but the use cases where a web page should require Javascript to render properly have substantial overlap with the "we didn't think things through" slice of the Venn diagram.

If your page is unreadable with NoScript and RequestPolicy turned up to maximum paranoia, your webpage is broken.

If you're building a client-side application rather than just a web page, then it's okay if turning JS off breaks your app.
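To make that distinction concrete, here's a hypothetical sketch (the form shape and the send() callback are invented, not a real API): a form that works as a plain HTTP POST with JS off, which a script merely upgrades to an in-page submit.

```javascript
// Progressive enhancement sketch: the form still posts normally when this
// script never runs or when no transport is available; JS only adds on top.
function enhance(form, send) {
  // No usable transport? Leave the form alone; the plain POST keeps working.
  if (typeof send !== "function") return false;
  form.onsubmit = function (event) {
    event.preventDefault();           // take over only because we know we can
    send(form.action, form.method);   // e.g. an XMLHttpRequest behind the scenes
  };
  return true;                        // enhanced
}
```

The point is the order of operations: the working no-JS version exists first, and the script layers convenience over it instead of being load-bearing.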


I think I gathered this, but my point is why does this matter? If I run a site that is unreadable on NoScript and RequestPolicy turned up to maximum paranoia, but am generating millions of dollars of revenue, is my webpage "broken?" If only 0.01% of my users actually are affected by such a thing, and I don't care about that, is it still "broken?" If a webpage has a missing closing </div> tag, but is never hit by a human being, is it still broken?

I guess what I'm getting at is that as I've gotten older, I've grown less and less sympathetic to arguments that appeal to an engineer's sense of purity, correctness, and craftsmanship and not much else. Tie it to user pleasure, effectiveness, business metrics, anything; but citing the idea that, because the web was not designed with Javascript everywhere in mind, we should treat it as the exception rather than the rule is little more than an appeal to tradition to me. There are definitely areas of software engineering where a lack of respect for the craft, or a lack of knowledge of history and standards, results in failure, but I'm just not buying that if you build a SPA with a fancy new javascript framework instead of making sure NoScript works, you're somehow going to pay for it in spades down the road because it breaks some set of rules handed down by the high priests. There are about a million other technical decisions that have more impact in a serious project. (For example, instead of arguing about SPA vs. non-SPA, or which framework, or whatever, ask yourself: "Is anyone on the team an expert in any particular flavor of this?" If the answer is "yes", then going with that one is almost certainly the best strategic decision, regardless of the supposed merits or flaws of the particular approach.)


Plus, it's just railing against the tide. I don't disable JS. None of my friends disable JS. (In fact, most of my friends don't even know what JS is.) The setting to disable JS is hidden away in some obscure pref pane in most browsers. It's not going anywhere, it's not optional, and pretending that a site is "broken" if it doesn't work for a handful of geeks is being willfully obtuse.


It's worth noting that "disabling JS" is not the only way to disable JS. Sometimes it can happen inadvertently:

* Bug accidentally makes it into production, preventing some crucial feature from working? JS just got disabled.

* User is browsing your site with an old device that doesn't support a feature you make use of? That JS has been disabled.

* CDN fails and your fallback mechanism isn't working like you thought it was? JS got disabled.

Not saying these are common occurrences, but I've seen all of them happen.


Actually, didn't Firefox already remove the checkbox? It's probably now only visible under about:config...


> If I run a site that is unreadable on NoScript and RequestPolicy turned up to maximum paranoia, but am generating millions of dollars of revenue, is my webpage "broken?"

I'd say yes. Usable != profitable. Hey, HN is broken, because it will "time-out" pages and you will sometimes get an error when clicking "next" - but that doesn't make it useless. (And for the Arc apologists who usually respond to this: I clicked a valid link on this website; I don't care what technology HN is based on.)

Also, your website would be broken if it was unusable: under anything other than IE, on machines other than PCs, on devices not made in Asia, by people older than 60, etc. None of those prevent it from generating millions of dollars.

Then again "broken" doesn't mean you should care. If your goal was just to make millions of dollars, that's fine. Just don't be surprised people call it broken.


See, by that standard the word "broken" loses any significant meaning and becomes noise. Why care about anyone calling anything broken then?


I don't think it's meaningless. It means you're making $X, but ignoring N% of potential audience. If you fix your stuff you could potentially reach them and make $X+Y.

Outside of websites: Private healthcare is broken in the US, but works for N% of people who can afford it. Many people live with the status quo. Is it meaningless to call that system broken? If it's a high enough percent, why care about anyone calling it broken? [my answer - because we're aware it could be better, but it requires more work]


How about screen-readers and other assistive software for visually impaired users? Apparently they've gotten better in recent years, but if you make a website that doesn't work without JavaScript, you're still probably screwing a lot of those people over.

Maybe they're a small percentage of your users, but it still strikes me as a dick move in the same category as constructing a building that isn't handicap-accessible (yes, I'm aware there are legal compliance rules in the building trade).


It's mostly a myth that relying on JavaScript makes a website inaccessible (partially propagated by the woefully out of date Section 508 requirements in the US - the update is currently in progress and will be very close to WCAG 2.0 [1]).

The reality is that very few screenreader users disable JavaScript [2] (surprise! they're much like any other user). A properly coded SPA, using aria-* properties, is as accessible to most AT users as a traditional server-side application.

[1] http://www.w3.org/TR/WCAG20/

[2] http://webaim.org/projects/screenreadersurvey5/


>0.01%

13%, actually, globally, and 2% in the US (roughly the population of NYC):

http://www.searchenginepeople.com/blog/stats-no-javascript.h...

That doesn't even count people using browsers you haven't tested, with obscure bugs, or people on laggy, slow connections, which also break or degrade the functionality of your javascript-heavy web app.


The point wasn't the number, the point was if only a small % of my users are affected by a given design decision, why should I care? Beyond that, website populations have bias and as such citing national/global stats is at best a first order approximation of the impact of not supporting non-JS visitors.


>why should I care?

I explained why: for people using browsers you haven't tested, which break the JS in obscure ways, not to mention people whose experience breaks in obscure ways because of dropped packets and high lag.

>Beyond that, website populations have bias and as such citing national/global stats is at best a first order approximation

Ironic complaint from somebody whose first guess was two to three orders of magnitude off.


2% of my users can have javascript turned off and yet only 1% of those people may be negatively affected by the decision to not ensure all features of my site work without it.

but go on, keep missing the wider point by being pedantic.


> If only 0.01% of my users actually are affected by such a thing

All users don't have the same value. And it's also about trends. More and more users use addons that block javascript by default, just like more and more users use adblock. You might not care today, but when it's time to care, your competitor that cared will get the users and the revenue you lost because "you didn't care".


This should come with a huge *citation needed*. There are a million decisions that affect user adoption and retention of a software product; deciding to support NoScript users is a small grain of sand in that desert. Your claim that this one facet will let a competitor steal market share completely ignores the concept of engineering tradeoffs. The competitor that "cared" about NoScript visitors may spend many man-hours (in direct costs or opportunity cost) supporting those visitors, time that may be better spent building competitive differentiators for a larger number of visitors. No design decision lives in a vacuum, and in my own experience, designing to support non-JS visitors is almost universally faaaar down the list of priorities when there are good reasons not to do so.


> the concept of engineering tradeoffs.

Supporting NoScript is as simple as not abusing javascript in the client. I fail to see how you'd spend more man-hours by not investing in these full client-side javascript frameworks, when your product can work without javascript to begin with.


See Gmail for a strong counterexample. Its UX is fundamentally about speed and responsiveness and would have been difficult to build with progressive enhancement, hence they had to build a completely separate user interface to support non-JS users.


> And it's also about trends. More and more users use addons that block javascript by default, just like more and more users use adblock.

Do you have any data to back that up? In my experience it seems like the number of people who turn JS off by default has declined since SPAs became so common. It looks like the search trend for Noscript has been gradually declining since 2009, while Adblock has risen: http://www.google.com/trends/explore#q=noscript%2C%20adblock...


The big reason to avoid shipping a white page with JS turned off is performance. It implies that at some point, the JS will load and you can stop using ng-cloak or whatever else is hiding in the blank space.

It's better to pre-load actual content, or if you don't have that, fake content, than to show nothing at all.

Similarly, it's better to pay attention to back buttons so your users can navigate a website using built-in controls. This is especially important on phones, where it's easy to click a wrong link, and where back is literally a button.

Can you do this with JS? Yes. Is it obvious? Not always. The same was true with our use of CSS over images, or web fonts over flash. Eventually the industry will move in the right direction, it just takes time and the occasional rant or two. ;-)
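To sketch the back-button point (route names invented): the testable core is a pure URL-to-view mapping; the History API wiring a browser page would add is in the comments.

```javascript
// Map a URL path to a view name. Keeping this pure means the same navigation
// logic works on initial load, on link clicks, and on back/forward presses.
function viewFor(path) {
  var routes = { "/": "inbox", "/settings": "settings" };  // illustrative
  return routes[path] || "not-found";
}

// In the browser, roughly:
//   link.onclick = function (e) {
//     e.preventDefault();
//     history.pushState({}, "", this.pathname);   // URL changes, back works
//     render(viewFor(this.pathname));
//   };
//   window.onpopstate = function () {             // back/forward button
//     render(viewFor(location.pathname));
//   };
```

With pushState and popstate wired up like this, the built-in back button keeps working even though no full page loads happen.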


> If you are a developer, you shouldn't rely on Javascript.

I'm sorry, but I think he's saying something a little to the left of that. His overarching point about javascript seems to be that it adds complexity to any application: suddenly, page-loading indicators and all application states must be handled intelligently by us, developers. His point here is that we already have a problem with developers working on code they don't have the experience, ability, or time to understand. SPAs, like anything else, lead to a serious requirement for developer discipline.

I think some important points were, in my own words:

- Choosing a technology stack doesn't matter

- Choosing a framework doesn't matter

- What the application does and how it behaves is all that matters. Choose any tech stack that will do the job.

- No matter what tech stack you choose, your app will run into issues. It will need maintaining.

Like he says:

> What a good app needs is a team of dedicated people behind it. Good apps are good because talented people cared about them and pooled their skills.

- No amount of abstraction will help. If the abstraction is not understood, it will also hurt.

But please feel free to agree or disagree with me. I'd love to know what you all thought.


Choosing a tech stack and framework DOES matter, greatly. Some technologies make development a lot easier and more maintainable; otherwise everyone would still be using PHP to write their websites. Abstractions do help greatly, and frameworks are a good way to get your entire team thinking about things using the same abstractions.

The platitude about needing a "talented", "passionate" team is vacuous and meaningless. You might as well write that the key aspect of a good app is having very few bugs. An absolutely meaningless statement. It's very obvious that a bad developer with a good framework will lead to bad output; no need for a 5000-word article on it.


Fair points. I'd just like to bring your attention back to the point that

"What the application does and how it behaves is all that matters."

I'm hard pressed to find a rebuttal to this, if only for the reason that implementation is the goal. No one will ask you to write an application in Python, "and who cares what it does". The very opposite is true - no one cares what it's written in as long as it's the necessary implementation.

From here I would like to address my "point" that tech stack and framework don't matter. I think that we're actually on the same page here save for some sloppy wording on my part. I concede that on a per-project basis framework and stack matter. They quite clearly do, and I agree with you.

I would still like to argue that they don't matter as much as we like to think. To use the example you gave - PHP - depending on your purpose, your required implementation, PHP is still a powerful and useful tool. It's easier to work with than many other technologies for certain kinds of solutions. At the same time, it might be a better idea to write a solution on Rails for any number of reasons. These are specific to each project.

At the end of the day though, you can successfully implement any number of solutions on a great many different stacks with no great change to implementation. I can write an application and use either PHP or Rails for the backend, Angular or Backbone for the frontend framework if I have one, and for the end user there will be ultimately no difference. I can run a PostgreSQL or a MongoDB database and they won't be able to tell the difference.

If this is the case then, what makes a good application? It is the people writing the code, architecting, and meeting the challenges in ways that work with the chosen frameworks and technology stacks. It seems to me that the end result, then, is reached by people and not by stacks or frameworks.

Sorry for the long reply. No harm if you don't reply to it.


In my PHP example, I was referring to programming frameworks in general, not just front-end ones. Many apps have moved away from PHP as developers recognized its weak abstractions, which led to higher maintenance costs. For front-end, you can simply replace PHP with Java pagelets or jQuery soup.

You can write the same web app (from the user point of view) using Python, Ruby or PHP (frontend: jquery, backbone, angular). I do not disagree with this. However, the choice of frameworks is still very important and will determine your speed of development and any future maintenance.

Saying frameworks are irrelevant because they all have the potential to lead to the same outcome seems a bit much. Any Turing-complete language can implement the same programs as any other, but then why do we have assembly vs. Java vs. C vs. Python? Higher-level programming languages and frameworks exist for the human brain. It turns out that the human brain grasps some concepts better than others: we call the more intuitive concepts abstractions.


Of course people are more important than tools, but that doesn't mean tools are unimportant. In some cases the tools are VERY important. There are, for example, things that are easy to do with javascript that are impossible or nearly impossible with only server-side code. If you feel that people are so much more important than tools that the tools have almost no effect at all, then please write more about people. This article said almost nothing about getting the right team, getting people to work together, or understanding customers' wishes and expectations. It seemed to say that javascript is overused (which it might be), but with a lot of words.


I've worked on projects that have generated hundreds of millions of dollars, all written in shitty PHP code. Was it a big pain for us developers to maintain it? Yes. Did our struggle matter? No.


Well put. Very succinct.


(Almost) everyone is still using PHP to write their websites. That has nothing to do with front-end frameworks. Except for a couple of new frameworks that are JS on client and server, which are less than niche at this point, all front-end frameworks work with a PHP back-end.


What's missing is the why. Just asserting that all JS-driven sites are broken and should use progressive enhancement doesn't really translate into real benefits.

Using a system that spits out JS and saves me time and effort is a win. Why should I trade that out?

I agree for some users, a properly gracefully degrading system is nice. I like when sites I use are properly URL-based, when I can share links and they just work. But that's rarely a reason I'll switch sites.

So until there's some solid evidence that relying on JS translates into lost money or other serious damage, it's simply not worthwhile and declaring such things broken by fiat accomplishes nothing.


If your page is unreadable with NoScript and RequestPolicy turned up to maximum paranoia, your webpage is broken.

Building a broken web page is fine if you choose to build it that way. If 1% of people can't see your site, or 10%, or even 99%, then that's your choice as a developer. If I want my business website built with WebGL, asm.js, HTML5 audio and WebRTC, then that's ok. I'm allowed to do that even if you can't render it. As a user, you have no implicit right to use a given website if your choice to turn something off, or to use a browser that doesn't support something, means it won't work for you. To draw an offline analogy: I have no right to complain that I can't build an application if I chose to remove make from my computer.

There are exceptions - users who need special access technology like screen readers should be protected from developers who build in ways that mean they're put at a disadvantage for example - but ordinary people who choose not to use something like js have no such protections. Using a screen reader is usually not a choice.


>If you're building a client-side application rather than just a web page, then it's okay if turning JS off breaks your app.

I wouldn't even say that this is necessarily true. Client-side applications that can be made perfectly functional without JS should be made functional without JS.

I don't want my online banking website or my tax return web apps to use javascript at all. They do, though.

The fact that people make blogs, whose only real function is to serve static text, render with javascript is nuts.


Why should your bank spend more money and effort coding and testing both a JS enabled version plus a non JS version?

The idea of no JS gets worse when you start applying the idea of just making an API, then having the client just use those APIs, whether they're browsers, backend systems or mobile clients.

Requiring two separate client UIs to accommodate no-JS is unlikely to be a worthwhile cost for many sites/applications.


>Why should your bank spend more money and effort coding and testing both a JS enabled version plus a non JS version?

My bank should not create a site that requires javascript at all.

It increases the attack surface, it increases the likelihood that there will be bugs lurking and it increases the likelihood that the site will fail under circumstances where the user has low bandwidth and lots of dropped packets.

Furthermore, it is not necessary for a working site that does everything that I want. At all. So what is the point??

>The idea of no JS gets worse when you start applying the idea of just making an API, then having the client just use those APIs, whether they're browsers, backend systems or mobile clients.

Which I would not advise doing under any circumstances. Sharing common REST APIs between browser, backend systems and mobile clients sounds like a great idea until you consider that they all require different modes of authentication, authorization, access policies and security features and the like. Are you going to force your backend systems to authenticate using cookies and use the same CSRF protection?

Having separate APIs need not mean that you duplicate a lot of code, so I don't see the need to try to accommodate multiple different use cases using the same APIs.

>Requiring two separate client UIs to accommodate no-JS is unlikely to be a worthwhile cost for many sites/applications.

Requiring JS for many sites/applications is unlikely to be a worthwhile cost, especially considering the additional testing required to make it work equivalently well to a non-JS enabled site/application.


In general, the thing that draws people to building a JS-driven site is that it allows faster responsiveness for users, and a more versatile interface that behaves more like an application and less like a document. I think by now this much should be obvious. If the standard HTTP request/response approach resulted in the responsiveness of feedback we see with AJAX, then we wouldn't have been having this discussion for the last 10 years.

So in short, if a JS-oriented banking website results in a user being able to do their banking more quickly, this is a real benefit and is "the point." You can argue it is not worth the tradeoff, but it's a bit odd to first admit you don't see the point and then go on to say it's not worth it. If you don't see the point, how can you make claims about whether it's a valuable thing to be doing? If a bank has 50k daily active users and it shaves 1-2 minutes off of their individual session times, you're looking at saving approximately 50 days of time for people every day. And most banks probably have way more than 50k daily actives. Combine this with other potential benefits like users being able to get enough done in a single banking session to avoid a second visit later, or the user interface being less confusing since users can fall back on desktop metaphors, the amount of saved effort could become truly staggering, worth the additional risk and engineering time of developing a SPA-based solution.
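To check my own back-of-the-envelope math (every figure here is a hypothetical assumption, not measured data):

```javascript
// Hypothetical numbers from the argument above.
const users = 50000;          // assumed daily active users
const minutesSaved = 1.5;     // midpoint of the 1-2 minute claim

const personMinutesPerDay = users * minutesSaved;         // 75,000 minutes
const personDaysPerDay = personMinutesPerDay / (60 * 24); // ≈ 52 person-days
```

75,000 minutes is about 52 person-days of time, every day, which is where the "approximately 50 days" figure comes from.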


> In general, the thing that draws people to building a JS-driven site is that it allows faster responsiveness for users

The reality, however, with sites like banking websites, is that they are more often slower.

Google maps is the exception, not the norm.

> a more versatile interface that behaves more like an application and less like a document. I think by now this much should be obvious. If the standard HTTP request/response approach resulted in the responsiveness of feedback we see with AJAX, then we wouldn't have been having this discussion for the last 10 years.

We've had the NoSQL discussion for an equivalent duration. Ironically, the source of this discussion was the same in both cases - that because it worked well for Google's specific requirements, it should be used everywhere. That proved to be an unfounded assumption.

>So in short, if a JS-oriented banking website results in a user being able to do their banking more quickly, this is a real benefit

Unfortunately, it doesn't usually result in a faster website. The banking website I use that makes heavy use of javascript is actually slower than those that do not.

>You can argue it is not worth the tradeoff, but it's a bit odd to first admit you don't see the point and then go on to say it's not worth it.

The point was fairly clear: unless you can make a significant speed gain (and more sites than you think cannot), there is no tradeoff. It is universally a bad idea.

>If a bank has 50k daily active users and it shaves 1-2 minutes off of their individual session times, you're looking at saving approximately 50 days of time for people every day.

That's the second time on this thread that you've pulled a number right out of your ass.

>Combine this with other potential benefits like users being able to get enough done in a single banking session to avoid a second visit later, or the user interface being less confusing since users can fall back on desktop metaphors

Then there are those AJAX websites that break the back button. Users LOVE that.


Why do you keep focusing on the numbers in my hypothetical examples? If you've run a large enough website, with enough users small optimizations end up being magnified into real effects. That's the point, and you're missing it again by being pedantic.

Beyond that, specific claims like "when you create sites like banking websites it is more often slower" are things that require evidence, and are much more in the category of 'pulling something out of your ass' than my very clearly constructed, reasonable hypotheticals, which make no claims about what would happen on any specific site or situation.

fwiw: I have run lots of A/B tests and in many cases converting things to AJAX-y, less HTTP-y solutions increases metrics meaningfully (like being able to add things to a list in a single click, instead of a full HTTP POST), and in many cases it does not (like in cases of infinite-scrolling through results.) Go figure, context matters and making flat-out claims is dumb. It's perfectly reasonable to believe there is some significant chance that a highly interactive site like a bank could see real meaningful changes in user behavior if it avoided doing full HTTP postback for all its pages.


So if my bank didn't use JS, the already drawn out processes would take even longer. Transfers would require more clicks, more full page reloads. No benefit to the user. In fact, the opposite. The user just feels their banking system is even older and more outdated. So your use of "not necessary" is rather debatable and probably not the same as the PM's.

As far as security on the API, I'm not convinced. Don't use cookies for authentication -- is that a big problem?


I disagree. Backward compatibility is sometimes necessary to keep and sometimes necessary to break. And the standard of browsing is evolving, with or without such efforts.


> “All modern websites, even server-rendered ones, need JavaScript” — No. they do not. They all can become better when enhanced with JS.

This. Surfing with uBlock on strict javascript settings turns up a surprising number of broken pages whose mandatory javascript does nothing essential. From graphical gimmicks that get stuck covering the page to broken buttons on search forms, there is a lot of stuff that could work just fine without the scripts, which add no real benefit.

I get that for more complex use cases there are tradeoffs to be made (e.g. implementing both a server-side AND a javascript version of functions, adding dynamic content to statically hosted pages, ...), but there is also a lot of low-hanging fruit.


I've lost count of the number of pages I've visited that return nothing but a blank screen with scripting disabled. I've become used to it by now, but it's often on sites without much going on. It's a little ridiculous.


The best part is when they seem to load and display just fine, until at the very end of the document where there's some element that'll suddenly cause everything to disappear...


Hah, those are fun to come across. It then becomes a test to see if you can reload and cancel the reloading at the right moment to prevent the offending last element from loading.


The number of blogs that require javascript for the designer's special-snowflake layout is what kills me.


Welcome to my browsing experience using a 3 year old phone.


The incredible complexity of the machinery behind many rather plain web sites is embarrassing. There are a lot of web pages which would look exactly the same if implemented in HTML with no Javascript. Except that they'd load faster.


When I make web apps today, HTTP is only used to serve the client application. Basically, it's the equivalent of

  wget http://mydomain/myapp
  make
  run
But with the benefit of running in the browser's protected environment. The above would be OK on Linux running jailed, but a possible catastrophe on Windows, where you basically run everything as root (although that has changed and will probably change further in later Windows releases).

Once the "client" has been downloaded and is running, all communication is done via WebSockets. So it's no longer an HTTP application or "REST".

What makes it so convenient is that the user can both download and run the application in ONE click (no install, yet cached), without worrying about malware, so I don't need their trust. And the client can run on basically all devices and OSes, so no need for porting.
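Roughly, the message handling in such a websocket-only app might look like this (the { type, payload } JSON shape and the handler names are just my illustration, not a real protocol):

```javascript
// Route an incoming websocket message to a handler and build the reply.
// Every request/response is a JSON message instead of an HTTP call.
function dispatch(handlers, raw) {
  const msg = JSON.parse(raw);                 // { type, payload }
  const handler = handlers[msg.type];
  if (!handler) {
    return { type: "error", payload: "unknown type: " + msg.type };
  }
  return { type: msg.type + ":ok", payload: handler(msg.payload) };
}

// Server side, with a library like ws, roughly:
//   socket.on("message", function (raw) {
//     socket.send(JSON.stringify(dispatch(handlers, raw)));
//   });
```

The dispatcher itself is plain data-in, data-out, so the same code can be exercised without any socket at all.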


Installing things has gotten so fast and painless. Why not skip it entirely, and make a phone that has every app “installed” already and just downloads and runs them on the fly?

https://xkcd.com/1367/


The React conference videos are great. The technology seems awesome. But there was one quip that caught me off guard; the presenter said something like: "if your application still has URLs to back up every action, like it's the 90s..." I instantly thought about how and why that would be a bad thing, and how and why progressive enhancement is a great thing. I look forward to a time when JavaScript performance is as "free" as CSS, but we are nowhere near it.


The quote in context was referring to how thanks to JavaScript rendering server-side and client-side using the same routing, where every action had a URL (like the 90s), you could then browse the site powered by JS, without JS or until it actually loads. It went on to talk about how cool URIs don't change ... JavaScript performance is thus free because content is delivered in pre-rendered HTML, then JS is loaded for further manipulation later, and if it doesn't load in time, the site still works. The "like the 90s" comment refers to using JS to behave more like static webpages do, which is still a novelty for some single page apps. The funny part is we probably don't even know how common it is -- if a website doesn't break our understanding of the web and also renders server-side, how would most know that JavaScript is even involved?
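A minimal sketch of the "same rendering on server and client" idea described above. `renderTodos` is a hypothetical pure function: the server calls it to produce the initial HTML response, and the client calls the very same function to update the page once JS has loaded — so the site works before (or without) JS.

```javascript
// One render function, shared by server and client.
// (Function and element names are assumptions for illustration.)
function renderTodos(todos) {
  const items = todos.map(t => '<li>' + t + '</li>').join('');
  return '<ul id="todos">' + items + '</ul>';
}

// Server (e.g. inside a Node request handler):
//   res.end('<!doctype html><body>' + renderTodos(todos) + '</body>');
// Client (after JS loads, for further updates):
//   document.getElementById('todos').outerHTML = renderTodos(todos);
```

Because the markup is identical either way, a crawler or a no-JS browser sees the same content the JS app would have rendered.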


> I look forward to a time when JavaScript performance is as "free" as CSS, but we are nowhere near it.

Actually, it seems the converse is true. Ray Cromwell recently commented[1] regarding Google Inbox:

> It's nothing to do with JS code execution, the problem is in the rendering engine, essentially too much repainting and rendering stalls, and too little GPU accelerated animation paths.

> Javascript is in a very good state these days in terms of cross browser compatibility, and while CSS support has converged between browsers in terms of correctness, CSS support has not converged on performance between browsers. Safari, Chrome, Firefox, and IE have wildly different animation performance hazards on the same markup and debugging this often isn't trivial, for example, finding out that the GPU is stalling on texture uploads deep inside of the render loop.

[1] https://news.ycombinator.com/item?id=8999716


The problem with web development isn't that there is a resurgence of bad development practices. It's that the average web developer isn't as skilled as they need to be.

The average U.S. worker spends 4.6 years in a given career, yet it takes ~5 years to master a framework. The average computer science grad earns much more than a web developer, so the skill set required for proper development is often lacking. Add to the mix cheap offshore labour, poorly made "out of the box" web packages aimed at small-to-medium businesses, and inexperienced "geeks" who build poor websites on the cheap.

Given that the skill set required for professional-level web development is on par with software development, it's no surprise the role isn't getting the skilled people the career requires.


> yet it takes ~5 years to master a framework

Are we talking about web frameworks? Because it sure doesn't take 5 years to master most web frameworks.


If you apply yourself, I agree. The problem is really that, when you're just trying to get a job done, you often don't run into all the edge cases and weird shit that can happen or even some of the less common error states.

It really comes down to how complex your project is and how integral the framework is to it.


> yet it takes ~5 years to master a framework

kinda hard when frameworks completely refactor themselves or fall out of style within 18 months


> It's that the average web developer isn't as skilled as what they need to be.

The problem isn't skills, the problem is developers wanting to use their shiny new toys just to show off.It has nothing to do with skills. It's how people use the tools they have at their disposal.

A larger problem in my opinion is the victory of HTML5 against XHTML2. XHTML2 would have solved a whole lot of problems developers are still trying to solve by abusing javascript or writing specs like web components that just look like "the vengeance of XHTML2".


Okay, I'll bite. Why not use XHTML 1.1 if you want to add your own validating DTD? The reason we went HTML5 is because the validator mattered only until it stopped being important. People realized that, for end-user use cases, they want to see website content in their browsers, not "this page is not valid XML" error messages.

And weren't web components pushed aside to HTML 5.1 since it's not finalized and we wanted to call work on "HTML 5" completed? And I fail to see how XHTML, being simple markup, would do anything to assist with an AJAX call. It's HTML5 that you have to thank for the async and defer attributes for scripts, that reduced some of the need to use anonymous functions to do the same thing. Effectively, HTML5 has been as much about speeding up browsers as speeding up web development--and in the process, the user's happy too because they have no idea what's happening inside their browser, but things stay compatible. After all, no one was updating old sites to be XHTML-compliant and yes, there were too many non-validating sites. So we just stopped worrying about validation as much, as far as XML goes.


>The problem with web development isn't that there is a resurgence of bad development practices. It's that the average web developer isn't as skilled as what they need to be.

5 years of experience with angular wouldn't teach you that there are certain use cases where it should never be used.


> it would be interesting to learn what lead to over 400,000 lines of CSS in over 2000 files for an actually not that complex site in the first place

This is so common. A lot of the time, they use CSS to change the design of a module, which is how it should be. But when they revert the look of the module, instead of deleting the old CSS, they add new rules to override it, which leads to specificity fights and !important.

At my previous job, the site was rendered on the server. It was old, and we wanted to redesign it. I wanted to keep rendering on the server, but the CTO and VP wanted to build an Angular site because everyone is moving to Angular and it's easy to find Angular developers. Over 30% of the company's new visitors come through SEO.


> At my previous job, the site was rendered on the server. It was old, and we wanted to redesign it. I wanted to keep rendering on the server, but the CTO and VP wanted to build an Angular site because everyone is moving to Angular and it's easy to find Angular developers. Over 30% of the company's new visitors come through SEO.

I think I missed the point of your second paragraph, I'd be interested to know what you meant to say. (currently working with Angular on a small hybrid app and thought about experimenting with it on the web but SEO concerns were the first thing that came to mind, so it feels like a relevant piece to me!)


I think he's pointing out the extreme ridiculousness of doing things just because they're what everyone else is up to: if you're doing ng-includes and using ngRoute client-side for the public bits of your app, you're going to want to consider either a PhantomJS-based snapshot service or refactoring to render the HTML on the server (you can still work Angular in with this latter approach, it's just trickier). Most of the Sails apps I've done have been server-rendered pages for the public sections with >=1 SPAs for the logged-in experience. In my experience, this is the most practical approach and lets you get some of the best of both worlds.
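A sketch of the snapshot idea mentioned above: crawlers get pre-rendered HTML while regular browsers get the JS app shell. The User-Agent pattern and function names here are assumptions for illustration; real snapshot services (Brombone, prerender.io, etc.) handle the detection and rendering for you.

```javascript
// Decide, per request, whether to serve a headless-browser snapshot
// or the normal JS-driven app shell. (Pattern is a rough assumption,
// not an exhaustive crawler list.)
const BOT_PATTERN = /googlebot|bingbot|facebookexternalhit/i;

function chooseResponse(userAgent) {
  return BOT_PATTERN.test(userAgent || '') ? 'snapshot' : 'app-shell';
}

// e.g. in an Express-style handler:
//   if (chooseResponse(req.headers['user-agent']) === 'snapshot') {
//     serveSnapshot(req, res);   // HTML pre-rendered by a headless browser
//   } else {
//     serveAppShell(req, res);   // normal client-side Angular app
//   }
```

Serving bots different markup than users is a gray area SEO-wise, which is another reason server rendering for public pages is the less fragile option.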


The implication is that without server-side rendering, SEO would drop. This isn't actually true for Google (edit: sometimes!), since it executes JavaScript, in part to prevent SEO gaming of its platform. But it would likely harm other web crawlers unless care was taken to incrementally add Angular to the existing HTML pre-rendered, or to execute the Angular.js server-side for bots and a nice page load speedup.


@lstamour Careful! We tested this theory last fall with sailsjs.org and it turned out to be patently false: we went from the second result when you google "Node.js framework" to not even being present. When I turned our headless browser snapshot stuff on again about a month later (we use Brombone), it worked itself out. Needless to say, sucks :/ But everyone should be aware of the reality.


Sorry, that's true. I should have been a bit less forceful on that one. I know personally that there's a lot which goes into Google's rankings -- for instance: mobile optimization, SSL, and (where JS can slow things down the most) page speed. Google wants the web to be fast, and if two websites have identical rankings, it will rank the faster one higher. Waiting on JavaScript delays the "time to visible content" by a significant amount.


From my experience, the state of javascript for what's ostensibly the best web crawler in the world is still "it might work".


I know Google is experimenting, but I have never seen a JS-rendered page in Google's search results.

My mobile Hacker News client http://hn.premii.com/ ranks top on Google, but not because of the content. The content is not indexed by Google.


If your site is going to rely on search engines like Google and Bing, you should think twice before committing to Angular or a client-only app.

- A JS-only site won't rank as well on search engines.

- Facebook share / Open Graph previews won't work.

Neither will anything else that relies on getting HTML from you to index.


> At my previous job, the site was rendered on the server. It was old, and we wanted to redesign it. I wanted to keep rendering on the server, but the CTO and VP wanted to build an Angular site because everyone is moving to Angular and it's easy to find Angular developers. Over 30% of the company's new visitors come through SEO.

IMV most people don't understand the tradeoffs they're making when they adopt Angular. I'm seeing plenty of sites that have huge start-up times due to using Angular when they could achieve a better experience without it.


And what's even better is that Angular is throwing everything out and starting over with 2.0.


Well, they don't know that!

All our servers (API/DB) are slow, and they think that Angular will free up resources (because nothing is rendered on the server), and that by using "free" client resources to render, it will be fast.


(This is a reply to a nested comment, but I felt it was important enough to be top-level):

> and pretending that a site is "broken" if it doesn't work for a handful of geeks is being willfully obtuse.

This is the common rallying cry of the 'use-JS-for-everything' camp. The most important reason a user should have an internet-wide blacklist on JS as the default is that the default should NOT be to allow any website ever to run arbitrary scripts on a user's machine.

That's like always running as root while on a development server. No, wait - that's like having random people from the street have root on your server for arbitrary amounts of time, and all you can do is watch. Principle of least privilege certainly applies to the web as well.

Let's not even mention how an open and private web is hindered, by orders of magnitude, by Google Analytics and Facebook Like buttons tracking you across entirely different spectra of websites, simply because the sites you're visiting have them embedded.

---

EDIT: Can the downvoters please reply with constructive criticism.


I think it's worth noting the difference between a website and an application. I like it when docs, news sites, blogs, etc. work without JavaScript. I consider it very important that my own blog work without JS.

If I'm making a game that's not turn-based, going JS-free is literally impossible. For a lot of applications you could in theory make a JS-free version, but it would require making a COMPLETELY separate implementation of the application, and for a lot of people that's just not justifiable. Most people simply don't have the time and resources to achieve this, so obviously they'll favor the larger chunk of users.

For example, let's say I want to make an image editor. I can imagine some ways in which I could possibly implement certain functionality without any JS, but the experience would be ABYSMAL. Seriously, consider implementing even a MICROSCOPIC subset of the functionality provided by Photoshop with JUST server-side rendering.


Word. Blacklist-all-by-default should be the strategy, not haphazard whitelist. Very similar to how you're asked if you want to share your location/microphone/etc with a website - JavaScript should likewise be compartmentalized based on functionality and necessary access permissions should be required of the user on a per-site basis first.


I always take one of two extremes: either the website has to work without JS so I need to make it so that everything works when JS is disabled, or I'm going to assume JS is enabled and I'll be going all-out.

If you're making a website, I'd say you should try to make it work without JavaScript, and in a lot of cases it can be achieved without that much effort. Blogs, news sites, docs, etc. are typically easy.

However, if you're making an application, I'd argue that it's pretty much impossible to do it without JS. It's possible if you're willing to implement your application multiple times (once in a JS-heavy way, and once in a JS-free way), but that's not feasible for most people. The other possibility is implementing it in a way that's friendlier to JS-free users, but for any non-trivial application that'll lead to a really shitty experience for most users. You just can't do sophisticated interactions when you're making something JS-free.
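One way to soften the "two completely separate implementations" cost for the simpler cases: a single endpoint that answers both plain form posts (no JS) and fetch() calls (JS enhanced). This is just a sketch of content negotiation on the Accept header; the function and route names are assumptions for illustration.

```javascript
// One server route, two kinds of client. A fetch() client sends
// Accept: application/json and updates the page in place; a plain
// <form> post falls back to a classic redirect-after-POST.
function respondTo(acceptHeader, result) {
  if ((acceptHeader || '').includes('application/json')) {
    return { type: 'json', body: JSON.stringify(result) };
  }
  return { type: 'redirect', location: '/thanks' };  // hypothetical page
}

// Express-style usage (sketch):
//   app.post('/comments', (req, res) => {
//     const result = saveComment(req.body);
//     const r = respondTo(req.headers.accept, result);
//     r.type === 'json' ? res.json(result) : res.redirect(r.location);
//   });
```

This covers forms and navigation; it obviously doesn't help with the image-editor class of app the parent describes, where JS *is* the application.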


I haven't written a single-page application, but when I published one of my websites last year, I wanted to make sure that as much of the functionality as possible was available for users with Javascript disabled or unavailable. My target audience includes users of cheap phones in developing countries, and while I have heard a lot of developers say "less than 1% of my users don't use Javascript", I believe the numbers are a lot higher in developing countries. I cannot remember the estimates off the top of my head, but I remember it being a significant number. While cheap smartphones should shrink the gap in the future, I didn't want to be contributing to a systemic bias against users in developing countries. It's important to note, though, that my website was a side-project with no profit-motive, so I didn't have a deadline to meet and spent plenty of time testing the site for those edge cases.


You should always know your audience. The problem, I feel, is that some recommendations say things like "always make apps that work without JavaScript". If you know that about 99% of your users have JavaScript, that may cost a lot for the last percent. It is also different for public-service apps from authorities, which people are required to use, versus a new product or service you want to sell, where there probably is (or might be) competition that does things differently.


Christian Heilmann makes a lot of thinking errors here:

1) The fact that a technology fails in the hands of amateurs or learners doesn't mean the technology is bad.

2) He assumes web developers don't think about the consequences of a JavaScript-only website. In my experience, that's not the case.

3) The fact that there's a lot of talk about JavaScript frameworks does not mean web developers are less interested in the end product. It means JavaScript and everything around it is in flux and improving every month. And the decision of which framework to pick is very important. It can mean the difference between a stalled or thriving end product one year later.


I still fail to see why we should not use Javascript for everything.

> "That is the great thing about web technology. It isn’t clean or well designed by a long shot — but it is extensible and it can learn from many products built with it."

That _was_ the great thing about web technology: 'worse is better', but it is no longer an aspirational goal. As a developer I need a well-designed web to build on, and these modern frameworks are doing exactly that.

> If we do everything client-side we do not only need to deliver innovative, new interfaces. We also need to replicate the already existing functionality the web gives us. A web site that takes too long makes the browser show a message that the site is not available and the user can re-try. When the site loads, I see a spinner. Every time we replace this with a client-side call, we need to do a lot of UX work to give the user exactly the same functionality.

There is no "Loading" spinner anymore on a well-built JS-heavy application. Good frameworks (React, Ember with FastBoot) use Javascript on the server to send a fully rendered HTML to the client. That works. And they rehydrate this HTML with JSON data and client-side logic so that any further JS interaction is smooth and can be done using the conveniences of the framework.

But if we were to follow the gist of what the article is trying to tell us, we should instead be rendering HTML using typical server-side technologies, and use progressive enhancement to add dynamism in the client. This is not a good solution: we have to duplicate rendering logic on both the client and server using two completely different stacks. It is a lot of cognitive load, needs duplication of effort, and is a maintenance nightmare.

A better solution is to simply render everything using Javascript, and remember that Javascript is no longer a client-side technology. Use the same Javascript to render contents both on the server and the client and rehydrate the rendered HTML transparently on the client.

I also disagree with the author's implied assertion that people use conveniences like SASS mainly because of incompetence in wielding CSS. The kinds of abstractions CSS provides are selectors, specificity, and the cascade. They are not the kind of abstractions one needs to build a maintainable and reusable body of code. We programmers know what those are: variables, modules, objects, control structures, expressions. SASS provides some of those missing pieces: variables, mixins, conditionals, and expressions. In fact, SASS pushes CSS closer to being a programming language, and that is a good thing.

The article closes on this note:

> A lot of our work on the web goes pear-shaped in maintenance. This is not a technology issue, but a training one. And this is where I am worried about the nearer future when maintainers need to know all the abstractions used in our products that promise to make maintenance much easier. It is tough enough to find people to hire to build and maintain the things we have now. The more abstraction we put in, the harder this will get.

I'm as close to an abstraction-hater as the next person. But there are good abstractions and bad ones. Mutating the DOM directly using spaghetti JavaScript? That is no abstraction at all. Once we understand the rendered view as a function of state, it makes sense to have abstractions based on that idea (like one-way or two-way data binding).
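The "view as a function of state" idea fits in a few lines. This is a sketch of a tiny one-way data-binding loop, not any particular framework's API; all names here are made up for illustration.

```javascript
// The entire view is recomputed from state on every change.
function view(state) {
  return '<p>Count: ' + state.count + '</p>';
}

// A minimal store: every state change re-renders and notifies a subscriber.
function createStore(initial, render) {
  let state = initial;
  render(view(state));                          // initial render
  return {
    set(patch) {
      state = Object.assign({}, state, patch);  // never mutate in place
      render(view(state));                      // re-render from scratch
    }
  };
}

// In a browser you might subscribe with: html => { root.innerHTML = html; }
```

Frameworks like React add efficient diffing on top, but the contract is the same: you never hand-mutate the DOM, you describe what it should look like for a given state.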

The need for training is not a point of contention. But if we assume competent developers working in good faith still find it hard to write maintainable code for the web, then something else must be broken. Having used these newfangled frameworks for a while now, I do think that is very much the case. Anyone should be able to put together reasonably written web apps without being a master of the craft, but it has so far been hard, not because of the people, but because of the tools.


I'm not an expert, but I know that server-side JS is a completely feasible approach here. However, it may truly not be the best tool for the job. Requiring client-side JS is a fundamental problem for all sorts of reasons, such as accessibility and more.

One issue is: plain old HTML and CSS are better tools for regular people to deal with to lower the barriers to wider participation in the web. Having to learn JS to make sense of a website is an extra burden.

To me, it comes down to: see all the complaints about brokenness when client-side JS fails or is off. Building things with client-side JS as a requirement is unacceptable. Now, ignore client-side. Can you actually make the case that server-side JS is superior to other frameworks and languages? It shouldn't be used just because it's the same language as the client-side enhancements. If it's not the best option server-side, go with the better options instead. But I don't know enough myself to make that judgment, I'm just talking about the way to think about it.


Complicating matters further is that "best" languages probably don't exist: or rather, the best option is likely a combination of apps and frameworks written in different languages. Plus there are lots of languages that compile to JavaScript now, which means you could support "isomorphic" apps and still use a language your team might be familiar with. This all just complicates things further.

I agree this is more about people not caring about all possible use cases because they focus on the performance of just one browser or just one platform to get a working app. It's hard to move from working proof-of-concept app to a fully-developed product that you can build on in future that is entirely multi-device and cross-browser. Then once the technical details are in place, ask yourself if you're building "the right product" and you really do need an entire team.

I'm not entirely sure I'd blame over-reliance on client-side programming however. To me that's a symptom not the cause. It'd be like blaming CSS Media Queries for the fact that most mobile sites purposefully hide functionality and are less useable on phones than their desktop counterparts. The problem isn't the technology in as much as it's a lack of understanding in how it can be best applied.

There is progress however. It used to be we had lots of websites putting text in images, using tables for layouts, or using JavaScript for interactive drop-downs. It's nice to know that as an industry, over a decade, we moved from 0% CSS to everyone using CSS and perhaps over-using it. I'm sure within the next decades we'll see an emphasis on all good practices, until the point where the web is again unrecognizable: just as today's developers probably wouldn't know where to start with spacer.gifs, it's possible in future we'd question anyone who doesn't serve pre-rendered HTML with modular JS serving for performance reasons. Either that, or SPDY makes it all unnecessary. I'm of two minds ;-)


So disclaimer I'm a little biased towards Node. It brings me tremendous philosophical zeal.

You're right about server-side JavaScript-- there are so many more useful features in other languages, and things are more expressive (I like Lisp and Groovy, for instance). But the beauty of server-side JS is that it's our esperanto. I'm interested in seeing a new generation of kids and senior citizens participating creatively on the web. But in order to do that, we need to standardize on a single programming language across use cases, disciplines, and preferences. And JavaScript, whatever you think of it, is the best opportunity we've had.


This is a fairly old argument. The single universal programming language that anyone could pick up and use led to BASIC on the low-end and PL/1 on the high-end. Neither is used much anymore. Neither appealed much to a wide audience of non-programmers, though BASIC came close. And neither were anything near universal even in their heyday.

If server-side JS is "our esperanto" that's damning with faint praise. Esperanto is not a universal language, it doesn't fill any actual need, and it hasn't encouraged more people to read or write.


You mean it's our "lingua franca" — the thing that isn't better in any way, but is just used universally enough that everyone ends up using it. I don't understand the metaphor to Esperanto.


> I still fail to see why we should not use Javascript for everything.

From time to time I come back to Gary Bernhardt's talk, The Birth and Death of JavaScript /pronounced as yah-wa-skript/, which, despite being very hilarious per se, is also very insightful.


This article should be required reading for every web developer / web development course etc.



