
Just use pure HTML/CSS and the web sites will fly by comparison.



For a single page load, sure. For subsequent page loads you're loading a lot more than necessary (a JS app can fetch just the content that's changed, and without a blank page in between), so a pure HTML and CSS solution is a great deal slower.

Plus, if your users have unreliable internet connections then a JS app can use a service worker to cache the entire app to work offline, and only load in new content when possible. An HTML page doesn't work at all in those circumstances.
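
As a minimal sketch (the cache name and asset list are illustrative, not from any particular site), a service worker along these lines pre-caches the app shell and answers from cache first:

  // sw.js -- minimal sketch; cache name and asset list are illustrative
  const CACHE = 'app-shell-v1';
  const ASSETS = ['/', '/app.js', '/styles.css'];

  self.addEventListener('install', (event) => {
    // Pre-cache the app shell so it is available offline
    event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
  });

  self.addEventListener('fetch', (event) => {
    // Answer from cache when possible, fall back to the network otherwise
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });

The page registers it with navigator.serviceWorker.register('/sw.js') and keeps working from the cached copy when the network drops.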

Sometimes JS does actually make a site better. It's not always unnecessary bloat.


There is a thing in HTML5 called Subresource Integrity (https://developer.mozilla.org/en-US/docs/Web/Security/Subres...).

It looks like this:

  <script src="https://example.com/example-framework.js"
          integrity="sha384-oqVuAfXRKap...."
          crossorigin="anonymous"></script>

I wonder if browsers could keep a cache keyed by those hashes: whenever an integrity hash matches an existing entry, the browser could take the JS from the cache. That would save huge amounts of bandwidth, and pages would be so much faster to load.
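
Something like this, purely hypothetically (contentCache and sriHashMatches are made-up names for browser internals, sketched in JS):

  // Hypothetical browser-internal lookup, keyed by the SRI hash, not the URL.
  // contentCache and sriHashMatches are made-up helpers for illustration.
  async function loadScript(url, integrity) {
    const hit = await contentCache.get(integrity);
    if (hit) return hit;                          // no network request at all
    const body = await (await fetch(url)).arrayBuffer();
    if (await sriHashMatches(body, integrity)) {  // verify before caching
      await contentCache.put(integrity, body);
    }
    return body;
  }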

Right now we're probably fetching the same version of jQuery hundreds of times a day from 20 different domains.


Currently, SRI is not enough for browsers to implement content-addressable storage as you describe, because it is subject to cache poisoning (see https://news.ycombinator.com/item?id=10311020). Basically, the browser can't know whether a script can actually be loaded from the claimed domain without requesting it, and this can be used to violate CSP.

Though it would be nice for the browser to cache it for domains that have delivered the script previously. It wouldn't be that different from a normal cache, except that the timestamp doesn't matter.


Thank you for the info! I imagined there must be some technical issue, since content-addressed hashes would otherwise make caching so easy. Anyway, at least using the hash within the same domain would save some requests, since browsers currently revalidate with the server just to confirm a cached file hasn't changed before reusing it.


It's not really bandwidth that causes the issue. JavaScript is just really slow, both to parse and to execute. According to Chrome dev tools, parsing jQuery takes 20 ms on my 4.4 GHz desktop CPU. Now imagine how long that takes on a mid-range smartphone. Then add in a dozen other JavaScript libraries, shims, and polyfills, and the site is barely usable.


jQuery is a large library. If it were modular and developers used only the code they need, it would be faster to parse.

But I doubt the bottleneck is the JS code itself. The problem is that web sites are not optimized (some frontend developers think that writing a CSS stylesheet for narrow screens is enough) and they include a lot of resources, including trackers, advertisements, and spying social network buttons I never click. Some of the widgets create an iframe (which is like a separate tab in your browser), load a separate copy of jQuery there, make AJAX requests, etc. Even worse, some advertisement networks create nested iframes two or three levels deep (for example, when a network doesn't have its own ads to show, it can put in a Google AdWords block). So when you load a page with 10 iframes, it loads the CPU like 10 separate tabs.

Decoding images is not free either, especially if it is a thousand-pixel-wide, heavily compressed JPEG or PNG image.

The real optimization would be cutting away (or making lazily loadable on user request) everything except the content. As website developers are not going to do that, it is better to do the optimization on the client side. I wish standard mobile browsers allowed disabling JS and web fonts (which are just a waste of bandwidth) and loading images only on request. Mobile networks usually have high latency, so reducing the number of requests needed to display a page could help a lot.


You can build a custom bundle of jQuery from source. But then it won't be shared with other websites. The solution is for browsers to cache common libraries rather than loading these resources per page.


> There is a thing in HTML5 called Subresource Integrity

It piqued my interest, but I was disappointed to discover that it's only supported by Gecko and Blink[1], not by Safari or IE/Edge. JavaScript is currently unavoidable for offline apps.

1. http://caniuse.com/#search=integrity


It’s a progressive enhancement. Browsers that don’t understand the integrity attribute will just load the JS regardless, but at least Firefox and Chrome will get a safer experience.


JS + offline apps? thanks for making me feel old.


There are ways to cache responses that have been defined in the HTTP standard since the first version. No fancy HTML5 features are required for that.
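
For example, ordinary ETag revalidation already lets a client skip re-downloading an unchanged script. A sketch with Node's built-in modules (the body is a placeholder):

  const http = require('http');
  const crypto = require('crypto');

  http.createServer((req, res) => {
    const body = '/* library code */';  // placeholder content
    const etag = '"' + crypto.createHash('sha256').update(body).digest('hex') + '"';

    if (req.headers['if-none-match'] === etag) {
      res.writeHead(304);               // client already has this exact version
      return res.end();
    }
    res.writeHead(200, { 'ETag': etag, 'Cache-Control': 'max-age=86400' });
    res.end(body);
  }).listen(8080);

(ETags arrived with HTTP/1.1; HTTP/1.0 already had Last-Modified / If-Modified-Since for the same purpose.)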

(And if you meant using hashes to share the cache across resources from different domains, there would probably be many misses, because every website can use a different library version, compress or bundle its libraries, etc.)


At least on HTTP/1 sites, most people are bundling their libraries together, so subresource integrity can't save you there.

I want to say there's a security concern with re-using these libraries, but I guess the possibility of a hash collision would be extremely small.


Unfortunately they won't implement that caching scheme, because it leaks the sites you have visited to attackers: a page can time how long a known library takes to "load" and infer from a cache hit that you've been to a site that uses it.


> if your users have unreliable internet connections then a JS app can use a service worker to cache the entire app to work offline,

It looks like an over-engineered system, and you have to preload the content while on WiFi. And by the way, do you know a reliable way to detect whether the device is really online, or whether there is a link but no packets are going through?

And every website is supposed to write its own service worker code.

I think it would be easier to implement a feature in the browser where the user can explicitly save some pages for offline reading, or view pages from the cache.

> Sometimes JS does actually make a site better.

For most sites it just adds unnecessary widgets (like spying share buttons) and advertisements. Especially on newspaper sites: most of them work better without JS.


All that stuff sounds really great if you have a lot of engineering resources and you're writing Gmail, but for 99% of the sites out there, the JS hacks that load just the deltas and whatnot get confused by packet loss, and I end up having to reload the entire page, gigantic JS hairball included. Or, even worse, the thing is so confused that I have to clear my cache and cookies to make it ever work again.

Simplicity has so many things in its favor.


Actually, for the first part there is this protocol called SDCH (https://en.wikipedia.org/wiki/SDCH) that allows a site owner to define a site-global compression dictionary; each resource is then delivered compressed as a delta against that dictionary. It's hard to deploy, but it works: LinkedIn saw an average of 24% additional compression.

For the second part, I wonder if browsers could display stale data with a warning saying so; that would solve many problems that happen all the time (refreshing a page after the website went down, ...).


Given my web development scars, I have become an advocate that for anything other than dynamic documents, the way to go is native.


pjax (whether the old familiar jquery-pjax or some more up-to-date implementation) is great for decorating simple HTML pages, replacing full page loads with just a main-content load. And since the fallback is a full page load, it degrades really gracefully.
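
With the classic jquery-pjax plugin the decoration is roughly one line (the selectors here are illustrative):

  // Clicks on links fetch only the new contents of #main over XHR and swap
  // it in; browsers without pushState just do a normal full page load.
  $(document).pjax('a', '#main');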


You're lucky if you've got a good enough site that you can get people to stick around for the second page load! Subsequent CSS calls will be cached for the majority anyway.


Client-side XSLT and HTTP caching address both of those issues. Yes, JS is another way to solve them, but it's not the only one.


HTML/CSS are slow. The true way is ".txt"

Edit: I mean this as a sort of lazy way of making a reductio. I don't think .txt is better than HTML/CSS for pages (and I hope that's obvious). I also don't think having no JS is a good idea.

I believe in progressive enhancement, and to a first approximation, I think that all websites have at least one feature that they could implement in JS that would be "a good thing".


I agree 90% with this; .txt files get most of the job done! ;-) The other 10% where I don't fully agree consists of hyperlinks; I need me some clickable hyperlinks. :-)


Browser option to make hyperlinks clickable when rendering text files? Or client-side browser rendering of text-based markdown.


Having browsers intelligently render `text/markdown` sounds like a great idea. And while we're waiting on the browser implementation, maybe we can find some sort of temporary workaround to send a markdown parser to the client?


Maybe we could just send the raw markdown anyway: it might not be pretty on all clients, but it should be _legible_ on all of them.

Or maybe we could send markdown if the user agent included text/markdown in the request's Accept header, and pipe it through a markdown->HTML filter otherwise.
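
A sketch of that negotiation with Express (markdownToHtml stands in for whichever parser you'd actually pipe it through):

  const express = require('express');
  const app = express();

  app.get('/article', (req, res) => {
    const md = '# Hello\n\nSome *markdown* content.';
    if (req.accepts(['text/html', 'text/markdown']) === 'text/markdown') {
      res.type('text/markdown').send(md);             // client asked for raw markdown
    } else {
      res.type('text/html').send(markdownToHtml(md)); // hypothetical markdown->HTML filter
    }
  });

  app.listen(3000);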

I would love to see some kind of native markdown support on the web.


A web search turned up this Firefox plugin: https://addons.mozilla.org/en-US/firefox/addon/markdown-view... Are there other good ones?


As long as I can make text blink, I'm happy.


No, hyperlinks failed us. It's all about single-page apps. Put all of your content and all of the content you would have linked to in your .txt. We need someone to build Reactxt.


Whoa whoa whoa, that's some fat-cat solution right there! Just use an HTTP header for your content and you're set.


The URL should contain all the information, so we don't have to make an HTTP request at all.


I think most news sites implemented this years ago.



