
As someone who has gone to great lengths to improve this by creating a smaller bootstrap [1], I have to say that at the end of the day it will probably not matter. CSS is hardly the bottleneck, and the time spent optimizing it would probably be better spent on:

- Optimizing your images.

- Concatenating, minifying, and gzipping the JS and CSS (see the sketch below).

- Optimizing JavaScript.

Likely in that order; mileage may vary. If those three are already in place, then it might make some difference to optimize the CSS.

First measure what you actually want to improve (it is not page size per se, it is loading time). Then make sure you optimize that and not something else.

[1] https://picnicss.com/
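
A rough sketch of the concatenate + minify + gzip step, assuming a Node toolchain with terser installed (the file layout and paths are made up for illustration):

  // Concatenate ./src/*.js, minify with terser, then gzip the result.
  const fs = require('fs');
  const path = require('path');
  const zlib = require('zlib');
  const { minify } = require('terser'); // npm i terser

  (async () => {
    const js = fs.readdirSync('./src')
      .filter(f => f.endsWith('.js'))
      .map(f => fs.readFileSync(path.join('./src', f), 'utf8'))
      .join('\n');                        // concatenate

    const { code } = await minify(js);    // minify
    const gzipped = zlib.gzipSync(code);  // gzip

    fs.mkdirSync('./dist', { recursive: true });
    fs.writeFileSync('./dist/bundle.min.js', code);
    fs.writeFileSync('./dist/bundle.min.js.gz', gzipped);
    console.log('raw:', js.length, 'minified:', code.length, 'gzipped:', gzipped.length);
  })();

In practice you'd usually let the web server do the gzip step on the fly (or serve the pre-compressed file), but the idea is the same; CSS goes through an equivalent pipeline with a CSS minifier.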




Agreed. CSS optimization is a drop in the ocean compared to the shitload of third-party scripts the marketing team will load through GTM anyway. Readability is the key; the performance improvement is a red herring.


Reducing the overall size of CSS by cutting repetition isn't even that relevant given how well gzip handles that repetition. Unless your CSS is freakishly huge, selector count is not even likely to have much perceptible impact.
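
A quick way to see how much of the repetition gzip absorbs, with nothing but Node's built-in zlib (the CSS rule here is just an example):

  const zlib = require('zlib');

  const rule = '.card { display: flex; padding: 16px; color: #333; }\n';
  const repetitive = rule.repeat(500);                         // the same rule 500 times
  const unique = Array.from({ length: 500 },
    (_, i) => rule.replace('.card', '.card-' + i)).join('');   // 500 distinct selectors

  for (const [name, css] of [['repetitive', repetitive], ['unique', unique]]) {
    console.log(name, 'raw:', css.length, 'gzipped:', zlib.gzipSync(css).length);
  }

The repeated version compresses down to a tiny fraction of its raw size, which is why raw byte counts overstate the cost of repetition.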

I'm completely with you on the third-party stuff too. If I run uBlock on the relatively JS-heavy React site I work on, load times come in at sub-second. I reckon I could get that down to half a second by hammering the code and applying some smoke and mirrors, but there are obviously diminishing returns on client-side optimisation.

Disable uBlock and that load time triples - likely due in no small part to the three (3!!!) different versions of jQuery the adverts & misc third-party scripts are currently loading.

I can't help but wonder if the secret to fixing the internet for the less tech-savvy general public isn't messing about with stuff like AWP, offline-first, whatever, but forcing code going through DFP and GTM to respect the browser and the host site a little more.


Oh yeah, I also wrote about how GZIP handles repetition: https://medium.com/@fpresencia/understanding-gzip-size-836c7...


That's a pretty succinct summary of the gzip thing.

It's still worth mentioning the cost of executing the code once it's decompressed - so properly scoping a var is always better value for money than global.ancestor.foo.bar and the like.
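
As a toy illustration of the scoping point (window.app and its properties are made up):

  // Re-resolving a deep global on every iteration...
  function paintSlow(nodes) {
    for (const node of nodes) {
      node.style.color = window.app.config.theme.colors.primary;
    }
  }

  // ...versus resolving it once into a locally scoped variable.
  function paintFast(nodes) {
    const primary = window.app.config.theme.colors.primary;
    for (const node of nodes) {
      node.style.color = primary;
    }
  }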


Could you not just run uncss [0] on the framework and your own CSS? Or does it get tedious to exclude classes that are used by JS? I've only ever used it (and other optimizations) for static sites in order to get 100 points on PageSpeed.

[0] https://github.com/giakki/uncss
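
For reference, the Node API looks roughly like this; the ignore list is how you keep selectors that only ever appear when JS adds them (the selectors below are hypothetical):

  const fs = require('fs');
  const uncss = require('uncss'); // npm i uncss

  uncss(['index.html', 'about.html'], {
    ignore: ['.is-open', '.has-error', /^\.js-/] // classes only toggled by JS
  }, function (error, output) {
    if (error) throw error;
    fs.writeFileSync('styles.clean.css', output);
  });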


It might work for static sites, but as soon as you start to get fancy and use JS+CSS correctly [tm] it's more difficult because, as you say, there will be state classes toggled by JS that change the style but weren't previously displayed.

That said, my point remains: the same time invested in learning, using, and tuning uncss would probably be better spent on any of my points or some of the others suggested here. AFTER that, sure! I got the whole page for the library I linked down to 10kb in TOTAL, including images [in SVG], for a competition. It has gained a bit since then, though, mainly from analytics.


In terms of performance, you’re probably right.

In terms of developer productivity, keeping stylesheets uncluttered and easier to maintain is a good idea anyway.


If you are working on a big project then it is important to keep your CSS maintainable.

Quoting the article:

"CSS with this much repetition is also expensive: It’s expensive to create, and it’s particularly expensive to maintain (not even variables will save those who, on average, repeat each declaration nine times, like the Engadget and TED teams)."


This. CSS isn't usually what slows your page down, unless your framework is absolutely gigantic to a ridiculous degree.

In fact, I'd go further and say in a lot of cases it's not even something you're hosting.

It's what third party resources you're relying on. Got ads? That will be the biggest bottleneck here, since ad networks load hundreds of KBs worth of scripts and images that most people block or ignore anyway.

Same with third party social networks and media sites. That YouTube video or embedded tweet is probably using a lot more of your user's bandwidth than anything else.

Not sure what the solution would be there, especially if you're reliant on the ad money or need to use videos or third party media to illustrate your article.
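
For the embed side of it, one partial mitigation is the click-to-load facade: show a static thumbnail and only inject the real iframe on demand. A sketch (VIDEO_ID is a placeholder):

  <div class="yt-facade" data-video-id="VIDEO_ID">
    <img src="https://i.ytimg.com/vi/VIDEO_ID/hqdefault.jpg" alt="Video preview">
  </div>
  <script>
    document.querySelectorAll('.yt-facade').forEach(function (el) {
      el.addEventListener('click', function () {
        var iframe = document.createElement('iframe');
        iframe.src = 'https://www.youtube.com/embed/' + el.dataset.videoId + '?autoplay=1';
        iframe.allow = 'autoplay; encrypted-media';
        el.replaceWith(iframe);
      });
    });
  </script>

The ad-money side is harder; this only helps with media you control.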


I would throw in there: "Use CDNs as much as possible."


Nah. With http2 you're seriously better off having everything in one place. It avoids the overhead of multiple TCP connections and (more importantly) of extra TLS handshakes.

It might not always be true, but in tests I've done on my company's sites the http2 load is always faster with no CDN, even from far-away locations.
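
If anyone wants to reproduce that kind of comparison, the Resource Timing API in the browser console gives per-origin connection numbers (cross-origin entries only expose these timings if the server sends Timing-Allow-Origin):

  performance.getEntriesByType('resource').forEach(function (r) {
    var tls = r.secureConnectionStart > 0 ? r.connectEnd - r.secureConnectionStart : 0;
    console.log(new URL(r.name).origin,
      'dns:', (r.domainLookupEnd - r.domainLookupStart).toFixed(1) + 'ms',
      'tcp+tls:', (r.connectEnd - r.connectStart).toFixed(1) + 'ms',
      'tls:', tls.toFixed(1) + 'ms',
      'total:', (r.responseEnd - r.startTime).toFixed(1) + 'ms');
  });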


With plain CDNs, sure. With reverse-proxy caches like Cloudflare, the CDN can be the HTTP2 endpoint.


I would bet it still adds latency unless your site is completely static. For anything that needs to hit your servers, you->cloudflare->server is still slower than you->server. On the round trip that's two extra hops, so maybe 100ms.

The golden age of CDNs is over. Their primary advantage was being able to open up many parallel connections using multiple subdomains, plus the ability to cache across domains.

With http2, multiple connections have become a bad thing, and https is mandatory, so CDN caching is irrelevant.

On http2 and a tiny AWS instance I can get sub-200ms page loads from anywhere in the US if I host in the Midwest. The gain from avoiding multiple HTTP connections, TCP slow starts, and TLS negotiations negates all the speed advantages of using a CDN. Cloudflare might be worthwhile if you get massive traffic, but a small Nginx instance can handle hundreds of megabytes per second of traffic.


I just don't feel comfortable using them in certain situations, like SaaS web apps. What if the CDN goes down, for any reason? Since I depended on it for a core framework (say, React), my entire site isn't going to load. Cue customer e-mails.

That being said, for less mission-critical projects, they do have their place.


CDNs aren't an either-or thing. It's common to fall back to a copy on your server if the CDN version doesn't load.
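
The usual pattern, sketched here with React as the example (exact URLs and globals depend on the library and version you use):

  <script src="https://unpkg.com/react@16/umd/react.production.min.js"></script>
  <script>
    // If the CDN copy failed to load, fall back to a self-hosted copy.
    window.React || document.write('<script src="/vendor/react.production.min.js"><\/script>');
  </script>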

CDNs involve a big set of tradeoff choices you have to make; they don't make sense for every circumstance. I don't think I'd ever throw "Use CDNs as much as possible" into an optimization advice list.


Yeah, keep the dynamic stuff on your web server and the static stuff on the CDN. Part of the reason I say "as much as possible" is that I've seen significant conversion improvements from enabling a CDN.


Typically the CDN is used for all static assets and does not include the HTML. In the event that your CDN is not working, you can always fall back on your own static.example.server.

CDNs are one of those cases where the cloud vastly outperforms whatever kind of server you have.


I would add to this: use a dumb but performant CDN. Cloudflare's Rocket Loader has rewritten working JavaScript into broken JavaScript for me. If I use Cloudflare, I turn off all the gizmos to be sure.


Ah sure, but that'll depend a lot more on the type of website. For small websites I normally include the main library through a CDN, or just inline it in <style> tags if the total HTML+CSS size is under 14kb. For medium-large websites I totally agree.
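
The 14kb figure is roughly one initial TCP congestion window, i.e. what fits in the first round trip. A build-time sketch of the inlining idea (file names are made up):

  const fs = require('fs');

  const html = fs.readFileSync('index.html', 'utf8');
  const css  = fs.readFileSync('picnic.min.css', 'utf8');

  if (Buffer.byteLength(html) + Buffer.byteLength(css) < 14 * 1024) {
    // Small enough: inline the CSS so the page needs no extra request.
    fs.writeFileSync('index.out.html', html.replace(
      '<link rel="stylesheet" href="picnic.min.css">',
      '<style>' + css + '</style>'
    ));
  } else {
    console.log('Over ~14kb combined; keep the stylesheet external (or on a CDN).');
  }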


This. Optimizing images using WebP seems to be a pretty cheap way to save lots and lots of bytes. I wish Apple had support for it, though.
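
Until support lands everywhere, the usual workaround is the <picture> element with a fallback image (file names are placeholders):

  <picture>
    <source srcset="photo.webp" type="image/webp">
    <img src="photo.jpg" alt="Photo">
  </picture>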



