I am a fast webpage (varvy.com)
621 points by capocannoniere on Sept 7, 2016 | 281 comments



I hate to be negative, but what really is the point of this? That a simple webpage without any content can be fast? Of course it can.

Is it desirable to inline your CSS, "like a boss?" Maybe if you have one single web page. What if you have dynamic content and your users intend to browse more than one page? With externalized CSS, that is all cached.

Same with images. If I'm building a web application, I certainly do not want inlined images. I want those on a CDN, cached, and I want the page to load before the images.

Not only is this not particularly useful advice, it's bad advice.


This guy's advice is exactly what Google advises you to do, and exactly what Google does.

You say this website is only fast because it's "without any content". If there's no content then tell me how it communicated its point so clearly. If it's inherently fast then tell me why the same thing posted to Medium is so slow.

A hallmark of a great solution is that people who see it decide the problem must not have been very hard.

http://www.csd.uwo.ca/~magi/personal/humour/Computer_Audienc...


Google doesn't do this on their more full-featured apps like Gmail or Google Docs. Docs ships a bunch of files and stylesheets.

Search works well because they have relatively few features to support by default (for things like the calculator I bet they ship that in with the response).


I think it's implied that this advice is for content pages, for the "old web", and not for single-page apps.


HN is neither of the two and IMO represents where people spend most of their time.

I think Google's CSS embedding is terrible advice for the meaningful web, but logical advice for adwords landing pages or sites with content so bad or sparse you won't be navigating them.


HN has a purposely minimalist stylesheet / layout, with NO icons (except the one Y on the upper left corner), NO images or other media, NO fanciful animations.

Not all websites can do without all of that - imagine a photography site, or an e-commerce site, etc. without pictures?

I agree though that this is great advice for landing pages; load times are probably among the main reasons for bounces.


>Not all websites can do without all of that - imagine a photography site, or an e-commerce site, etc. without pictures?

No, but I can imagine a photography site, or an e-commerce site without useless non-content pictures and shit-loads of CSS and JS.


Yes, and HN is in violation of Google's performance guidelines for putting those sensible rules in a sensible place.

> Not all websites can do without all of that - imagine a photography site, or an e-commerce site, etc. without pictures?

How does HN's CSS enforce a ban on img elements in pages pointing to a photo's canonical location? Or preclude putting your standard frames into it?

Imagine a photography site where no photo link is shared across any pages (but 90-page base64-encoded URLs are repeated randomly), an ecommerce site where a product is shown in a strange new light at every step in the checkout process using a mishmash of entirely different CSS. Google's advice is approving the most idiotic behavior on sites that are barely keeping their head above water in terms of technical understanding, letting them hold onto strange ideas because they are "fast."


> I think Google's CSS embedding is terrible advice for the meaningful web

HTTP/2 allows CSS to be specified in the HTTP header. However, Google AMP doesn't support it yet.


Please don't consider this an attack, but I tend to believe that Google is not in possession of the absolute truth. They even contradict themselves regularly.


With the affiliate link to the VPS host, it's essentially just a marketing page.


> I hate to be negative, but what really is the point of this?

To shame people who build slow, bloated websites.

Saying that no bloat equates to no content is part of the problem.


This kind of attitude is actually the problem.

There is no silver bullet solution to all problems when building webpages, and if you think this guy's advice is catch-all for everyone and that "shaming" people who don't follow those rules is good, then you're a bad developer.

Someone else said it elsewhere in this thread, but different contexts need different solutions based on determined use cases and needs.


There is no silver bullet, no technology that is good everywhere, but there are things that are bad everywhere, like unnecessary reliance on JavaScript, code and resource bloat and so on.

Being diplomatic about it and not assuming everyone who deployed a Joomla installation with a ton of bad plugins because it HAS TO WORK NOW is an idiot is OK. But wrapping bad engineering practices in "different contexts need different solutions based on determined use cases and needs" is a very different story.

Obviously, different contexts and use cases need different solutions. That's no excuse to pick the bad ones, nor to pretend they're not really, really bad.


It really depends on your website's usage. I am maintaining a free database of chemical properties. Usually people land there from Google and look at a single page[0] before leaving.

To make this single-page experience as good as possible, I pack the CSS directly into the HTML. At first I also inlined the image of the molecule, but it was not good for SEO, because I have quite a few visitors coming from searches for drawings of molecules. If Google were smart enough to index inlined images, I would switch back to using them.

All these optimisation techniques are only good if the context is the right context. That is, you need to pay attention to the assumptions used to make these rules.

[0]: https://www.chemeo.com/cid/45-400-7/Octane


I have come across this guy's site before and he does offer some useful tips. Nothing you won't find elsewhere, but presented in a clear and concise manner that almost anyone can understand.

There are lots of people self-managing small sites that don't have a clue about any of this stuff - it's a decent resource for such people - nothing more.


> what really is the point of this?

An affiliate link in the last paragraph.


We know that if speed is your goal there are tradeoffs you will have to deal with, such as inlined images and CSS that are harder to maintain. This is a well-known architectural tradeoff (along with usability vs. security). If you are willing to pay for your load speed in the form of increased page complexity and reduced scalability, this is a legitimate option.


>I hate to be negative, but what really is the point of this? That a simple webpage without any content can be fast? Of course it can.

No, that most "content" making webpages slow is useless BS bloat.


I suppose if we are in control of the backend database that our content is delivered from, we can rebuild the page with inlined images and CSS whenever the content that drives it changes, so it is possible to build quick pages.

The main driving point is probably the fact that there is sometimes no need to build pages that rely on all manner of JavaScript etc.
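
A minimal sketch of that kind of build step in Python, with hypothetical file names (the stylesheet gets inlined whenever the content is regenerated, so the served page is a single self-contained file):

    import pathlib

    css = pathlib.Path("site.css").read_text()
    article = pathlib.Path("article.html").read_text()

    # Rebuild the page whenever the content changes, so the served file is
    # one self-contained document with the stylesheet inlined.
    page = (
        "<!doctype html><html><head><style>%s</style></head>"
        "<body>%s</body></html>" % (css, article)
    )

    pathlib.Path("public").mkdir(exist_ok=True)
    pathlib.Path("public/article.html").write_text(page)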


After some benchmarking, we did something of this sort on a media site. A middle-fold path works best for us. I've yet to see any other media site do this (yes, we are content-heavy, but we also load fast). Admittedly, the only thing slowing down the page load is the ads, but we don't bombard people with those either. Link: http://www.sportskeeda.com/?ref=san


It also seems somewhat antiquated. As service workers come online we can... do a lot more with a more naive <link> method.


It is relevant for resource limited IoT devices. It can be a struggle to get embedded web sites running fast using modern frameworks. Throw in TLS with large keys and you enter a whole new world of slow where it pays to shave off every byte possible.


Not to mention the improvements in http2.


To be a superior, fast website. Your long comments will only slow it dowwwwnnnn.


The guy says he's not an idiot, then brags about spending $30 per month on a VPS (idiot), for a single-page static HTML website with all inlined code (idiot+1).

It's not being negative to point out the glaring flaws in a person's statements. My assumption is the entire thing is an advertisement for that hosting service.


Also, whether the VPS has an SSD or not is totally irrelevant; if you really were serving a single page it would be cached in the memory of your webserver.

(Or better yet, serve the thing off S3 and let Amazon be your CDN.)


I'd use github pages.


Hell, I'd just use my Dropbox account.


Dropbox is disabling html hosting on October 3rd


Ah, shame. I imagine it's for security reasons, though, so I can't blame them.


Exactly, it is not a good argument. I think they may not know better yet. We all start somewhere.


I'm not saying a VPS is not appropriate for a static HTML web page, but there are perfectly capable VPSes available for $3.50 to $5.

I'm not in agreement with many of the commenters regarding CDNs. I don't believe in a free lunch. Free software is one thing, but CDNs require infrastructure, which incur costs. Somewhere, the people offering those services expect to make those costs back. You'll either pay for it directly, or you'll pay for leeching off someone else's bill in karma. For a very tiny, low-traffic, low-bandwidth website, I think not using a CDN is perfectly reasonable.


Economies of scale...


Get a decent shared host.

Just because Godaddy sucks doesn't mean that all shared hosting sucks. NearlyFreeSpeech.net is fine for most people, or Amazon S3 if you're into AWS / webgui stuff.

If you're actually using a decent amount of bandwidth (ie: image hosting), something like Hawkhost.com would be good. Just gotta stay away from EIG (https://en.wikipedia.org/wiki/Endurance_International_Group), which is a conglomerate that's buying up all the shared hosts and making them crappy.

Bam. Now you have economies of scale AND far cheaper hosting than any VPS.


Keeping in mind that you can get a VPS for as low as $3.50 per month, I don't see that there is ever a reason to use shared hosting, and there are several reasons not to; performance is only one of them.

If your website is that unimportant, then you could probably get by with any free hosting service where your page would be yourpage.serviceprovider.tld


Where does it say that the VPS is used only for that one page?


It doesn't. It isn't. It appears that there are other projects hosted on the same site. Btw, the pages load quickly.


the guy says he's not an idiot then brags about spending $30 per month on a VPS (idiot),

You can always go with cheap, fully virtualized GNU/Linux server, or you can go with a virtual, true UNIX server running at the speed of bare metal[1].

Your choice, but quality, correctness of operation, data integrity and performance still cost something. If you don't care about any of those things, fork out $5 for the alternative and call it a day.

[1] http://www.joyent.com/


Not disrespecting your opinion here, but $5 Digital Ocean droplets[1] have been working quite well for me as well as for nearly two dozen of my clients, spanning the last several years (taking into account all four important parameters you've specified: quality, correctness of operation, data integrity and performance).

My (limited) experience with Vultr[2] has also been fairly satisfactory.

[1] https://www.digitalocean.com/ [2] https://www.vultr.com/pricing/


As far as I am aware, Digital Ocean is running on Linux, and because Linux requires turning off memory overcommit and integrating ZFS on Linux, at a minimum, there is no correctness of operation. As there is no fault management architecture (like fmadm), and support for debugging the kernel and binaries is incomplete (no mdb, no kernel debug mode, no kdb, incomplete DWARF support), there really can be no assertion about correctness of operation.

Correctness of operation does not only refer to end-to-end data integrity, but also to adequate capability to diagnose and inspect system state, in addition to being able to deliver correct output in face of severe software or hardware failures. Linux is out as far as all of those.

In other words, if you want JustWorks(SM), rock solid substrate for (web)applications, anybody not running on FreeBSD, OpenBSD, or some illumos derivative like SmartOS is out, at least for me. Perhaps your deployment is different, but I don't want to have to wake up in the middle of the night. I want for the OS to continue working correctly even if hardware underneath is busted, so I can deal with it on my own schedule, and not that of managers'.


As a reality check, we're talking simple (and even not-so-simple) web site/app hosting options here (and not some NASA/space/military/healthcare grade requirements).

From my perspective (as a freelance web tech/dev professional who routinely manages close to two dozen hosting accounts for clients), what you're saying above comes very very close to driving a nail with a sledgehammer.


That would only hold true if your or my time were worthless, or had a very low valuation.

Hosting clientele is notoriously high maintenance; the more technology ignorant, the higher the maintenance in terms of support, and the more fallout one has to deal with when there is downtime.

My time is expensive. My free time is exorbitantly expensive. Therefore, when I pick a solution and decide to deploy on it, it has to be as much "fire-and-forget" as is possible. Picking a bulletproof substrate to offer my services on also increases the time available to provide higher quality service to my clients: since my time dealing with basic infrastructure is reduced as much as possible, I have more of it to spend on providing better service and adding value, thereby increasing the client retention rate. Because of the economy involved in this, and especially considering how razor thin hosting margins are, I feel that the nail with a sledgehammer metaphor is inapplicable to this scenario.


It's also a security issue and you should set up CSP to prohibit inline CSS. See a random Google search result http://dontkry.com/posts/code/disable-inline-styles.html or, like I did, read "The Tangled Web"

Edit: that's obviously a non-issue in this particular case since everything is static. But as a best practice this needs to be considered and inline CSS doesn't make you a boss.
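
For reference, a minimal sketch of what sending such a policy looks like, using Python's standard http.server; the header value is the part that matters, and any server that can set response headers would do:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body><p>hello</p></body></html>"
            self.send_response(200)
            # style-src without 'unsafe-inline' makes the browser ignore
            # inline <style> blocks and style="" attributes
            self.send_header("Content-Security-Policy",
                             "default-src 'self'; style-src 'self'")
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8000), Handler).serve_forever()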


You are completely wrong. Inline styles on a static page is not a security issue. If you read that blog post you linked to, you would know this.

CSP will do absolutely nothing for a single static html page containing all assets it needs as inline.


Yes, of course that's correct. And I wouldn't have felt it necessary to point out that it can be an issue if he hadn't written that he is inlining it "like a boss". That makes it sound like it's some awesome best practice we all should be adhering to.


So if I'm reading that page correctly, the basic gist of the claim is:

If we only do a half assed job of sanitising user input by attempting to blacklist whatever javascript we can think of, we'll still be open to XSS attacks from people smarter than us who put css into our user supplied data, so the answer is to prohibit inline CSS - not to properly sanitise user supplied data.

I think there are better pieces of security advice around than that...


Why not do both and be safe if your sanitization has a bug?


I still don't understand what it accomplishes. Why does inline matter?

The vulnerability means they can inject arbitrary markup including <link> or <script> that load offsite sources.

You can use CSP to whitelist allowed offsite domains. But if you're not careful, "you never know" and "you might as well" are more likely to waste your time chasing low value things.

For instance, inline CSS is valuable as an intermittent developer convenience, and disabling it takes that away while protecting you from an unlikely event.

Also, you generally should be escaping-by-default and not sanitizing. A templating system should escape by default and make it obvious when you opt out.


I still don't understand what it accomplishes. Why does inline matter?

In the worst case scenario, where the server doesn't have all the blocks of the file cached in random access memory, the server can fetch the single, inlined file from local storage with fewer I/O operations, far faster than one's web browser can fetch multiple files over the network. This means that latency is lowered, and thus delivery time is accelerated.


Right, I meant in regards to this comment chain where "inline CSS is bad if you have an XSS vulnerability."


So, the host doesn't escape anything the users provide, and is more concerned about CSS than JS?

And then their site broke as soon as I clicked a link


Just to point out, there's no particular reason to host a page like this on a VPS at all. You could just throw it on S3. Even better, you could put it behind a CDN like Cloudfront and the total cost would be a dollar or two a month, not $25+ and it would be significantly faster.


> You could just throw it on S3. Even better, you could put it behind a CDN like Cloudfront and the total cost would be a dollar or two a month, not $25+ and it would be significantly faster.

I apologize for quibbling (really, I do! but I'm an infrastructure guy! This is my bag!). Yes, host it on S3, but ALWAYS put a CDN in front of S3 with long cache times (even just Cloudfront works). S3 can sporadically take hundreds of milliseconds to complete a request, and you know, AWS bandwidth is expensive (and CDN invalidation is damn near free). And you can use your own SSL cert at the CDN usually instead of relying on AWS' "s3.amazonaws.com" SSL cert (although you will still rely on that S3 SSL cert for CDN->S3 origin connections; c'est la vie).

EDIT: It also appears Cloudfront supports HTTP/2 as of today. Hurray!

https://aws.amazon.com/about-aws/whats-new/2016/09/amazon-cl...


You could also host it on App Engine for free and automatically use Google's CDN.


For free?

GAE doesn't charge for their object store nor their CDN service?


There is a free tier. It looks like it's free up to 1GB per day [1].

On my home account I've only used it for demo projects that are scarcely used, but it's fine for that.

[1] https://cloud.google.com/appengine/docs/quotas


Does it 'stop' if you reach the quota or do you get a bill?


If you don't put in your credit card info, it stops. Otherwise you get a bill.


I don't have any personal experience, but as I understand it, that depends on whether you enabled billing.


Don't you have to provide a CC to sign up for even the free tier? (That's how it was when I was trying it out a couple of years back.) It was really cute too, Google would send me a $0.00 invoice each month.


I don't remember. I've been using it since the beginning and I don't get a bill.

I don't know how up to date this page is, but here's how it used to work: https://sites.google.com/site/gdevelopercodelabs/app-engine/...


IIRC, the CDN isn't charged separately, and the object store is available (within a certain quota) within the GAE free tier, so, within certain limits, yes, "for free".


Cloudfront requires JavaScript, which isn't always acceptable.

Edit: Nevermind, confused Cloudfront with Cloudflare. Thanks for the correction, toomuchtodo.


Cloudflare does, Cloudfront (AWS' CDN) does not.

EDIT: cm3: I didn't mean to call you out, just wanted my reply in here for historical context. It's very easy to confuse the two.


No, thanks for correcting me. The names are similar enough and the purpose is too, so it's easy to confuse. I mean, if you hadn't commented, I would have wondered why it got downvoted.


This is untrue for CloudFlare as well. Individual sites may choose to require JavaScript but you can use the service just fine without it.


How long S3 takes to fulfil a request does not affect bandwidth.

I personally went the other way. I still use CloudFront as a CDN but made it cache items for short periods of time. Invalidation was too much of a hassle, and it took too long. Admittedly, I should use hashes or something of the sort to keep my items versioned, but laziness always gets in the way.


> How long S3 takes to fulfil a request does not affect bandwidth.

Correct. Did I insinuate that? I apologize if I did. They are two distinct issues, both of which a CDN prevents.

1. S3 outbound bandwidth is expensive. Use it only as an object store of last resort. Your CDN bandwidth is orders of magnitude cheaper (don't believe me, go compare the pricing).

2. S3 response times can vary wildly at times. Use a CDN to avoid this.

And of course feel free to use a cache key instead of invalidating via an API if ~15 minutes is too long to wait for fresh content to appear at the edges.

PS Don't apologize for laziness. When directed appropriately, it's a most productive force.


My mistake! I assumed you meant the time it takes to fulfil a request and the bandwidth cost were related because they were mentioned in the same sentence.

Agreed on all other counts.


I don't mind the quibbling, as I've been trying to figure this out myself.

From what I see, S3 is $.09/GB to the Internet, and Cloudfront is $.085/GB to NA/Europe...and $.14/GB for Asia. How is this cheaper?


Cloudfront is slightly cheaper than S3, you'd want to look at non-AWS CDNs to get near 1-5 cents/GB for outbound (Cloudflare, Cachefly, etc).


Yeah, I noticed that was weird too. The creator speaks a lot about HTML optimizations, but one of the most widely used methods of increasing page speed is global CDN distribution.

The SSD is really meaningless in this context. The website is so small that it will be loaded almost 100% from the filesystem cache. As long as it has more than 512 MB of ram...

If I wanted my website to load incredibly fast, I would absolutely not put it on an obscure VPS. Not that there's anything inherently wrong with it, but it's generally not going to make your site faster.


the whole thing was obv. created with the intent to promote his affiliate link


Meaningless-weaningless, but that site loaded instantly on my iphone 4-without-s and did not lag, unlike all others (except hn ofc). No CDN can hide modern js freezotrons.


Regardless of where on this globe you put your VPS, someone will be accessing it at 1000 ms latency. It doesn't make sense to optimize the browser page load speed to 10 ms, and forget that it takes 600 ms to fetch the data from Asia.


That's because all those other sites are poorly built. It's not because the article's site is a brilliant example of "doing it right".

Putting bare text on the web is always going to be fast. So what. If he presented a real full-featured website with the bells and whistles that people expect today, and made it operate that fast, he'd have something to show. Instead he presents polished garbage.


Wait - what precisely do users demand from your website today? Usually I'm happy to find a website which loads quickly, is clean, and steers me in the direction of whatever I'm trying to find, personally.


My expertise is not marketing so I don't feel I could adequately answer that question but there are plenty of focus group studies which show what sort of UX works best. It's a safe bet that most of the big corporations who are already focus-grouping everything they publish, such as Disney for example, are also using focus groups to design their websites.


They're using split testing, conversion rate optimisation, and bizdev to design their sites. When something appears on a corporate website, it's there to benefit someone in the corporation, not the users (although it might benefit them as a side effect).


His site is at least a full-featured article (you can load, scroll and read it, yay). Most sites I open are article sites, and they are rarely full-featured articles, because load/scroll features aren't easily accessible.


How do you prevent S3 from slashdotting your wallet if your site suddenly gets really popular?


You don't really.

I guess everyone's needs are different, but for me, hardly anyone reads anything I write. If I have a sudden surge in interest in something I wrote, last thing I want is to cut off access to it. Would rather keep paying the infinitesimal amount per page view to keep people reading it.


You can set up jobs that fire when CloudWatch alerts go off noting that your bill is going up or that your hit rate for certain objects is going way up. I think there's a way to set up billing such that you can't exceed a certain amount in a month, but that's a weird situation, trying to hard-stop charges without deleting everything in your account.


You can use a CDN like CoralCDN.


The clever internet marketer who set up the page missed a trick!

Instead of the affiliate link to some host no one has heard of, he should have affiliate-linked to AWS (if possible) and a CDN. Then he could have added that as a strong feature that helps make the page so fast.


Has anyone dealt with a DDoS attack on a static hosting (S3 + Cloudfront) setup?

I sometimes fear that if something like this happens, the bandwidth bill will be too much to handle for small personal projects. Also, it's a pain that AWS doesn't allow one to set hard limits on cloud spending. Yes, they allow you to set up some billing alarms, but no hard limits. There's no guarantee that, no matter what, the month's hosting bill will not exceed $10 for this project.

For small personal projects a tiny VPS seems to be safer from this angle. At max a DDoS will cripple the VPS but the hosting bill will stay the same.

If you have been through this, did you get any discounts from AWS for resources used during the DDoS attack, or did you have to pay the full amount?


GitHub Pages is a 100% free alternative


Or you can use free hosting: Firebase, GitHub Pages, Dropbox.


Or even better, Surge.sh.

- A very happy customer.


I love Surge but they host your site on Digital Ocean... an SSD VPS.


Been using Surge.sh recently and it's super awesome, but I'm not sure I'd use it for production. Care to elaborate?


> "I am not on a shared host, I am hosted on a VPS"

Hate to break it to you, but your virtual private server (VPS) is likely sharing a bare-metal server with other VPS. ;-)

Also, you can look into content delivery networks (aka CDNs), which will most likely deliver this page to clients faster than your VPS, especially when you consider your VPS is in Dallas and CDNs have nodes located around the world.


> Hate to break it to you, but your virtual private server (VPS) is likely sharing a bare-metal server with other VPS. ;-)

Likely? Isn't that the point of a VPS?


You can reserve entire machines, even using a VPS. Heck, you can even luck out and be the first VPS on a newly provisioned host.

Chances of either, slim. Still I try not to assume when I don't have the data.


Sure, you might be sharing, but the point is that you don't generally have to care.

It's all virtualized and cloudy.


VPS performance varies wildly from provider to provider though.


Yeah.

But its also the point of shared hosting, which the site hates on.

A good shared host (ex: HawkHost's semi-dedicated) will run circles around a Lowendbox VPS.


I think he is contrasting VPS service with shared hosting services like GoDaddy or one of the many cPanel providers. Unless a VPS is using OpenVZ, the box isn't overprovisioned. A cPanel host is usually very overprovisioned. 500 customers per server was not uncommon 10 years ago.


With a Linux host, you can oversubscribe KVM guests, and I'm starting to see it more frequently. Still haven't seen it as bad as shared hosts, however.

Kernel Same-Page Merging allows you to de-dup common pages across virtual hosts (such as kernel) for example. http://www.linux-kvm.org/page/KSM


>Hate to break it to you, but

I think the pedantry is unnecessary here. "Shared hosting" colloquially refers to multiple websites sharing a single web server, database, and PHP process. Everything is set up for you by the provider, you simply supply the files. What "shared hosting" does NOT usually refer to are containers, virtual machines, bare-metal, IaaS deployment environments, or anything like that.


Absolutely. Yeah that is kind of silly.


Not that wickedly fast unless you're really near Dallas where the server is:

https://performance.sucuri.net/domain/varvy.com

Hosting on a single VPS is never gonna be very fast globally no matter what you pay your hosting. In fact our free plan on netlify would make this a whole lot faster...


It's still pretty fast all over the world, because that total time is all you need. For most sites, those three seconds are just the start, followed by several more seconds of downloading CSS, JavaScript, images, analytics, widgets, and whatnot.


Off topic, but is that service showing crazy slow numbers for "USA, Atlanta" for anyone else?


It is. Linode's Atlanta data center has been getting DDoS'd on and off since Sunday. This site isn't hosted on Linode, but could there perhaps be congestion in Atlanta from that attack causing general slowness?


Same thing here. Under half a second from every other location, 2.5 seconds from Atlanta.


yup Atlanta is slow for me too


oh my god think about all those users from Japan they are bouncing!

stop the presses for the entire company to have a meeting on how to shave off 300 milliseconds for the poor residents of Japan!


It's fast in China, which is saying something.


Still wickedly fast in Australia, that's about as far away as you can get.


Or a content delivery network containing only SSDs but this guy couldn't afford that.


OP has certainly nailed Hacker News psychology. My old coworker called the technique "inferiority porn." Titles like "the secretly terrible developer" or the closing statement of this particular article: "Go away from me, I am too far beyond your ability to comprehend."

As many people have pointed out there are faster methods of static hosting through a CDN, and many of the techniques of this site are inapplicable for larger sites. But A+ on the marketing.


I hate this trend.

IMHO there is mainly one way to get attention - it's to get (great and instant) emotion from the user. You can give good emotions or bad emotions.

Personally, I think that creating a good emotion takes much more effort than creating a bad one. The website/product can say how great it is, but it will not 'click' as instantly as someone telling me I am a dumb baby and I suck [0][1], or that I am a non-superior mere mortal baboon [2], for which most people get instant rage and start flame wars in whatever comment section, as there "is no such thing as bad PR".

The most popular writers/bloggers in my country have created these arrogant dipshit characters (I tend to believe that they are "normal" people, but they clearly know what sells) who always say that they are richer, smarter and better than you. They create stories about a "cheap restaurant breakfast for 60€" and so on, though the most interesting thing is that people buy their shit and then rage on whatever websites about how the writer dared to call them a dumbass homeless bum.

[0] https://www.youtube.com/watch?v=0nbkaYsR94c

[1] https://news.ycombinator.com/item?id=12448545

[2] https://varvy.com/pagespeed/wicked-fast.html


I probably sound like I'm tooting my own horn but it definitely felt really contrived to me. I upvoted it because of the comments containing better tips or caveats provided for the good ones.


I'd say that the general idea of watching out for external and/or bloated resources is absolutely applicable for larger sites. Media sites are particularly egregious: not only does the js take the lion's share of what's transferred, rendering of the content I'm interested in typically blocks until everything is downloaded and processed.


I've been making my personal pages fast this way since last century. Probably a huge number of people did the same. It's pretty obvious.

When you need fancy graphics (a static photo album), things become less easy: you e.g. may want to preload prev / next images in your album to make navigation feel fast.

Things become really tricky when you want interactivity, and in many cases users just expect interactivity from a certain page. But client-side JS is a whole other kettle of fish.

Where things become ugly is when you want to extract some money from the page's popularity. You need to add trackers for statistics and ad networks' code to display the ads, and complicate the layout to make room for the ads, placing them somehow unobtrusively but prominently. This is going to be resource-hungry at best, slow at worst.

(Corollary from the above: subscription is more battery-friendly than an ad-infested freebie.)


A good sequel to http://motherfuckingwebsite.com/ , which is probably too understyled for most people.



For some reason, I'm a fan of this one: http://codepen.io/dredmorbius/full/KpMqqB/



I have two copies of a book. One is a hardcover, one is a paperback. The paperback is a few centimetres wider than the hardcover, and that makes that copy annoying to read.

There's a certain length that a line can be without becoming confusing or annoying to read. The reader mode in most browsers understands this, but for some weird reason reader mode isn't available for http://motherfuckingwebsite.com/, at least in Firefox.



Ah, the DJB[0] look. Good times, good times.

[0] Example: https://cr.yp.to/highspeed.html


Indeed. Back to the roots!


it's the perfect style for a wall of text tho :D


Took me almost 30 seconds to load, maybe because the server is being hammered by HN traffic right now? Also like others here were saying, using a CDN would definitely help with the initial latency.


I think this is the ironic lesson: for many sites, optimizing for consistent performance (i.e. CDN, geographic caching) is a more important objective than prematurely optimizing for a subset of users.

Example:

Business A - average render time 0.3s, but under load 5-10s

Business B - average render time 0.8s, but under load 1-2s.

Subjectively, around ~10s response time is the point I would close the tab and look for another business if I was trying to do shopping online, anything involving a credit card etc.


Yep, I thought this was a joke at first because it took 10+ seconds to connect.


Weird, it took me just 194ms.


I guess it depends on traffic and where you are. It took me about 4 seconds on first load.


Looks like this whole thing is a scheme to promote his web hosting affiliate link: http://www.knownhost.com/affiliate/idevaffiliate.php?id=1136...

The fastest and most reliable hosting, by far, based on my own experience, is Amazon's EC2 cloud and S3 bucket services.


I picture him coding this in vi with a maniacal evil laugh, thinking of all the money his scheme will make


Nothing wrong with that; smarts should be rewarded.


Have you tried Joyent or AnyCity?


Is this image inlining thing something new? Am I reading it correctly that the images are encoded in base64 and delivered as html? Surely this is a bad idea... no?


> Is this image inlining thing something new?

No, it's been around since forever. Just not used terribly often.

> Am I reading it correctly that the images are encoded in base64 and delivered as html? Surely this is a bad idea... no?

It depends. Making a new request to fetch the image always has overhead. Whether that overhead is bigger or smaller than the overhead of base64-encoding the image depends on:

• file size (naturally)

• file compressibility: The difference isn't as pronounced after gzipping everything, especially if the source data is somewhat compressible

• protocol: http2 allows a correctly configured server to push attached data with the original request, so no second request is needed. Even without server push, http2's multiplexing will reduce the overhead drastically compared to plain HTTP1.1 or the worst case, HTTPS1.1 to a different domain. The latter requires a full TLS handshake, and that's what, >30kb data exchanged if you have more than one CA certificate in the chain? That's a lot of image data.


You forgot the most important factor: Whether you're reusing that image on a different page. Embedding images in the HTML is basically saving an HTTP request at the expense of not being able to cache the image separately from the HTML.


You can, however, embed images in CSS, which gives you both reduced requests and caching.


Unless you inline the CSS ;-)


This seems like it should work, but have you ever tried it? Or, can you point me to some results of a test to show that it indeed caches the image embedded in the CSS?


If it's inline, then it's cached with whatever it's inlined into.


The problem is that now it's going to be sent with every request. So it'll make the first page faster for the initial request, but slower in the long run.


I'm not sure how you figure that? The CSS file containing inline images would get cached, so nothing would be sent after the initial request.


Exactly! On a typical site, a lot of images are reused across pages, if externalised then it's already cached in your browser.


Completely offtopic, but how do you type •? I like it.


It's system dependent.

On windows you can do alt+numpad 2022

On whatever is handling input for this XFCE system, control+shift+U 2022+Enter types it.


And since I'm lazy, I put it on AltGr+, with xmodmap.


• Enable the compose key on Linux — I set the "Menu" key to be compose — then the sequence is Compose . =

Characters like →, —, €, £, ©, ™, µ, ①, ②, °, “ and ”, … and ‽ are easily available, as well as most European-ish letter accents: àáâäąȧåảāãæ.

(I live in Denmark, but rarely type Danish. The compose key is more than adequate for typing København, Østerbro and the Æ in my street's name.)

https://en.wikipedia.org/wiki/Compose_key


• is Option-8 on a Mac.


I personally type it with a unicode keyboard input (ex: Japanese in my case) 

ex: "・"


Depends. If you consider that in each request a big chunk of time is spent on opening the connection, and that you can only start opening the connection to download the linked picture after you have received the response for the first hit, then maybe it's not a bad idea. It's one round trip's worth of time that you shave off the total loading time.

However, if the image is very large, it will make the initial request large as well. I would only use this for images that are small and above the fold.


No, image inlining is quite old, and base64 is not a very efficient encoding, although gzip might make up for the ~1/3 increase in size.

You'd think HTTP/2 server push would stand in for this, but I can imagine inlining is still a bit faster.


The biggest problem with inlining images (imo bigger than the base64 size increase) is that when you change something (like a word in the text of the page) you force your users into a full reload of the page (images included). Most of these performance tips assume things won't change.

We know life doesn't work that way.


I think it depends on how much of an image we're talking about.

If it's small, the overhead from base64'ing it (if the page is gzipped) is lower than the overhead of opening a new HTTP connection just to retrieve that one image.



There is an increase in size to base64 encode, in addition to what I assume is the lost capability to cache images, as well as load them intelligently.


Other reasons to embed images using base64 are to have pages work standalone, to reduce complexity (no need to keep pages and associated resources in sync) and increase locality (things are defined where they're used).

These probably aren't particularly important for most sites, but it's something I do on my personal site ( chriswarbo.net ) since I care more about ease of maintenance than load times.


base64 encoding increases the size of the file.

For that image I would prefer to use inline SVG...


Indeed, this particular image is 3.5k of PNG, I'm certain it could easily be well under 1k as SVG.


not to mention retina-smoove


Base64-encoding induces a factor 1.333 size increase of the byte stream, so it's likely not worth it if the site is served on HTTP2. To get exact figures, one would of course have to calculate the size of the additional TCP packet(s) and HTTP headers.
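
A quick way to see that factor for yourself in Python (the file name is hypothetical):

    import base64

    raw = open("logo.png", "rb").read()
    encoded = base64.b64encode(raw).decode("ascii")
    img_tag = '<img src="data:image/png;base64,%s" alt="logo">' % encoded

    # base64 emits 4 output bytes for every 3 input bytes (plus padding)
    print(len(raw), len(encoded), len(encoded) / len(raw))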


I think it depends on the image size and use case. For small images where the round trip time of an extra request would make a bigger impact than the file size, inlining them might make sense. Especially on mobile where latency tends to be higher.


Keep in mind that you can inline images as SVG as well, which often results in smaller images, included with your HTML or CSS requests.

Not all browsers support SVG, and not all support all properties, but those that do give some pretty good results.


Pretty much every standard browser in use supports SVG. I've been using it for years.


You can do it with fonts too.


Sounds too. <audio src="data:audio/mp3;base64,..." /> Probably videos as well, but I haven't tried it.


Bad why?


because base64 encoding expands the size of a binary, presumably


Ehh, I just got 10.91s load time in Chrome 53 from Colorado, USA.

Image of Chrome Dev Tools: https://reportcards.scdn3.secure.raxcdn.com/assets/uploads/f...

As an aside, does HTTP/2 provide any benefit for a single HTML file with no external assets?


http/2 implies ALPN (but you can also do ALPN with http 1.x), which many browsers use as a flag to enable TLS False Start, saving one round trip.


Nope.

If you're done in one HTTP round trip, your HPACK state has to push brand-new headers, you benefit nothing from server-side push (if it's even enabled), and there's nothing to pipeline, so you don't benefit from multiplexing and head-of-line blocking is a non-issue.


HTTP/2 header compression is one benefit that helps even if you have just 1 request.


I want to benchmark this, because intuitively I disagree.

The HPACK spec is a pretty easy read [1]. There is a static, hardcoded table that contains most of the HTTP header names, and even some common predefined KV pairs. You save some bytes on the wire if your header's name or value is one of these entries; the header name essentially always will be in the static table.

But for names and values that aren't in the static table, you have to put them into the dynamic table and encode them using either the integer packing or the huffman code. The client has to decompress these, of course.

On future requests, you have some leftover state in your dynamic table so future 'duplicate' headers are packed, and take up very little space. But for the first (ever) HTTP request-response pair, you have to transmit ALL the headers in "full". So the true benefits of the dynamic table don't kick in.

[1] https://http2.github.io/http2-spec/compression.html#header.e...
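
A rough way to check that intuition, using the third-party hpack package (the header values below are made up):

    from hpack import Encoder

    headers = [
        (":method", "GET"),
        (":path", "/pagespeed/wicked-fast.html"),
        ("user-agent", "example-browser/1.0"),
        ("x-custom", "some-long-value-that-is-not-in-the-static-table"),
    ]

    enc = Encoder()
    first = enc.encode(headers)   # dynamic table empty: literals go on the wire
    second = enc.encode(headers)  # repeats become short dynamic-table indices
    print(len(first), len(second))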


nope, it doesn't


> my hard drives are SSD

Of course that's entirely irrelevant, as the page completely fits into the RAM of the server (or even the CPU's cache, for that matter).


Cool... Unfortunately in practice it's easy to find a list of best practices, much harder to implement in a scalable and durable manner on any project of sufficient size, especially if working with a legacy codebase.


> "My images are inlined into the HTML using the base64 image tool, so there is no need for the browser to go looking for some image linked to as an external file."

This does not work in most cases when you use big images. From StackOverflow answer [1]: "It's only useful for very tiny images. Base64 encoded files are larger than the original. The advantage lies in not having to open another connection and make a HTTP request to the server for the image. This benefit is lost very quickly so there's only an advantage for large numbers of very tiny individual images. "

[1] - http://stackoverflow.com/questions/11736159/advantages-and-d...


  > This benefit is lost very quickly so there's only an advantage
  > for large numbers of very tiny individual images.
In which case maybe it would be better to use sprites?


I don't know, I hate dealing with sprites, it's just not worth it in my opinion, the time you spend on every edit...


If you're using Photoshop you can create a PSD that sources other PSDs and, if I remember right, create an action that generates the exported image, so you could automate things quite a bit if not entirely.


Furthermore the image used is one that compresses uncommonly well with PNG (small palette, large chunks of solid color). I think the vast majority of 350x400 images would be at least 10x larger unless they're deliberately composed in a similar style or are JPEGs with the quality turned way down.

I tried to create an SVG version to see how an SVGZ would compare, but evidently I'm too crap at Inkscape and kept screwing it up.


> Base64 encoded files are larger than the original.

This is one of the reasons to discourage using large attachments on emails (which then stick around forever).


Dlang forum (with dynamic content) is insanely fast! https://forum.dlang.org/group/general


Yeah, this is the site that made me realize just how awful so many other sites are.

It's very fast indeed, and has no useless graphics or javascript effects, but is 100% functional and looks great.


I was thinking of exactly the same website.

That's a very fast page, that actually does something.


A VPS is shared hosting to me, it's just an instance on a shared system. Shared hosting used to mean a folder on a shared web server but I consider sharing resources in a hypervisor equally shared. ;)

If they truly wanted speed through control of resources they would have used bare metal.

But yeah, the website is easy to optimize when it's simple, the hard part, often outside of your control, is DNS and actual connection handling. Many have already mentioned CDN so there's that.

But you also don't know what kind of firewalls are being used, or switches, or whatever else may impact your site. Why not just do what others have suggested and put it all in the cloud so that Amazon can worry about balancing your load.


Pretty good at 97/100 on Google's PageSpeed Insights - https://developers.google.com/speed/pagespeed/insights/?url=...


I don't really think the PageSpeed score accurately reflects page loading speed (maybe initial page loading speed). It seems not to care about lazily loaded resources, as a JS-heavy webapp I made (around 200KB) actually scores higher than this one https://developers.google.com/speed/pagespeed/insights/?url=.... Funnily enough, the screenshot on the test only shows the loading spinner.


Interesting exercise, in an age where web pages are now bigger than most business applications I used to use in the early days of DOS/Windows.

Note: Just checked, and even a simple Medium blog post page won't fit on one of those old 3.5" floppy disks.

EDIT: To stay on topic - the OP's page loaded instantly for me here in outback Australia...


"Look amazing on any device" ... The right edge of your text is coiled on my phone (not so amazing).


Same here.


Ok, I'll bite, as this is near and dear to my heart. Instead of showing me a fast webpage with minimal content, tell me how to make my tons of CSS and JS load fast! That's a real problem. I deliver web apps, and interactivity is a must.

IMO, the real problem with the web is the horrendous design choices and delivery of very popular news and daily-reading sites (ahem, CNN), where subsequent loads of ads and videos start shifting the page up and down even after you have started reading something. Let's address that problem first!


> tell me how to make my tons of css and js load fast

I went to the doctor and he told me to lose weight. What a fatphobe!

He should have told me how to eat everything I desire without any bad side effects!

/s



For speed optimization it's really important to always fine-tune for your particular use case and apply some common sense. For instance, inlining everything as suggested here is faster only if you expect visitors to open just that one page and bounce away, so browser caching is not helpful. Consequently, it's a very good tip for e.g. landing pages, but it makes no sense at all to serve pages that way to your logged-in users.


Few more possible optimizations:

- Brotli instead of Gzip. Likely saves around 10% size.

- Minify everything, including HTML. Could save around 3% size on that page.
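
A rough sketch of the comparison, using Python's gzip module and the third-party brotli bindings (index.html is a hypothetical file):

    import gzip
    import brotli  # third-party package

    html = open("index.html", "rb").read()
    print("gzip  :", len(gzip.compress(html, compresslevel=9)))
    print("brotli:", len(brotli.compress(html, quality=11)))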


Another optimization:

- zopfli for PNG via AdvanceCOMP


Submit to 10k Apart: https://a-k-apart.com/


How much does HTTP/2 mitigate the need for such techniques, if at all?


HTTP/2 with server push will eliminate the inlining hacks, and automatically compress content.

But the other points remain: No Javascript is still the fastest Javascript framework, and while you can do lots of crazy hacks with CSS, maybe you shouldn't.


Inline all your CSS and you force all your users into a full reload whenever you need to change or add something.

This can be tricky if your page grows in complexity/size and you need to change something.

Please, when it's more appropriate, don't inline your CSS and take advantage of the cache instead.


Well, the entire page is 7 KB.


I think it feels fast because it loads at once, but I'm actually not getting very impressive results programmatically if you measure how long the entire TCP transaction takes (which is what I consider page loading):

    # Both DNS records are cached before request
    >>> print requests.get('https://varvy.com/pagespeed/wicked-fast.html').elapsed.microseconds
    226515
    >>> print requests.get('http://www.google.com').elapsed.microseconds
    92027
Even google.com (92 ms) is about 2.5x as fast as the OP (226 ms) at establishing the connection, reading all of the data, and closing.


But what's 130ms?

Okay, okay, it "matters". But it's nothing compared with the 3s to load all the JS and CSS and the subsequent sluggishness as 20 analytics scripts are loaded and processed.


Where the time goes is the TLS handshake, which isn't there when loading cleartext Google.


Honestly? I am surprised to see this page voted so high on the front page. If you really wanted a fast "static" page, you would put it on a CDN. All you wanted to do was put a marketing link in your last paragraph.


Your page can be very fast, use minimal resources and be hosted in a good place. But you always gotta watch out for proximity to the user, time to first byte and DNS resolution time. Perceived speed is highly affected by those.

It took 2 seconds to load the page on a fresh ec2 box:

    time_namelookup:  0.061
       time_connect:  0.100
    time_appconnect:  0.223
   time_pretransfer:  0.223
      time_redirect:  0.000
 time_starttransfer:  1.935
                    ----------
         time_total:  2.066


Yes you are. You're so fast I don't even see you refresh.


Probably wouldn't make much of a difference, but there is still room for performance improvement by minifying the HTML page.
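
A crude illustration of the idea in Python (real minifiers are smarter about <pre> blocks, inline scripts, and so on; the file name is hypothetical):

    import re

    html = open("index.html").read()
    minified = re.sub(r">\s+<", "><", html)          # drop whitespace between tags
    minified = re.sub(r"[ \t]{2,}", " ", minified)   # collapse runs of spaces
    print(len(html), "->", len(minified))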


"No Javascript"

Amen.


It reminds me of the "no synthesizers" label used in the '80s by some musicians :-)


I know, right? I really cannot wait for the super-cool bloatware fashion-craze JS frameworks to die.

Here's an idea: WebAssembly, but use existing Opcodes from the JVM.


JS bloat is not gonna go away


You can do much better! What about html-muncher for CSS class minification?

Those PNGs are not fully optimized, and an SVG would probably be even smaller; even if it isn't in the case of the orange one, it could have been compressed much better.

Making use of data: URLs might look good on the first visit, but honestly, with HTTP/2 just push in the resources and externalize them.

Because seriously, cache for 300 seconds? How about offline support anyway? It's 2016.

Furthermore, where's my beloved Brotli support?

By the way, what about WebP support? OK, TBH, if the PNG were properly optimized WebP would actually not beat the file size, but hey: "It isn't"

So even though it's only this tiny static page, there's still so much wrong with it. Please improve! By the way, what about QUIC?


Easy to make a website fast when it has nothing on it. In the real world a site isn't this light. It has images, analytic scripts, stylesheets, fonts, Javascript (jQuery at the least). Using a combination of a CDN and realistic caching, I can make a fast website as well.


Real world sites don't need all that stuff.


Many real-world sites have strategic/marketing partners who ask to add analytics scripts so that they can capture metrics for their partnerships with you/your real-world site... And if your site has been optimized, but their servers (which serve up these 3rd-party/analytics scripts) aren't as optimized, guess where the slowdown comes from? And senior leaders don't always require these partners to conform to internal performance standards. So yes, unfortunately real-world sites DO have - or at least are forced to have - stuff like that.


Simple text, a few links to "tips", a little bit of base64 images, without any deeper knowledge. For example, there was a website which showed the impact of base64 images just a few weeks ago (if I remember correctly).

But it has a referral link.

That's probably the point of this page.


This is odd. Clearly anyone can make a lightning-fast page by making a single page, since then you can have CSS inlined versus needing to link to CSS stylesheets across multiple pages, and of course not having JavaScript would make it faster, but that's a requirement for almost all typical sites these days, and loading images that way is nice for hackers but not for real people, i.e. common people and clients using CMSes. Also, paying $25-35 for hosting is not very bright, since you can get a $5 Digital Ocean SSD server, not shared, that would load this particular page just as fast if not faster.


His affiliate link for VPS service has its cheapest option priced at $25 a month. You can get a nice little VPS for static hosting on SSD from Digital Ocean for $5 a month, or $6 a month with backups.


You can go even cheaper.

Time4VPS offers you 2 cores (compared to one on DO), 80 GB SSD (compared to 20 on DO), 2 TB bandwidth (compared to one on DO), and 2 GB of RAM (compared to 512 MB on DO) for 3 euros (3.36 dollars). 1 additional euro for daily and weekly backups.

Started renting one just two days ago, so I can't really guarantee that it's reliable, but it was recommended to me by a friend who's renting it for over 100 days now without any downtime.

[1] https://www.time4vps.eu/pricing/ (or, if it sounds good and you want to use my referral link: https://billing.time4vps.eu/?affid=992)


CloudAtCost works perfectly for me. It's a one-time fee for VPS' and they're always having sales.

Use my coupon code to get 50% off any CloudPRO hosting: e6a8yWuhA4

https://cloudatcost.com


Actually for VPS nodes we use SSD cached 6 x 1200 GB 10k RPM SAS RAID6 storage.


I can't wait for half of this advice to become obsolete with HTTP2


What arrogance, the page is done with me? I'm not done with the page yet. I can get the same page much faster by putting the PNG in an inline SVG, stripping the source of unnecessary whitespace and returns, serving Brotli (or SDCH-compressed pages) dynamically to Firefox, Chrome and Opera... or even just doing the decompression inline with JavaScript. Might save another 20% https://github.com/cscott/compressjs


... With what are you going to compress the compression library?


Brotli or Gzip on the server. But you are right, in my enthusiasm I overlooked those bits!


I can see in the source code that you're expressing all dimensions in terms of ems and %s. A technology such as Bootstrap will always be the way to go; however, could you tell us a little bit more about how you did this? How did you ensure that it looks good not only on your screen but on any screen?

I know people are saying it has some errors on certain mobile devices, but that's still a pretty good job of manipulating CSS properties.


A technology such as Bootstrap will always be the way to go

What? Why?


Bootstrap was just an example. It could be Bootstrap, Materialize, W3.CSS, etc.

The point is that it's much more convenient to reuse code from a framework, because it's better to sacrifice file size for fast iteration and functionality.


There is always a tipping point. We tend to use Bootstrap a lot at work for the reasons you mention, but you can pretty quickly get to the point where your CSS is complex enough that you would have been better off doing it from scratch. All frameworks are like that -- you trade off initial convenience for design constraints.

When I'm doing my own projects I always write my CSS by hand because it ends up less complex in the end. I don't need to see pretty things up front like my corporate customers do.


The whole hosting issue seems to open a can of worms, at least if this comment stream is any indication. I think it probably would have been better if they had said something more along the lines of, 'Choose (and likely expect to pay for) some sort of superior hosting solution that will prioritize allocating resources to your site(s)'.

The general point could be made without leaving so much room for everyone to argue over specifics.



Lol, thanks for the Three Rivers throwback! (Pittsburgh native here)


How dare HN spread this kind of speed-shaming hate slander! ;)

We need to see the bloaty-positive alternative, not all websites have to be Google models.


A lot of this stuff is outdated now: https://news.ycombinator.com/item?id=12448539

For instance, delivering one giant JS/CSS file is now considered bad practice because it is harder to cache; since HTTP/2 removes the overhead of multiple requests, there is no downside to serving many files.


This took almost 10 seconds to load for me...


Around the same for me, running fibre in New Zealand. Long delay before content even began loading - as mentioned in other comments, would likely have been a non-issue if a decent CDN was used.


I'm curious, how much benefit is there to having a fibre connection in NZ except for NZ websites? What's the max speed?


The best "Shift+Reload" refresh I've managed to get out of this page from where I'm sitting, in Firefox 48.0.x, according to its Network Console, is around 360 ms. It doesn't beat this HN discussion page by a whole lot, and this has actual content, which is dynamic.


Interestingly, Google has been going after this with AMP (accelerated mobile pages): https://www.ampproject.org/

It enforces a set of rules to accelerate web pages. These rules can be used to validate your pages.


Well, many of these points make sense.

If I'm doing a single page application, surely I'll have infrastructure in place already to compile, minify and do whatever I need to. So I could just serve the monolithic page and be done with it. Much like desktop applications used to do.


A couple of problems rendering on iPhone 6s

http://i.imgur.com/EpoC9lG.jpg

http://i.imgur.com/qHS5v2H.jpg


If it's really all static, you can bundle it into a static Mirage unikernel image with https://github.com/mirage/mirage-seal


I've always wanted to play with putting /var/www into a ramdisk for PHP/HTML stuff. It would load much faster, since it's all just text at the end of the day, and it would completely cut out the SSD/HDD bottleneck.


If you own the hardware, I imagine much of your PHP/HTML stuff will be served from the filesystem cache most of the time, so you probably wouldn't see much benefit...


The idea would be to do it in an ultra-minimalist setting on a VPS (something under 256 MB of RAM).


I don't think it would do much there either: If it fits in your left-over RAM, then it's probably in the disk cache. If it doesn't, then you can't create a RAM disk large enough.

It might help with latency for the long tail of data that isn't used very often and thus may be replaced in the cache by other data, but on the other hand the OS probably had a reason to replace it, and forcing it to stay in RAM might slow everything else down.
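
That intuition is easy to check. A rough sketch (Linux-only, needs root to drop the page cache; the file path is just an example):

    # time a cold read (disk) vs. a warm read (OS page cache) of the same file
    import subprocess
    import time

    PATH = "/var/www/index.html"  # hypothetical file

    def timed_read(path):
        start = time.perf_counter()
        with open(path, "rb") as f:
            data = f.read()
        return time.perf_counter() - start, len(data)

    # flush dirty pages and drop the page cache so the first read hits the disk
    subprocess.run(["sync"], check=True)
    subprocess.run(["sh", "-c", "echo 3 > /proc/sys/vm/drop_caches"], check=True)

    cold, size = timed_read(PATH)
    warm, _ = timed_read(PATH)
    print(f"{size} bytes: cold {cold * 1000:.2f} ms, warm {warm * 1000:.2f} ms")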


You'll likely have better luck configuring php-fpm and OPcache properly, as they are going to do basically what you're proposing anyway, and OPcache is even nicer as it avoids reparsing the code.


Yeah, I've already set those up, but I don't think OPcache handles random text/HTML/JS files, only PHP files.

I could be wrong.


2.43s TTFB for me - nice and fast once that happened, but that TTFB is a killer.


Maybe a lot of people are hitting it, but this webpage loaded slowly for me.


I'm curious - would this page see any speed improvement with HTTP/2? I ask because the new protocol seems optimized for the exact opposite of this - many asynchronous fetches.


It's already using HTTP/2.


I was a fast webpage.


Did you play tag as a kid?


Loaded in about 15-20 seconds for me. Even if you think medium.com is slow, they can handle the sudden extra load that your site couldn't.


By the site's own admission, this page's visible content is not prioritized. I would have knelt before it if not for that flaw!!


It took me several seconds to load (compared to about 1-1.5 for HN)... this page needs better hosting for Asian users.


Ego aside, I feel this kind of site (and the associated commentary on the suggested tactics) is helpful.


Does having everything in the HTML, with less external CSS, improve speed? By how much? Is it worthwhile?


This simple webpage was barely faster than hacker news' list view...


Well, not including any javascript was one massive shortcut :)


In an ad-free Internet, many more pages would be this fast.

Alas.


Took about 15 seconds to load for me...


Really cool!

Almost instant even here in New Zealand!


What is that `.unit{display:inline-block;*display:inline;*zoom:1}` (the stars..)?


Now we need a framework that targets that standard, as a very fast dumb client.


Some pages have big egos.


It's back to 1985.


The bit about being hosted on SSDs is silly. I could host that site in unused registers of my CPU.


Oh, I think he just added that one so he could sneak in his affiliate link.


What? No you couldn't. The site's HTML is 11,092 bytes, so you'd need 1,387 unused 64-bit registers to host it.


I was thinking exactly this: keep it loaded in memory for the duration of the server's lifetime. I'm not too familiar with HTTP/2, but could you cache the compressed response and reuse it with minor modifications to the headers when needed, to speed up the communication?


Preformatted payload can be a big win for page speed, especially if your payload cannot vary based on request headers, or has only a few variants.

A special case of preformatted response used to be baked into Microsoft IIS. If you connected to an address that could only redirect to another address, IIS wouldn't even wait for the request; it would just send the 302 response and hang up. This, it turns out, was not really compatible with Mozilla at the time, and may have violated some RFCs, but I kind of liked it as a hack.
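
As a rough illustration of the "preformatted payload" idea, here's a sketch in Python that gzips the page once at startup and serves the same cached bytes to everyone; the file name and port are assumptions, and it skips the Accept-Encoding check a real server would need:

    # compress the page once, then reuse the bytes for every response
    import gzip
    from http.server import BaseHTTPRequestHandler, HTTPServer

    with open("index.html", "rb") as f:
        PAGE_GZ = gzip.compress(f.read(), compresslevel=9)

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # real code should confirm the client's Accept-Encoding header
            # actually includes "gzip" before doing this
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Encoding", "gzip")
            self.send_header("Content-Length", str(len(PAGE_GZ)))
            self.end_headers()
            self.wfile.write(PAGE_GZ)

    HTTPServer(("", 8080), Handler).serve_forever()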


Not familiar either, but that's an amazing idea


> I make no external calls, everything needed to load this page is contained in the HTML.

Won't that make your webpage load slower?


It will slow down that one file, but presumably it's carrying assets that would add up to the same total size if they were split out. Loading 100k from one source is faster than loading the same aggregate size over multiple connections.
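
For what it's worth, that kind of inlining is easy to script. A crude sketch in Python (the file names are made up, and a real tool would parse the HTML instead of leaning on regexes):

    # fold an external stylesheet and PNG images into a single HTML file
    import base64
    import pathlib
    import re

    html = pathlib.Path("index.html").read_text()

    # inline the stylesheet
    css = pathlib.Path("style.css").read_text()
    html = html.replace('<link rel="stylesheet" href="style.css">',
                        "<style>" + css + "</style>")

    # turn <img src="foo.png" ...> references into base64 data URIs
    def inline_img(match):
        data = base64.b64encode(pathlib.Path(match.group(1)).read_bytes())
        return '<img src="data:image/png;base64,' + data.decode() + '"'

    html = re.sub(r'<img src="([^"]+\.png)"', inline_img, html)
    pathlib.Path("index.inlined.html").write_text(html)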


Except if you only need the first 50k to render the page, and can wait for the 50k of JavaScript to come later, your page is going to display a lot faster. Standard technique.

> Loading 100k from one source is faster than the same aggregate size from multiple connections.

Usually doing things in parallel is faster than doing them serially. That's why HTTP/2 can load slower than HTTP/1 - you are sucking everything through a single TCP pipe, even though it is multiplexed within.
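
A toy way to see the parallel-vs-serial effect from Python, ignoring HTTP/2 multiplexing entirely (the URLs are placeholders):

    # compare fetching a handful of resources serially vs. in parallel
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URLS = ["https://example.com/"] * 5  # placeholder URLs

    def fetch(url):
        with urllib.request.urlopen(url, timeout=10) as r:
            return len(r.read())

    start = time.perf_counter()
    serial = [fetch(u) for u in URLS]
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=len(URLS)) as pool:
        parallel = list(pool.map(fetch, URLS))
    t_parallel = time.perf_counter() - start

    print(f"serial: {t_serial:.2f}s, parallel: {t_parallel:.2f}s")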


Inline CSS... *shudder*



