Love it, feels like browsing the web about 20 years ago. (Or a few bad websites now…)
So this is an example of Cumulative Layout Shift (CLS)[0]. It is a metric (coined and named by Google) which indicates how much “movement” there is during the initial render of a page, caused by reflowing the layout. Ideally you have very little or none of it for better UX. I believe it can have an impact on organic search ranking and potentially ad rank.
The most common cause is images included without width and height attributes, so the page has to reflow once each image has downloaded and its dimensions become known.
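A minimal sketch of the fix, assuming a hypothetical hero.jpg with an intrinsic size of 1200x600 (the file name and dimensions are made up):

```html
<!-- Without dimensions: the box is 0px tall until hero.jpg downloads,
     then everything below it jumps down. -->
<img src="hero.jpg" alt="Hero">

<!-- With dimensions: the browser can reserve a correctly proportioned box
     before the bytes arrive, so later content never shifts. The CSS keeps
     the image responsive while preserving its aspect ratio. -->
<img src="hero.jpg" alt="Hero" width="1200" height="600"
     style="max-width: 100%; height: auto;">
```

Modern browsers derive an aspect ratio from the width/height attributes, so this works even when the rendered size is set in CSS.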
Other things that can cause it are multiple CSS files overriding each other and loading slowly, or JavaScript affecting the layout after it has loaded.
You can use Google Lighthouse [1] to measure this (and other things) on a site.
feels like browsing the web about 20 years ago. (Or a few bad websites now…)
A few? You and I must use very different web sites.
Both of the too-big-to-fail bank web sites I went to today have this problem.
My health insurance web site does this.
I just checked, and unless you're on a really fast connection and a newish computer, Starbucks, Instructables, Pinion Coffee, my local honey farm, Target, PetSmart, Chewy, Amazon, Twitter, and PetCo all do this.
Check out https://nypost.com/ - shifts and redraws constantly, freezes completely (needing the page killed) on a regular basis, consumes high CPU while completely idle, makes an iPhone hot if left open on an NYPost page (I swear their ad network must be mining crypto or something), and routine usage crashes iOS browsers. No additional css needed!
Not only the NYPost, I have an old gen1 or gen2 iPad and it can't render many modern pages. The browser crashes when it tries to load the ads on many pages. Still perfect for reading HN though.
Definitely - it’s when their advertising is loading that all the problems manifest, and it’s the advertising that hammers the cpu once it is loaded, on a desktop where I can filter everything out it’s just another site, on iOS where there is no filtering it causes nothing but problems. Do I want to read about local news and probably lose all my tabs when the browser crashes, or avoid the site and keep my tabs another day? Their app has the same issue, I’m sure it’s just a thin wrapper around the iOS web widget (does not even seem to cache or pre-load content), it freezes solid for minutes while loading advertising and pounds the cpu as long as it’s in the foreground.
This definitely hits a nerve. Google does that nowadays and I cannot count how many times I have clicked by mistake on these suggested links that show up after all results have already loaded.
It's hilarious to me how much Google is hammering on content layout shift as a web vital for ranking websites when they're one of the most annoying offenders. I can't tell you how often I have accidentally clicked that "People also search for" box that sometimes loads in beneath the first result, when I am actually trying to click the second result.
It's infuriating, and makes me wonder if anyone at Google actually uses their own product.
I feel like this is a symptom of Google etc. not having enough real competition. If Google and Twitter were scared for their survival, they wouldn't tolerate slack, janky sites, or disrespect their users' free speech as much as Twitter does.
It happens when you click on a result, then hit the back button and want to click the next link. Then, just before you click, the suggested links appear.
Maybe they measured clicks and thought this must be a really useful feature.
It's not an accident at all. I'm sure some sites are moving content around to make it likely you'll click on an ad. Other forms of click fraud are rife:
and when you try to close them you will frequently click on the ad instead of the tiny close button. Back in the day the Scientologists would have reported him for click fraud but today they are a shadow of the terror they once were.
Reminds me of the time someone put a string concatenation function behind a server as a dependency just to see how many unskilled programmers would connect to it
Google does this by popping up a bunch of "related links" as soon as you're trying to click on the second link... I honestly thought all those well-paid engineers and UX designers who work there could make something better than that
I wonder if that’s the same team responsible for making embedded youtube videos start playing on mobile safari if your finger touches it while you’re just trying to scroll.
> I honestly thought all those well-paid engineers and UX designers who work there could make something better than that
Don't forget that you are not Google's customers. Their advertisers are their customers. You're just a product they sell. So the UX does exactly what it is supposed to do: get eyeballs on advertisements.
Even if you click back, at Google's scale inadvertent clicks are big money. Besides you seeing something advertised/promoted, they got some free tracking data to sell. As long as whatever they linked to managed to load some tracking pixels, or even full tracking scripts, before you hit the back button, that's more tracking data hoovered up. Even just the request for the resource is data for tracking purposes.
This is yet another reason why I’m glad to have switched away from using Google search. Not even for privacy/advertising/tracking considerations, just general quality of life things like being able to click the whole search result bounding box (not just the top link), never having to suffer with related links appearing on back navigation, no AMP pages, etc.
I am sure they can fix it, but the fix would bring revenue down, so some heads would get chopped. Who wants to put on their CV: "improved Google search results user experience by increasing ad revenue by -3%"?
A lot of this happens due to a basic lack of understanding of how documents flow at best, pure negligence at worst. Ads and lazy loaded content are being dropped into elements without min heights causing everything to just suddenly shift down. The other thing some of these sites do is take raw lazy loaded json and render a whole view (who would have thought, an on-demand slideshow is jarring if you didn’t account for how it’s gonna just be plopped onto the page). They are also dynamically formatting dates, author names, titles, plopping in client side a/b testing, etc.
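A minimal sketch of the min-height fix described above, with a made-up class name and a typical 300x250 ad slot size as the assumed reservation:

```html
<style>
  /* Reserve the slot's typical height up front, so the page doesn't
     shift when the ad network finally injects its iframe. */
  .ad-slot {
    min-height: 250px;
  }
</style>

<p>Article text above the ad slot…</p>
<div class="ad-slot"><!-- ad injected here later by script --></div>
<p>Article text below keeps its position whether or not the ad loads.</p>
```

The same idea applies to any lazy-loaded widget: reserve the space first, then fill it.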
Forgive me for saying this but I find all these complaints shallow, or a sign of youth.
Hear me out, though. The reasons are real. The modern web sucks bad but it's been like that for many, many years. Certainly longer than a decade.
From my point of view, as someone who writes js for a living, I started using NoScript a while ago and the web is generally very fast and nothing shifts position on my screen as things (ads) load. Yeah, I don't do social media and some sites are nothing but a blank page. It's a blessing.
I guess my point is: avoid doing things that annoy you. If you don't like js heavy sites - stop using them. Or just adopt NoScript. No, ublock isn't the exact same thing. The granularity is counter productive in this respect.
Some build setups simply concatenate scripts instead of transpiling etc., and there is no telling how the previous script ended (or didn't). I seem to remember this is done to mitigate some issues that can arise from poorly written scripts that come before this one.
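If this refers to the defensive leading-semicolon convention, here is a toy sketch of the failure it guards against. The snippet contents and names (getValue, result) are made up for illustration:

```javascript
// Sketch of why naive script concatenation breaks, and why bundles often
// begin with a defensive semicolon.
const scriptA = "var x = getValue()";              // ends WITHOUT a semicolon
const scriptB = "(function () { result = 1; })()"; // starts with an IIFE

function run(src) {
  // getValue/result stand in for whatever the real scripts define and touch.
  return new Function(
    "var result = 0; function getValue() { return 42; } " + src + "; return result;"
  )();
}

// Naive concatenation: scriptB's IIFE is parsed as a call on getValue()'s
// return value, i.e. `getValue()(function () { ... })()` -- a TypeError.
let naiveFailed = false;
try {
  run(scriptA + "\n" + scriptB);
} catch (e) {
  naiveFailed = e instanceof TypeError;
}

// A leading semicolon terminates scriptA's statement, so both scripts run.
const safe = run(scriptA + "\n" + ";" + scriptB);

console.log(naiveFailed, safe); // true 1
```

This is why you sometimes see library files that begin with a bare `;` — it's cheap insurance against whatever came before them in the bundle.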
I do love thinking about all the tree shaking, minifiying and bundling that went into slimming down a website as my browser struggles to decode a few MB of json and makes 20 async calls before fully rendering the screen.
These 20 async calls don't actually need to reestablish the connection nowadays, so you don't actually get a significant performance penalty anymore for doing multiple requests.
Unless the requests are triggered only after another request finishes, of course. That still significantly impacts loading times.
> These 20 async calls don't actually need to reestablish the connection nowadays, so you don't actually get a significant performance penalty anymore for doing multiple requests.
Because in practice, the 20 async calls are never actually made in parallel. The site makes one call, waits for it to finish, downloads a few more scripts, makes another call, parses the results of that call using an accidentally-quadratic-time algorithm, sends a bunch of tracking events over WebSockets and waits for them to finish, makes another call, uses that call to trigger a full-page React/Redux re-render, schedules another call in an asynchronous render effect handler...
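A toy sketch of the difference between that waterfall and genuinely parallel calls, with a made-up fakeFetch standing in for a 50 ms network round trip:

```javascript
// fakeFetch simulates one network round trip (~50 ms) and echoes its id.
const fakeFetch = (id) =>
  new Promise((resolve) => setTimeout(() => resolve(id), 50));

// Waterfall: each call awaits the previous one, so n requests cost
// roughly n round trips.
async function waterfall(n) {
  const out = [];
  for (let i = 0; i < n; i++) out.push(await fakeFetch(i));
  return out;
}

// Parallel: all requests start at once, so n requests cost roughly one
// round trip -- which is what HTTP/2 multiplexing over a single
// connection makes cheap.
function parallel(n) {
  return Promise.all(Array.from({ length: n }, (_, i) => fakeFetch(i)));
}

(async () => {
  let t = Date.now();
  await waterfall(5);
  console.log("waterfall:", Date.now() - t, "ms"); // ~250 ms

  t = Date.now();
  await parallel(5);
  console.log("parallel: ", Date.now() - t, "ms"); // ~50 ms
})();
```

The protocol removed the per-request connection cost; it can't remove a dependency chain the application code creates itself.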
I used to run without an ad blocker to support the websites I visited, but then too many would rev up my CPU and freeze my browser just because of all the ads they loaded. The experience of clicking a link started to become unbearable, and I went back to blocking ads.
I know that circa 2018 the ad industry suffered a shock as major players started measuring more precisely what value ads actually brought, and CPC went down. I think things got worse at that point because websites crammed in more ads in response.
Reddit and Medium are awful for this. I don't know what they're doing, but every page has a small chance of infinitely looping to max out a CPU core forever, unless you block their ads. I'll feel my laptop heating up and know that some Reddit thread somewhere is to blame.
I've had to dial-back my ad blocker over the last six months or so.
More and more web sites stall, or refuse to load at all with ad blockers enabled. Even ad-free ones like government web sites, because they use the same tracking mechanisms.
Whenever I do this, I think of the old joke about the prison where all of the prisoners knew the same bunch of jokes by heart, so instead of telling a whole joke, they would just call out a number and everyone would crack up.
You should not have to look this one up: xkcd 624.
Later that day, when everybody is in their cells, some other guy yells: "194". And, as usual, everyone laughs.
Another one says "315". Even more laughter. Another: "873"; the place is a madhouse.
Emboldened and wishing to fit in, the new guy yells: "242".
Total silence. Not even a little chuckle. Embarrassed, the new guy turns toward his cellmate and asks him: "What did I do wrong? Isn't 242 a good joke?" His cellmate says: "It is indeed a good joke; it is hilarious. But you see, son, a joke is not just the story. The most important part is the way you tell it."
I wasn't trying to tell that joke--I figured that everyone here knew it already. Maybe there was someone who hadn't, though! So I'm appreciative that you helped them enjoy the joke. Xkcd 1053.
Yeah, this is something people love to gloss over. On good 5G the connection latency might be 15-20ms to some topologically "close" server. Realistically that's an absolute minimum, and you'll see 50-100ms latency to actual servers.
When work depends on those resources it can't even start until all the data is returned. Far too often nothing can happen until a whole cascade of resources gets returned. Then there's the fun outlier of some slow AdTech script that finally returns, screws with the DOM, and causes a layout and repaint.
It's just today's "What Andy Giveth, Bill Taketh". Great job all around everyone.
0: https://web.dev/cls/
1: https://web.dev/measure/