
That inefficient network has better latency than your computer when trying to show you a pixel: <http://newstmobilephone.blogspot.com/2012/05/john-carmack-ex...>



Except that such a network call can't replace the pixel output.

It just adds to the overall latency.

Also, the real latency of web pages is measured in seconds these days. People are happy when they're able to serve a request in under 0.2 seconds.


Fifteen years ago I used to target 15ms as seen in the browser's F12 network trace (not as recorded on the server!), and if I mention such a thing these days, people are flabbergasted.

For example, I had a support call with Azure asking them why the latency between Azure App Service and Azure SQL was as high as 13ms, and they asked me if my target user base was "high frequency traders" or some such.

They just could not believe that I was expecting sub-1ms latencies as a normal thing for a database response.


> if I mention such a thing these days people are flabbergasted

I think I'm just learning this the hard way, given the down-votes of the initial comment. :-)

Maybe people really don't see the issue with adding layer after layer of stuff, and that we've reached, no, surpassed even, some tragicomic point? Computers are thousands of times faster, yet the end-user experience becomes more sluggish with every passing year. We have an issue, I would say. And it's actually not even funny anymore.


I worked at a customer that upgraded to a major new MySQL version and boosted the hardware to be 4x more performant (RAM + cores). Result: their average transaction time climbed from 0.1ms to 0.2ms, and all the support personnel started complaining since their software started taking 4 seconds to load a new screen instead of 2. The support systems operated on raw data, with no cache, to show the actual state.

Managed to fix the problem and restore performance by suggesting we disable the MySQL query cache which was slowing down the system with more cores.

And this had nothing to do with banking or high-frequency anything. Just code that did lots of small queries to fill a screen full of data for support staff.
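
A minimal my.cnf sketch of how that switch-off looks (assuming MySQL 5.x; the query cache was removed entirely in MySQL 8.0):

    [mysqld]
    # 0 / OFF: never consult or populate the query cache
    query_cache_type = 0
    # release the memory that was reserved for cached result sets
    query_cache_size = 0

If I remember right, starting the server with query_cache_type = 0 also means it never takes the query cache mutex at all, which is exactly the lock discussed further down in the thread.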


> Managed to fix the problem and restore performance by suggesting we disable the MySQL query cache which was slowing down the system with more cores.

This approach to a fix sounds very counterintuitive.

Did you manage to figure out why that was? The explanation is likely interesting!

What I could think of: context switches across cores constantly invalidated data in the CPU caches. In such a case, CPU pinning would maybe help. (Just speculating! I'm not an expert on such things. But I know that the CPU cache, and memory I/O in general, is the single most important topic when dealing with the usual performance issues on modern CPUs. Today's CPUs are only fast when they have their data available in their local cache(s). Fetching from RAM is by now like fetching from spinning rust a decade ago; it will kill your performance no matter how fast your CPU is.)
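
To make that cache-vs-RAM point concrete, here is a rough Go sketch (illustrative only, nothing to do with the MySQL case): it performs the same number of dependent "next index" loads over a large slice, once in sequential order and once in random order. On typical hardware the random walk is many times slower, because nearly every load misses the CPU caches and pays full RAM latency.

    // Illustrative sketch: same number of dependent loads, sequential vs.
    // random order. The random walk defeats the CPU caches and prefetcher,
    // so nearly every load pays full RAM latency.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    var sink int64 // keeps results live so the loops aren't optimized away

    func main() {
        const n = 1 << 24 // ~16M int64 entries per slice (~128 MB), well past L3 size

        // Build two "next index" chains that each visit every slot exactly once:
        // one in sequential order, one following a random permutation.
        seq := make([]int64, n)
        rnd := make([]int64, n)
        perm := rand.Perm(n)
        for i := 0; i < n; i++ {
            seq[i] = int64((i + 1) % n)
            rnd[perm[i]] = int64(perm[(i+1)%n])
        }

        // Each step loads the index for the next step, so the CPU cannot
        // overlap the misses: the walk runs at memory latency, not bandwidth.
        walk := func(next []int64) time.Duration {
            start := time.Now()
            idx := int64(0)
            for i := 0; i < n; i++ {
                idx = next[idx]
            }
            sink = idx
            return time.Since(start)
        }

        fmt.Println("sequential walk:", walk(seq)) // mostly cache/prefetch hits
        fmt.Println("random walk:    ", walk(rnd)) // mostly cache misses
    }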

Tangentially related: https://news.ycombinator.com/item?id=14888360


The query cache was behind a single lock. Thus, while fast, it serialized all DB operations, and the more CPUs you have, the slower the performance. Especially since any write has to go through the cache and invalidate all affected queries.

It was meant to accelerate PHP code in the '90s on single-core CPUs. It was never designed to be scalable, and the in-development (back then) versions had already disabled it, which gave me the idea that it was something to try.
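
A rough Go sketch of that failure mode (illustrative only, not MySQL code): a lookup table guarded by one global mutex. Adding workers on more cores doesn't raise throughput, and the time per operation typically gets worse, because every lookup has to take the same lock and the lock's cache line bounces between cores.

    // Sketch of a "cache" behind a single global lock, the failure mode
    // described above: every lookup serializes on one mutex, so extra
    // cores add contention instead of throughput. Illustrative only.
    package main

    import (
        "fmt"
        "runtime"
        "sync"
        "time"
    )

    type lockedCache struct {
        mu      sync.Mutex
        entries map[string]string
    }

    func (c *lockedCache) get(key string) (string, bool) {
        c.mu.Lock() // one lock for readers and writers alike
        defer c.mu.Unlock()
        v, ok := c.entries[key]
        return v, ok
    }

    func main() {
        c := &lockedCache{entries: map[string]string{"SELECT 1": "cached result"}}
        const opsPerWorker = 1_000_000

        for _, workers := range []int{1, runtime.NumCPU()} {
            start := time.Now()
            var wg sync.WaitGroup
            for w := 0; w < workers; w++ {
                wg.Add(1)
                go func() {
                    defer wg.Done()
                    for i := 0; i < opsPerWorker; i++ {
                        c.get("SELECT 1")
                    }
                }()
            }
            wg.Wait()
            elapsed := time.Since(start)
            fmt.Printf("%2d workers: %v total, %v per op\n",
                workers, elapsed, elapsed/time.Duration(workers*opsPerWorker))
        }
    }

The same shape shows up anywhere a "fast" shared structure sits behind one coarse lock: past a few cores, the lock itself, not the work it protects, becomes the bottleneck.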


Thanks!

Indeed unexpected. But interesting to know.



