Hacker News

This is true if your end goal is to have a super fast program but that is very rarely the case. The GTA online loading times issues went unnoticed for years because Rockstar just didn't care that the loading times were long. Users still played the game and spent a ton of money.

Performance hotspots often are the difference between acceptable and unacceptable performance. I'm sure I'm not the only person who has seen that be the case many times.




I don't think people understand the ways we have adapted to delays. At least once a month I complain about how, when we were kids, commercials were when you went for a pee break or to get a snack. There was no pause button. Binge watching on streaming always means you have to interrupt or wait twenty-five minutes.

I suspect if you spied on a bunch of GTA players you'd find them launching the game and then going to the fridge, rather than the other way around.


And it's not impossible at all (even if perhaps not actually likely) that their entire microtransaction business would run noticeably worse if players were able to jump in and out of a session in an instant. That fridge run during launch? It's a sunk cost that had better be worth it. Now they are committed.

(edit: reading some other parts it turns out the experiment has actually been made, console versions that never had the startup bug are apparently doing fine)


>This is true if your end goal is to have a super fast program but that is very rarely the case.

This is true in some banal sense, but kind of misses the point that there are certain domains where high performance software is a given, and in other domains it may rarely be important. If you're working on games, certain types of financial systems, autonomous vehicles, operating systems, etc. then high performance is critical and something you need to think about quite literally from day one.


> This is true in some banal sense, but kind of misses the point that there are certain domains where high performance software is a given

I work in a field where we're trying to squeeze the maximum amount of juice out of a fixed amount of compute (the hardware we're using only gets a rev every couple of years). My background (MSc + past work) was in primarily distributed systems performance analysis, and we definitely designed our system from day one to have an architecture that could support high performance.

The GP's comment irks me. There are so many tools I use day-to-day that are ancillary to the work I do where the performance is absolutely miserable. I stare at them in disbelief. I'm processing 500MB/s of high resolution image data on about 30W in my primary system. How the hell does it take 5 seconds for a friggin' email to load in a local application? How does it take 3 seconds for a password search dialog to open when I click on it? How does WhatsApp consume the same amount of memory as QGIS loaded up with hundreds of geoprojected high-resolution images?

I agree that many systems don't require maximum-throughput heavy optimization, but there's a spectrum here and it's infuriating to me how far left on that spectrum a lot of applications are.


I feel the same frustration. I work in a field with stupendously tight latency constraints, and I'm shocked by the disparity between how much work we fit into tiny deadlines and how horrifyingly slow GUI software written by well-resourced megacorporations is.

It feels to me like user interfaces are somehow not considered high-performance applications because they aren't doing super-high-throughput stuff, they're "just a gui", they're running on a phone, etc. All of that is true but it misses that guis are latency/determinism sensitive applications.

I remember hearing some quote about how Apple was the only software company that systematically measured response time on their GUIs, and I'd believe it, because my Apple products are by far the snappiest and most responsive computing devices I have (the only thing that even competes is a very beefy desktop).


Yeah, exactly, like... we're doing microsecond-precise high-bandwidth imaging and processing it real-time (not in the Hard Real-Time sense, but in the "we don't have enough RAM to buffer more than a couple of seconds worth of frames and we don't post-process it after the fact" real-time sense) with a team of... 3-5 or so dedicated to the end-to-end flow from photons to ML engine to disk. The ML models themselves are a different team that we just have to bonk once in a while if they do something that hurts throughput too badly.

I'm sure we'd be bored as hell working on UI performance optimization, but if we could gamify it somehow... :D


I'm now in a new position which requires me to interface with Microsoft products regularly; Outlook, Teams, etc. are exactly what you say. Why does it take 5 seconds to search for a locally cached email, when ripgrep can search my entire drive in about the same order of magnitude of time?


GTA5/Online is based on one of the most sophisticated and highest-performance game engines of its time (the RAGE engine).

The infamous loading bug [1] is something that only happened on the PC release of GTA5, which among all platforms was the least popular and came out two years after the console release.

Given how many modern games have outright failed due to performance and frame-rate issues plaguing them on release, I can assure you that had GTA5 shipped with significant performance issues, it's quite likely that reviews would have tanked and it would have sold considerably less than it did.

[1] https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...


I mentioned GTA online because the article mentions it.


Wasn't the loading time getting worse over time as items were added to the master item list too? I.e., initial testing was OK?



