I know you're in this for the satire, but it's less about the webapps needing the memory and more about the content - that's why I mentioned video editing webapps.
For video editing, 4GiB of completely uncompressed 1080p video in memory is only 86 frames, or about 3-4 seconds of video. You can certainly optimize this, and it's rare to handle fully uncompressed video, but there are situations where you do need to buffer this into memory. It's why most modern video editing machines are sold with 64-128GB of memory.
In the case of Figma, we have files with over a million layers. If each layer takes 4 KB of memory, we're suddenly at the limit even if the webapp is infinitely optimal.
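(For scale, a quick back-of-the-envelope; the ~4 GiB 32-bit address-space ceiling is my assumption about which limit is meant:)

    layers = 1_000_000
    bytes_per_layer = 4 * 1024                    # 4 KB per layer
    total_gib = layers * bytes_per_layer / 1024**3
    print(f"{total_gib:.1f} GiB")                 # ~3.8 GiB, right up against a 4 GiB ceiling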
Apparently with 24 bytes per pixel instead of bits :)
Although to be fair, there's HDR+ and DV, so probably 4 (RGBA/YUVA) floats per pixel, which is pretty close...
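For anyone who wants to check the arithmetic, here's a rough sketch covering the three pixel sizes mentioned above (1920×1080 frames and 30 fps are my assumptions):

    GIB = 1024 ** 3
    PIXELS = 1920 * 1080  # one 1080p frame
    FPS = 30              # assumed frame rate

    # bytes per pixel under the three readings discussed above
    formats = {
        "3 bytes/px (plain 8-bit RGB, i.e. 24 bits)": 3,
        "24 bytes/px (the figure the 86-frame math implies)": 24,
        "16 bytes/px (4 floats, RGBA/YUVA HDR)": 16,
    }

    for name, bpp in formats.items():
        frame_bytes = PIXELS * bpp
        frames = (4 * GIB) // frame_bytes
        print(f"{name}: {frames} frames in 4 GiB, ~{frames / FPS:.1f} s at {FPS} fps")

It prints roughly 690 frames (~23 s) for plain 8-bit RGB, 86 frames (~3 s) at 24 bytes per pixel, and 129 frames (~4 s) for 4-float HDR, which lines up with the figures quoted above.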
Let's see. I can cache the information that example.com's certificate is valid until May 31, 2026, but then how do I find out if it gets revoked before that date?
And if I cache the information that it's revoked, how do I know when it's valid again?
I could check, say, once per day, even if I don't access that site.
In any case, I'm still leaking which domains I browse, and I keep trusting cached certificates until the next check.
On the other hand, with short-lived certificates I would be trusting a certificate for a longer time: until it expires.
Downloading a list of all certificates and their status from every CA is probably infeasible.
It seems that we can't escape a tradeoff between privacy and security.
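To make that tradeoff concrete, here's a minimal sketch of the daily-check cache described above; check_revocation is a hypothetical stand-in for a real OCSP/CRL query, not any actual API:

    import time

    CHECK_INTERVAL = 24 * 60 * 60  # re-check a domain at most once per day

    # domain -> (revoked, last_checked)
    _cache: dict[str, tuple[bool, float]] = {}

    def check_revocation(domain: str) -> bool:
        """Hypothetical stand-in for a real OCSP/CRL lookup."""
        return False

    def is_revoked(domain: str) -> bool:
        """Cached revocation status, refreshed at most once per day.

        The tradeoff from the comment above: between refreshes we keep
        trusting whatever we saw last (a cert revoked an hour after our
        check is still accepted until tomorrow), and every refresh tells
        the responder which domain we're interested in.
        """
        now = time.time()
        cached = _cache.get(domain)
        if cached is not None and now - cached[1] < CHECK_INTERVAL:
            return cached[0]
        revoked = check_revocation(domain)
        _cache[domain] = (revoked, now)
        return revoked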
I bet they'll phase it out and try to force their worse service, wherein your data is stored on their servers, like they tried to do with PINs. It took enormous pushback to get them to drop mandatory PINs, and even then they made it nagware for a year or two.
I didn't trust their rationale that PINs and remote attestation somehow mean your data is secured by a small passphrase, just as I won't trust them not to remove a useful, existing feature I already rely on for backups.
Also not mentioned: they designed their existing backup solution so that you need reverse-engineered community tools to actually access your data; I have to use a GitHub project to decrypt the backup and export my chats, which is something I've never had to do with any other messenger.
From your link, I wish they would answer this; they've been asked numerous times and, to my knowledge, have avoided the question (which is very concerning to me):
>This is excellent news! Will there also be official documentation on the backup format, potentially even official tooling like signalbackup-tools[0] to access/parse backups offline? I'm asking because, having used Signal/TextSecure for 10 years now, my backups are worth a lot to me (obviously) and there have been times when I would have liked to mine & process my backed-up data. (Extract media from conversations in an automated manner, build a more elaborate search, …)
I'm like that poster and back up all my chats obsessively, as I have since way back in the day; I experienced a period with Signal where it was impossible for me to access my own data because of their position.
Took me a really long time to realize that I should scroll. Because why would I? There is absolutely no indication that there is anything to scroll to.
I clicked on the two avatars but that didn't get me very far and the only thing left to click was "by alvin chang" but that was about as fruitful as I imagined it would be.
So I assumed it was a podcast and re-checked that I had audio on, etc. But nope, so I tried another browser. Same there... Then I read the HN comments. Ah... Great design? ...
Same here — once you get the scrolling part it's pretty great, but like you I was stuck at the top for a while. A downwards-pointing arrow on the hero would help a lot here.
Firefox in Windows has the tiniest little scrollbar indicator in the top right that honestly blends in very well with the background. I didn't realize I needed to scroll until I came to the comments. I clicked around... got some interaction... but basically left the first time being very confused.
I have Firefox on macOS as well, but I don't see a scroll bar until I start scrolling. Could be because I'm using an external trackpad, and not a mouse.
I was going to say that somehow I knew I had to scroll the first time I entered. But I went back after reading your comment, and I have no idea how I found out the first time; there is no indication that there is content below.
I was viewing on desktop and the blank space all around made it immediately feel like an article that required a scroll to view the content below the fold.
Seeing the timestamps change as I scrolled and seeing a progress "bar" update within the speech balloons during the dialogs made it more obvious I just had to scroll to see the content change.
I do think the progress bar color is low contrast enough that some might not see it and not realize they have to scroll to cause the dialog to update, though.
> Took me a really long time to realize that I should scroll. Because why would I? There is absolutely no indication that there is anything to scroll to.
> I clicked on the two avatars but that didn't get me very far and the only thing left to click was "by alvin chang" but that was about as fruitful as I imagined it would be.
Thank god I wasn’t the only one; I just posted a similar comment here.
A random macOS binary is more likely to run on another macOS install from anytime in the last half decade than a Linux binary on the same distribution.
Even Apple’s famously fast deprecation looks rock-solid by comparison.
I'm not sure why you think this is a good metric; the space of "random Mac binaries" is far smaller. There's probably something to be said for this "curation," but you pay for it, both literally with money and in limited selection.
I don’t know; you don’t think having Win32 be the unofficial API is a problem?
It effectively means Windows will always exist as the preferred IDE and reference spec for the Linux desktop. It also means all evolution of Linux will, ironically, be constrained by Win32 compatibility requirements.
Your vision has motion blur. Staring at your screen from a fixed distance with no movement is highly unrealistic and lets you see crisp 4K images no matter the content. This results in a cartoonish experience because it mimics nothing in real life.
Now you do have the normal problem that the designers of the game/movie can't know for sure what part of the image you are focusing on (my pet peeve with 3D movies) since that affects where and how you would perceive the blur.
There's also the problem of overuse, of using it to mask other issues, or of adding it purely as an artistic choice.
But it makes total sense to invest in a high refresh display with quick pixel transitions to reduce blur, and then selectively add motion blur back artificially.
Turning it off is akin to cranking up the brightness to 400% because otherwise you can't make out details in the dark parts of the game... that's the point.
But if you prefer it off then go ahead, games are meant to be enjoyed!
Your eyes do not have built-in motion blur. If they are accurately tracking a moving object, it will not be seen as blurry. Artificially adding motion blur breaks this.
Sure they do: the moving object in focus will not have motion blur, but the surroundings will. Motion blur is not indiscriminately adding blur everywhere.
> Motion blur is not indiscriminately adding blur everywhere.
Motion blur in games is inaccurate and exaggerated and isn’t close to presenting any kind of “realism.”
My surroundings might have blur, but I don’t move my vision the way a 3D camera is controlled in a game, so in the “same” circumstances I do not see the blur you see when moving a camera through 3D space in a game. My eyes jump from point to point, meaning the image I see is clear and blur-free. When I’m tracking a single point, that point remains perfectly clear while, sure, the surroundings around it blur.
However, motion blur in games literally cannot replicate either of these realities; it just adds a smear on top of a smear on top of a smear.
So given that both are unrealistic, I’d prefer the one that’s far closer to how I actually see, which is the one without yet another layer of blur. Modern displays add blur and modern rendering techniques add more; I don't need EVEN more piled on top via in-game blur.
It is just centralizing the web. You can do a lot with a $4 droplet if a single-board computer isn't your cup of tea. Not "buying" into iCloud/Cloudflare is alone worth that cost. It's also a much more meaningful stack to learn.
Nothing against the post/author; I just feel the creativity spent "exploiting" features of the giants that are put in place just to undermine alternatives is misplaced.
You're removing autonomy from the support team, and this will demoralize them.
The issue becomes that you have two teams: one moving fast, adding new features that often seem nonsensical to the support team, and the other cleaning up afterward. Being on the clean-up crew ain't fun at all.
This builds up resentment, i.e. "Why are they doing this?".
EDIT: If you make support-team approval necessary for the feature team, you'll remove autonomy from the feature team, causing resentment in their ranks (i.e. "Why are they slowing us down? We need this to hit our KPIs!").
Some 20+ years ago we solved this by leapfrogging.
Team A does the majority of new features in major release N.
Team B for N+1.
Team A for N+2.
Team A maintains N until N+1 ships.
Team B maintains N+1 until N+2 ships.
Sounds about right. Guess 512 GiB of memory is the minimum to read email nowadays.