It wouldn't be reasonable on a 4 MB system, but this isn't a 4 MB system, so how's that relevant to anything?
Why do you have 16 GB of RAM or whatever in your laptop if not for applications to use?
> allocation is the enemy of speed
But the complaint wasn't the volume of allocation - it was the size of the working set.
Does reducing that working set perhaps consume more power than allowing it to sit at the current level? For example an in-memory cache of something could save power.
I think it's very naive and simplistic to just complain about memory consumption in isolation. The memory is there as a tool to be used.
> Why do you have 16 GB of RAM or whatever in your laptop if not for applications to use?
This line of thinking is _why_ I have 16 GB of RAM to use. I have an "old" machine -- a 4GB Mac Mini from barely a few years ago. It's now almost unusable with some apps. It's like a kind of software inflation. What used to run fine is now an impossibility I'd briefly recall in a dream.
I have 16GB of RAM on my work computer so my compiler can process hundreds of MB worth of source files into a fully functional program in less than 5 seconds. I have it on my home computer so I can run a fully simulated 3D world inside a videogame at 60FPS. I don't have it so some coder somewhere doesn't have to add lazy-loading to their file-syncing app.
Yes. Dropbox built its reputation on being a reliable, set-it-and-forget-it background service that kept your files synchronized. Applications that occupy user attention and serve the explicit purpose of the user may make a variety of dubious cases for reckless memory consumption, but it's a lot harder to justify for something that's supposed to be invisible.
This way of thinking works as long as you have very few bloated apps.
It stops working once it affects literally everything. 1Password? A few hundred MB (unless it occasionally balloons to a few GB). Browser? A GB or so. Some Electron apps? Another GB. It really adds up, but no single app will take responsibility, because "what else are you going to use the memory for?"
I'm sure memory is the wrong place to put the trade-off all the time, because we already see that failing. Few people have the 16 GB systems mentioned above. That's the reason Ripcord (https://cancel.fm/ripcord/) can exist and charge money even though the Slack client is available for free.
I expect time-to-market is the trade-off in this case, but I'm not even sure about that. (For example ripcord is written by one person - how many web developers does slack have?)
It’s the wrong place to put the trade off because I highly doubt I’m receiving more value from Dropbox using 500MB as compared to 100MB. Unlike what VCs are telling Dropbox, I only want something to sync files from one computer to another and it most certainly is not the center of my workflow.
In my opinion, it reeks of lazy engineering brought about by PMs who want to turn Dropbox into something it isn't.
The discussion is about Dropbox, a background service that used to have much more reasonable memory use and which has competitors using far less memory. A trade-off isn't necessary.
I don't really understand your question. I do spend, invest, and gift the money I make, and I wouldn't bother making it if I didn't need and want to do those things.
And you’ve already spent the money on your RAM. Why do you want it to sit idle when it could be being used to, for example, decrease sync time in Dropbox?
To reduce memory consumption of Dropbox costs money. Either via development time at Dropbox (so an increase in your subscription fee), via power drawn at your socket, or perhaps somewhere else.
Why are people here so utterly convinced that reducing memory consumption is the right place to spend money to get the best value? What do they know that I don’t?
I don't know how you do it, but when sending data from a file, I read chunks of the file and send the chunk I have read, not the entire thing. These chunks can be very small, particularly as the MTU over a network is something like 1500 bytes (unless using jumbo frames). So to sync 10 files in tandem (10 threads), they would need 10 * 1500 bytes for the read buffers (plus overhead for storing the file pointers). Even you can see that this is a tiny amount of RAM.
Or are you living in a world where your internet upload speed is somehow faster than reading from disk or memory, so your network interface has to wait on disk/memory reads?
Are they reading the synced files entirely into memory or something stupid like that?
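To make the point above concrete, here's a minimal sketch of the chunked approach (a hypothetical helper, not Dropbox's actual code): reading and sending in fixed-size pieces keeps peak buffer memory at the chunk size per file, no matter how large the file is.

```python
import io

CHUNK_SIZE = 1500  # roughly one Ethernet MTU; real code might use 64 KB or more


def send_file_chunked(src, send):
    """Read `src` in CHUNK_SIZE pieces and pass each one to `send`.

    Only one chunk is held in memory at a time, so buffer usage stays
    at CHUNK_SIZE bytes per file regardless of file size.
    Returns the total number of bytes sent.
    """
    total = 0
    while True:
        chunk = src.read(CHUNK_SIZE)
        if not chunk:  # EOF
            break
        send(chunk)
        total += len(chunk)
    return total


# Example: "sync" a 1 MB in-memory file through a 1500-byte buffer.
data = io.BytesIO(b"x" * 1_000_000)
sent_chunks = []
total = send_file_chunked(data, sent_chunks.append)
print(total)                             # 1000000
print(max(len(c) for c in sent_chunks))  # 1500
```

Ten such transfers in parallel would hold ten chunks at once, which is the 10 * 1500 bytes of buffer space mentioned above.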