
100GB of storage is… not much for 1000 users. A 1TB NVMe SSD is $60. So 100GB is a total of $6 of storage… or about half a penny per user.

And that’s for SSD storage… an enterprise-grade 14TB hard drive is only $18/TB on Amazon right now, less than a third of the SSD price per TB. Call it 100GB = $2 of storage, total, to enable 1000 users to run the editor of their choice.
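To make the arithmetic explicit (using the rough Amazon ballpark prices above, nothing authoritative):

    # Per-user storage cost for 100GB spread across 1000 users,
    # at the ballpark prices quoted above.
    ssd_price_per_tb = 60.0   # $ per TB (1TB NVMe SSD)
    hdd_price_per_tb = 18.0   # $ per TB (enterprise 14TB hard drive)
    total_tb = 0.1            # 100MB per user * 1000 users = 100GB
    users = 1000

    for name, price in [("SSD", ssd_price_per_tb), ("HDD", hdd_price_per_tb)]:
        total = price * total_tb                 # dollars for 100GB
        cents_per_user = total / users * 100     # cents per user
        print(f"{name}: ${total:.2f} total, {cents_per_user:.2f} cents per user")

which prints about $6.00 total / 0.6 cents per user for SSD and $1.80 total / 0.18 cents per user for spinning disk.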

So, no, I’m not seeing the problem here.

If you really wanted to penny pinch (one whole penny for every 5 users), I think you could use btrfs deduplication to reduce the storage used.
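Before bothering with that, you'd probably want to estimate how much is even duplicated. A minimal sketch (assumes the per-user installs live somewhere under /home, which is hypothetical, and that you have read access to them; the actual dedup on btrfs would be done by a tool like duperemove):

    import hashlib, os
    from collections import defaultdict

    # Group byte-identical files by content hash and sum the space
    # that could be reclaimed if all but one copy were deduplicated.
    by_hash = defaultdict(list)
    for root, _dirs, files in os.walk("/home"):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # unreadable or vanished file, skip it
            by_hash[digest].append(path)

    reclaimable = sum(
        os.path.getsize(paths[0]) * (len(paths) - 1)
        for paths in by_hash.values() if len(paths) > 1
    )
    print(f"Potentially reclaimable: {reclaimable / 1e9:.2f} GB")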






You're quite obviously doing the math as a single individual would. In a big organization there's always overhead in both time and money, and more importantly there are more problems to solve than there are resources. So you have to set priorities, and just keep the low-priority things in check so they don't boil over.

Preventing people from using their preferred tools — tools which are extremely widely used in the real world — does not seem like a useful application of time and effort.

This is for students, not professionals; youngsters' ideas of "preferred tools" very easily take a backseat to university requirements.

Usefulness depends on conditions we don't know much about here. Sure, there are situations where counteracting the pressure is more expensive than expanding capacity. But frankly I doubt this particular case is one of those.

How many concurrent users can you run off a single NVMe SSD?

How many students leave their coursework to the last minute?

How do you explain that the server went down during the last hour before the submission deadline again, and that everyone gets an extension again, because you cheaped out and put the cheapest possible storage into a system that has to cope with heavy demand at peak times?

How many students now start to do worse because of the anxiety caused by these repeated outages?

How much more needs to be invested in the university counselling services to account for this uptick in students struggling?


That's RAM, not disk.

No… it’s not. To quote the message earlier in the thread, that message said “everyone with >100MB of disk usage on the class server was a VSCode user.”

100MB * 1000 users is how the person I responded to calculated 100GB, which is storage.


He also mentioned 50 node processes, so it would be way higher than 100 MB of RAM, I agree.

Most of the RAM usage would likely just be executable files that are mmap’d from disk… not “real” RAM usage. But, also, the 1000 users in question wouldn’t all be connected at the same time… and I honestly doubt they would all be assigned to the same server for practical reasons anyways.

It’s not easy to estimate the real RAM usage with back of the napkin math.
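Measuring it directly is easier than estimating. On Linux, /proc/<pid>/smaps_rollup splits resident memory into shared pages (largely file-backed, e.g. those mmap’d executables) and private pages. A minimal sketch, where the PID is whichever node process you want to inspect (you need permission to read another user's /proc entries):

    import sys

    # Print total RSS, shared (mostly file-backed/mmap'd) and private
    # resident memory for a PID, from /proc/<pid>/smaps_rollup.
    pid = sys.argv[1]
    fields = {}
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3 and parts[2] == "kB":
                fields[parts[0].rstrip(":")] = int(parts[1])

    shared = fields.get("Shared_Clean", 0) + fields.get("Shared_Dirty", 0)
    private = fields.get("Private_Clean", 0) + fields.get("Private_Dirty", 0)
    print(f"Rss: {fields.get('Rss', 0)} kB, shared: {shared} kB, private: {private} kB")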


Depending on what they're doing, it could easily be multiple GB per user. When you do VSCode remoting, pretty much everything but the UI is running on the server. This includes stuff like code analysis for autocompletion, which - especially for languages that require type inference to provide useful completions - can consume a lot of RAM, and a fair bit of CPU.

> I honestly doubt they would all be assigned to the same server for practical reasons anyways.

The computer science department at my university had multiple servers. All CS students got an account on the same single server by default. Access to the other servers was granted on a case-by-case basis, driven by very course-specific needs.

So yes, in my case, all CS undergrads used the same one server.





