Hacker News

What kind of workload will do that?



A build server recompiling multiple branches over and over in response to changes.


And logging all of the unit tests associated with all of those builds (and rolling those logs over with TRACE-level debugging enabled).

Every build gets fully tested with maximum TRACE logging, so that anyone who looks at the build/test later can search the logs for the bug.
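A sketch of what that log rollover might look like with Python's standard `logging` module. TRACE is not a built-in Python level, so the custom level number (5, below DEBUG) and the logger/file names here are illustrative assumptions, as are the tiny rotation sizes used for the demo:

```python
import logging
import logging.handlers
import os
import tempfile

# TRACE is not built into Python's logging; level 5 (below DEBUG=10)
# is a common convention -- an assumption for this sketch.
TRACE = 5
logging.addLevelName(TRACE, "TRACE")

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "build.log")

# Roll the log over when it hits maxBytes, keeping backupCount old
# copies (build.log.1, build.log.2, ...). Sizes are tiny for the demo;
# a build server would use something like maxBytes=1 GiB.
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=1024, backupCount=3)

logger = logging.getLogger("build")
logger.setLevel(TRACE)  # capture everything, all the way down to TRACE
logger.addHandler(handler)

for i in range(200):
    logger.log(TRACE, "unit test %d: step details ...", i)

# Once backupCount is exceeded, the oldest rotated log is dropped.
print(sorted(os.listdir(log_dir)))
```

With real sizes, this is how "save everything at TRACE, roll over forever" stays bounded per build while the archive accumulates on bulk storage.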

8 TB of storage is a $200 hard drive. Fill it up, save everything. Buy five drives and replicate the data across them redundantly with ZFS and stuff.
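For the five-drive ZFS setup, the admin commands might look like the sketch below. The pool name, device paths, and raidz1 layout are all assumptions; this needs real (or loopback) disks, so treat it as a config fragment, not something to paste blindly:

```shell
# Five 8 TB drives with one drive's worth of parity (raidz1):
# roughly 32 TB usable, and any single drive can fail without data loss.
# "buildlogs" and /dev/sd[a-e] are placeholder names.
zpool create buildlogs raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

# TRACE logs are highly repetitive and compress extremely well.
zfs set compression=zstd buildlogs

# Verify the pool came up healthy.
zpool status buildlogs
```

raidz2 (two parity drives) is the more cautious layout for the same five disks, at the cost of dropping to ~24 TB usable.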

1 TB of SSD storage is $100 (for 4 GB/s) to $200 (for 7 GB/s). Copy data from the hard-drive array on the build server to your local workstation as you try to debug what went wrong in some unit test.
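Back-of-the-envelope arithmetic for those SSD speeds, assuming the network and the source array can keep the drive fed (a generous assumption for a spinning-disk array):

```python
# How long to pull a given amount of data onto a local NVMe drive
# at the drive's sequential write speed? Pure arithmetic sketch.
def copy_seconds(size_gb: float, rate_gb_per_s: float) -> float:
    return size_gb / rate_gb_per_s

# 1 TB onto the $100 4 GB/s drive vs the $200 7 GB/s drive:
print(round(copy_seconds(1000, 4)))  # 250 seconds, ~4 minutes
print(round(copy_seconds(1000, 7)))  # 143 seconds, ~2.5 minutes
```

So even a full terabyte of logs lands locally in minutes, which is what makes the "copy it down and grep it on your workstation" workflow practical.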


My mind was blown when I had to send a fully logged five minutes of system operation to a friend for diagnostics (macOS 11 on an M1 Mini). He wasn't joking when he said not to go over a few minutes: the main 256 GB system drive almost ran out of space in that time. After compressing the capture down from 80 GB and sending it over, my mind was blown again when he explained he had to move to his workstation with 512+ GB of RAM just to open the damn file.



