
I'm not quite sure where you're getting that 25% reduction in cores from. Yes, it'd probably be a pretty bad tradeoff to dedicate 25% of the machine to busy-looping on IO just to reduce the IO overhead by a factor of 2. But that's not at all what I'm suggesting. I think there's a reasonable case to be made that application performance measured on two systems with exactly the same hardware would be 2x higher when the IO is moved into user-space.

Kind of depends on the application, wouldn't you say? For some applications, absolutely not. For some applications, maybe yes, assuming the application programmer knows enough not to negate the advantage, e.g. by creating lock contention or cache thrashing. And assuming they know enough to use something like LKL (interestingly also from Intel) instead of reinventing their own filesystem-like layer on top of that raw storage. And assuming they don't make huge security blunders. Because if they don't get all of those things right, it doesn't matter whether their performance is 2x for one brief shining moment before everything goes to hell. That's not comparing apples to apples. You have to hold the functionality/quality bar constant or else the comparison is meaningless.
