
I am this person. I work as a researcher finding 0-days.

From the employee perspective: Wages are equal. Big Tech work is less interesting (build big bug-finding machines that find a high quantity of bugs, then file them into some bug tracker where they sit, only to maybe be fixed in 3 months). Offensive security work is more interesting. It requires intimate knowledge of the systems you research, since you only need a handful of bugs and the shallow ones get found by Big Tech. You must go deep. Additionally, offensive security requires the know-how to go from vulnerability to code execution. Exploitation is not an easy task. I can't explain why engineers work for companies that I deem immoral, but that's probably because they don't feel the same way as I do.

From the employer perspective: How much does a rate of X vulnerabilities per year cost me? If our code has bugs but is still considered the most secure code on the market, it may not benefit the company to increase the security budget. If the company does expand the security budget, then which division is getting cut because of it, and what is the net result for the company's health?

If you want to fix the vulnerabilities, you need to make the price of finding and exploiting them higher than the people buying them can afford. And you must keep that price high as advances in offensive security work to lower the cost of finding and exploiting them. Since defensive companies don't primarily make money from preventing bugs, and offensive companies do primarily make money by finding bugs, there is a mismatch. The ultimate vulnerability in a company, or any entity, is finite resources.




I mean wages might be equal (are they, though? Big tech pays a lot as you go up) on average but there’s a lot of difference in how they pay out. Big tech usually provides compensation bands where your salary is pretty stable. Vulnerability research frequently has your compensation hinge on your performance to a much larger extent.


Would the number of critical vulnerabilities be lower if we sacrificed some performance? 10-20%?


Yes, this has been a trend for a little while now. For example, this gist[1] gives Linux boot parameters that make Linux significantly faster, and all they really do is turn off the default security mitigations. I would make the distinction between vulnerabilities and "exploitable" vulnerabilities, though. Mitigations usually come with a runtime performance hit but don't remove the underlying flaws; they just make it harder, or sometimes impossible, to escalate a little flaw into full-blown code execution.

But also know that offensive techniques advance alongside defensive ones. For example, ASLR was once considered the death of vulnerability research, but new methods and ideas were found and bypassing ASLR is now just part of the job. Each mitigation must be regularly evaluated against the state of the art, and against its cost in performance (and complexity, etc.). You ideally don't want to be paying performance costs when they aren't helping security.
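
To make that concrete, the sort of thing such a gist boils down to (a minimal sketch; the real list has many more flags and the exact set depends on kernel version) is appending something like this to the kernel command line:

    # /etc/default/grub (Debian/Ubuntu style): turn off the default CPU
    # vulnerability mitigations (Spectre/Meltdown and friends) for speed.
    # Don't do this on a machine that runs untrusted code.
    GRUB_CMDLINE_LINUX_DEFAULT="mitigations=off"

    # then regenerate the bootloader config and reboot, e.g.
    #   sudo update-grub && sudo reboot

All of the speedup comes from skipping those checks; nothing about the underlying flaws changes.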

Rust, Zig, and others additionally pay compile-time performance costs to remove some of the underlying vulnerabilities themselves, which is interesting and probably a good thing for software.
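
A minimal sketch of what that compile-time check buys you (illustrative only, assuming nothing beyond standard Rust): a use-after-free that a C compiler would happily accept gets rejected outright:

    // The kind of memory-safety bug the borrow checker rejects at compile time.
    fn main() {
        let v = vec![1, 2, 3];
        let first = &v[0]; // borrow an element of the vector

        // Uncommenting the next line frees `v` while `first` still points
        // into it; rustc refuses to compile ("cannot move out of `v`
        // because it is borrowed"), so the dangling read never ships.
        // drop(v);

        println!("first element: {}", first);
    }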

[1] https://gist.github.com/jfeilbach/f06bb8408626383a083f68276f...


thanks


You don't need to cut any division; profits can also change.


Most investors back companies with a kind of “extraction” mindset. They only want to Solve The Problem insofar as they can turn that into a stream of income.

Why would they give that to the employees to improve The Solution? It’s already solved as far as The Market is concerned. That would be Bloat.



