
So one of the most wonderful things about relying on their proprietary closed-source operating system is that you can't have external code audits. You just kind of wait for ethical people to come forward and explain bugs they've found, and wonder:

1. How long has it been there?
2. How long have bad actors known about this?
3. How many other bugs just like this, or worse, haven't they found yet?
4. Do I need to recreate VM images, or can I trust the internal patch process to get it installed before I've been exploited?
5. Does the patch actually fix the underlying security flaw, or is it something they're calling a "feature" now that will always be an issue?

I'm so grateful to not be a janitor for Microsoft Windows software anymore.



You're mixing a lot of things for no reason; the problems you describe really have very little, or even nothing, to do with open source vs. proprietary, or even with OSes.

Points 2/3/4 are exactly the same on other OSes, even open source ones.

Point 1 might be easier to answer yourself, or via someone who is not the vendor, with open source OSes, while for Windows or OS X you depend on the vendor to tell you with certitude "starting with X" (which they always do). But on the other hand, the centralized and streamlined patching model makes it much, much easier to identify which patch caused it, compared to "which layer of package maintainer or upstream caused it: is it a flaw in SOFT, or in Debian's SOFT-up3, or what?"

Point 5 has nothing to do with open source either; on either model you can easily test whether it's fixed or not. Whether it's considered a bug or a feature-won't-fix is pretty much always answered for you, so you don't have to ask yourself (though if they do consider it normal, then on closed-source proprietary software you can't fix it yourself, although they usually give you a config change to get what you want).


> You just kind of wait for ethical people to come forward and explain bugs they've found

And the same applies to open source software. It's not like all the bugs in open source software were found in audits, or that you somehow magically know how long an issue has been exploited by bad actors.


Microsoft Windows is proprietary software, yes, but they have something called the Shared Source Initiative.

> Through the Shared Source Initiative Microsoft licenses product source code to qualified customers, enterprises, governments, and partners for debugging and reference purposes.

https://www.microsoft.com/en-us/sharedsource/

I say this as someone who doesn’t like Windows and doesn’t run Windows. We still need to admit that Microsoft does indeed let others read the source code; it’s just that they decide who gets to read it and who doesn’t.


The problem is that it would be dangerous for any FOSS developer to be among those chosen to see their sources, for obvious legal reasons. Anyone willing to be exposed to Microsoft's IP and NDAs that way is probably already so tied to them that we couldn't count on any independent security auditing and reporting without Microsoft authorizing it.


The key question is: would they let in people who want to find bugs? Because that is the point here: if you can read the source but aren't allowed to do an audit, it makes no difference (for the issue we're discussing).


Can you clarify the distinction? They share the source code so that other people can do auditing, obviously. But what would be the scenario where you are allowed to read the code, but you're not allowed to look for issues? Have you ever seen that set up anywhere? It would not make any sense.


See for example the Enterprise Source Licensing Program page https://www.microsoft.com/en-us/sharedsource/enterprise-sour...

Allowed purposes for said licensing program include “performing internal security audits of the Microsoft Windows operating system”.


OpenSSL code audits have been great, which is why it is such a good example of secure FOSS software.


Why, yes. OpenSSL has seen vast improvements, not just in code but also in its processes, and multiple audits, due to Heartbleed.
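
For the curious, the core of the bug was tiny: the heartbeat handler trusted an attacker-supplied length field. Here's a minimal sketch of the pattern in plain C (my own simplification, not OpenSSL's actual code; the function and variable names are made up):

    #include <stdlib.h>
    #include <string.h>

    /* Sketch of the Heartbleed pattern (CVE-2014-0160). The first two
       bytes of the record claim the payload length, and the vulnerable
       code never checked that claim against the real record size. */
    unsigned char *echo_heartbeat(const unsigned char *record, size_t record_len)
    {
        unsigned short payload_len = (record[0] << 8) | record[1];
        const unsigned char *payload = record + 2;

        /* The missing check, added by the fix:
           if ((size_t)payload_len + 2 > record_len) return NULL; */
        (void)record_len;  /* unused in the vulnerable version */

        unsigned char *reply = malloc(payload_len);
        if (reply == NULL)
            return NULL;
        /* If payload_len exceeds the bytes actually received, this reads
           past the record into adjacent heap memory: up to ~64 KB leaked
           per request, echoed straight back to the attacker. */
        memcpy(reply, payload, payload_len);
        return reply;
    }

The fix itself was essentially the one-line bounds check in the comment; the audits and process changes came afterwards.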


After how many years of deployment into production?


If you're asking me personally: OpenSSL always had a funny smell, even at the time, and so did TLS, simply because it all seemed way too complicated. TLS v1.3 agrees. As far as TLS implementations go, I think pretty much all of them have had major, critical flaws. Microsoft's SChannel had an RCE from the day it was born, patched the same year as Heartbleed; Apple's Secure Transport had "goto fail" (also in 2014, if I recall); etc.
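
Since goto fail came up: the whole bug fits in a couple of lines. A condensed, self-contained illustration of the pattern (my own reconstruction, not Apple's verbatim source; verify, hash_ok, and signature_ok are made-up names):

    #include <stdio.h>

    /* The shape of Apple's "goto fail" bug (CVE-2014-1266): a duplicated
       `goto fail;` always jumps to cleanup while err is still 0, so the
       final check is unreachable and verification "succeeds". */
    static int verify(int hash_ok, int signature_ok)
    {
        int err = 0;

        if ((err = hash_ok ? 0 : -1) != 0)
            goto fail;
            goto fail;                            /* the duplicated line */
        if ((err = signature_ok ? 0 : -1) != 0)   /* never reached */
            goto fail;

    fail:
        return err;  /* 0 ("verified") whenever the hash step passed */
    }

    int main(void)
    {
        /* Valid hash, forged signature: still accepted. */
        printf("%s\n", verify(1, 0) == 0 ? "accepted" : "rejected");
        return 0;
    }

The duplicated line is easy to miss because the indentation makes it look like part of the if body, which the compiler doesn't care about.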


But that didn't answer his question.


Microsoft can easily pay for external software audits. They just need the auditors to sign an NDA or other agreement stating that access to the code is to be used only to audit it, and nothing else.


At Microsoft's size it may make more sense to just hire an auditing team that works internally.



