Hacker News

Serious security advisories affecting closed-source programs often don't become exploits in the wild, due to the nature of closed source and the implied secrecy involved.

For example, IE had a major issue involving SChannel that more or less trivially gave attackers the ability to run arbitrary code at admin or SYSTEM level. It was scary. It was reported privately and then patched. At no point in this process did anyone have the source code to analyze and publish an early PoC, the way they did with Shellshock and Heartbleed. When the patch was released, it was a binary, so no one could just compare the old code to the new, figure out exactly what the problem was, and launch an attack. Sure, they could analyze the binary, but that gives limited and often unusable results. Or, at the very least, it puts up enough barriers to buy time for patch installs.

It's funny: years ago we used to worry about our Windows servers; now we only worry about our Linux servers. FOSS's transparency is ugly when it comes to exploits, because they go from discovered to in-the-wild very, very quickly. Even when they don't, once the patch is released the attackers have the exploit almost instantly, and that means if your organization can't patch within a couple of hours, you're screwed. The recent Drupal exploit is a good example: it went from published to bots hacking Drupal installs within seven hours. Millions of sites were affected.




"it was a binary, so no one could just compare the old code to the new and figure out exactly what the problem was and launch an attack"

Sorry, but this is plain false: people doing vulnerability research on closed-source software do diff the binaries to understand the patch.
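To illustrate the point: the core of patch diffing is just "find what changed, because the change points at the bug." Real-world tooling (e.g. BinDiff or Diaphora) diffs at the disassembled-function level rather than raw bytes, but a toy byte-level sketch of the idea looks like this (the function and parameter names are mine, for illustration only):

```python
def changed_regions(old: bytes, new: bytes, gap: int = 8):
    """Return (start, end) byte offsets of contiguous regions that differ
    between an old and a patched binary, merging changes separated by
    fewer than `gap` identical bytes into one region."""
    length = min(len(old), len(new))
    diffs = [i for i in range(length) if old[i] != new[i]]
    if len(old) != len(new):
        diffs.append(length)  # a size change counts as a trailing diff
    regions = []
    for i in diffs:
        if regions and i - regions[-1][1] < gap:
            regions[-1][1] = i + 1  # close enough: extend previous region
        else:
            regions.append([i, i + 1])  # start a new changed region
    return [tuple(r) for r in regions]
```

An attacker would then focus reverse-engineering effort only on the functions overlapping those regions, which is why binary-only patches slow analysis down but don't prevent it.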


You're right, but there's no question that being somewhat vague with the patch details and shipping only a changed binary slows attackers down a bit in producing a functional exploit. Even if you're only buying maybe 6-24 hours, that can mean extra time for millions of machines to patch.


This is the debate about Full/Responsible/Non-Disclosure. With full disclosure, users, admins, and attackers alike get the same information at the same time, meaning that you, the user, might be able to protect yourself (add a line to your WAF filter, add a block rule on your firewall, etc.).
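That "protect yourself before the patch lands" option can be as simple as a signature filter in front of the vulnerable app. A hypothetical sketch, assuming a request's query string is available as a plain string (the signature pattern below is made up for illustration, not a real rule):

```python
import re

# Hypothetical virtual-patch filter: reject requests whose query string
# matches a known exploit signature before the vulnerable app sees them.
EXPLOIT_SIGNATURES = [
    re.compile(r"name\[.*;.*\]"),  # illustrative injection-shaped parameter
]

def is_blocked(query_string: str) -> bool:
    """True if the request should be dropped by the virtual patch."""
    return any(sig.search(query_string) for sig in EXPLOIT_SIGNATURES)
```

In practice this is what WAF vendors ship as emergency rules between disclosure and patching; it buys time but is no substitute for the real fix.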

On the other hand, I note that proprietary software is riddled with 0-days (I'm thinking of Flash lately), whereas the self-proclaimed most security-oriented open-source projects have only a handful of known holes (I'm thinking of OpenBSD: "Only 2 remote holes in the default install, in a heck of a long time!").


>Only 2 remote holes in the default install

Except no one runs the default install, and these kinds of claims just incentivize making the default as sparse as possible. Things change once you deploy your stack, use SSL, etc.


You're actually making the argument that Windows is now secure and Linux isn't, because Linux gets attention around security events? I'm afraid that seems outlandish to me.


I think he is talking about smaller and less mature projects than Linux or the BSD variants. Say you're a small developer who's spent years on a new platform that will power a social network with millions of people. You might want to keep it closed source for a few years and only release the source code to a select number of people until you've let security researchers take cracks at it. Once it's had a few years of battle testing, you can release it. As opposed to releasing the source code to your social network and making it 100x easier for anyone to come up with an exploit against your network.

In fact, I'd say it might be better for a relatively new app (especially ANY APP that powers servers) to remain closed source until given the green light by security researchers. And even then...

Imagine if Facebook open-sourced the code running their social network. I guess the question could equally be ... is there ever a social good in centralizing your social network rather than letting it be distributed across all the machines in the world?

I would say security.


I don't think Facebook's business would be at risk at all if they open-sourced most of their backend code, as long as they kept things like spam/malware detection closed. Facebook's main value proposition is the existing network. Another network could possibly overtake them some day, but they're definitely not going to do it with Facebook's source; not even a heavy fork of that source.


Right. But imagine if Zuck had open-sourced the exact Facebook source from day 1 or day 100. It would have been hacked long ago, to the point of collapse, and who would that have benefited?



