Nations Buying as Hackers Sell Computer Flaws (nytimes.com)
95 points by stfu on July 13, 2013 | 32 comments



Responsible disclosure is one of the worst things to happen to the information security industry, and the sale of exploits is similarly terrible. Modern-day white hats have effectively turned into 'green hats' whose goals are no longer compatible with the previously-held ethical hacking belief system.

Personally I feel that the sale of exploits should be outlawed, or at least the sale to customers that have no intention of fixing the exploit (buying a hacker's silence), or to governments intent on conducting clandestine/offensive operations.

I also agree with the EFF's stance on exploit sales, which is that they should be front and center in any national cyber-security debate:

https://www.eff.org/deeplinks/2012/03/zero-day-exploit-sales...

Also on Schneier.com: http://www.schneier.com/blog/archives/2012/06/the_vulnerabil...

No, I'm not arguing for zero disclosure or that security researchers shouldn't be compensated for their hard work. I just think that the people and organizations that sell these exploits are effectively complicit in the destruction that results from their use.

The world of information security is slowly but surely moving towards a Cold War scenario in which nation-states foster sustained and escalating tensions, especially as they stockpile exploits to be deployed in wartime.


I don't think exploit sales could ever be made illegal in the US. Courts have already ruled that code is speech [1], and the Constitution puts restrictions on how the government can limit speech. First 0-day exploits become illegal, then The New York Times.

Exploit sales are basically a byproduct of living in a free society. If you want them to go away, find the exploits yourself and post them to full-disclosure. Or pay someone to.

[1] http://en.wikipedia.org/wiki/Bernstein_v._United_States


This is the best argument against making them illegal, in my opinion. If you believe that source code, code implementing cryptography, and privacy software like Tor and OTR are speech, then you can't set a double standard just because you don't like what exploits may or may not be sold to do.


Publishing exploits should certainly be protected as free speech, but the sale of exploits to a party with ill intent (governments included, even ours) is moving into the realm of arms dealing because that exploit is going to be weaponized. Intent matters.

If there is a transaction with weaponization as the intent => arms dealing

If it is published to edify => free speech


By that logic, having a Perl RSA implementation in your signature is arms dealing, because the intent is for that speech to spread to others who will weaponize it into encryption software. Reality already won that battle; let's not fight it again.


Can you legally sell exploits of physical systems? Say, a book containing instructions for breaking into any military installation?

Maybe it would be enough if an international treaty required all nations, and their intelligence and law enforcement arms, to abstain from using software exploits and to disclose any exploit information they acquire to system vendors for prompt correction.


Probably.

A more recent example: can you sell the NSA's confidential slides? The Washington Post apparently can.


We already make certain practical exceptions to free speech that take intent into account: slander, blackmail, death threats, shouting fire in a theater, etc.

Though I'm dubious about creating (more) wedges that could erode constitutional rights, outlawing only the selling of exploits seems reasonable. If you tell the world about a zero-day exploit for free, it still falls under free speech.


Using the exploits is already illegal.

The examples you list are really not related to sharing of ideas. The criminality of those acts centers around the harm caused, not the act of speech itself. The problem with shouting fire in a crowded theater is not that you vocalized the word "fire", it's that you caused unnecessary panic. If you had pulled the fire alarm, the same panic would have arisen, but without any speech involved. So even though the Constitution protects most uses of your body's built-in fire alarm (speech), you're not exempt from the consequences of using it to incite false panic.

Blackmail and death threats work the same way. Society has an obligation to protect its members from harm, and when you give an "early warning" that you intend to harm someone, it makes sense for society to use that information to intervene before the harm actually occurs.

Slander and libel are very tricky, mostly involving civil penalties rather than criminal penalties. If you slander someone and it causes them no damages, the government is not going to throw you in prison. I've never liked the slander/libel exceptions and did a bit of reading; Wikipedia's article on the subject says: "In a 2012 ruling on a complaint filed by a broadcaster who had been imprisoned for violating Philippine libel law, the United Nations Commission on Human Rights held that the criminalization of libel violates freedom of expression and is inconsistent with Article 19 of the International Covenant on Civil and Political Rights."

I think this is basically the right idea. Look how much time legitimate authors spend defending themselves from libel claims in the UK. Spreading this madness to the US for the perceived benefit of not being able to tell someone how to get a computer program to write to unallocated memory seems pretty stupid to me.

I wonder what happens today when you sell an exploit to someone that then uses it to cause significant monetary damage. If we're being consistent in the application of our laws, it will be the same thing that happens to the company that manufactures the weapons used in school shootings.


You could criminalize parts of the transaction without criminalizing speech. For example, there's no Constitutional right to buy someone's silence, so you could make it illegal to give someone money on the condition that they not disclose security exploits to other parties. That would preserve the freedom to disclose exploits, the freedom to disclose exploits for pay, and the freedom to be silent, taking away only the freedom to buy someone else's silence about software defects (and implicitly the right to take money in exchange for silence about software defects.)


Pay in installments; the installments end when a bug is found, whether revealed by the creator or someone else.


Let's play devil's advocate for a moment.

Why should selling exploits be illegal?

If I spend many weeks looking for an exploit and I find one, what do I get by disclosing it, to say, Google? The Chrome project does pay a little, but I could probably get more in a bidding war. If Google wants the exploit, it should be able to outbid the other people who are willing to buy it.

Time is money after all, and by disclosing an exploit I worked so hard to find I'm basically giving it all away to Google... and for what? For ethical reasons? What ethical reasons? Google is not a person, it's a corporation.


People use Chrome; that is the ethical issue. As long as we're not pretending that we're ethical hackers, that's fine with me. I find it extremely hypocritical to claim to be an ethical hacker and sell exploits at the same time.


Well, people look for exploits in popular software, so pretty much anything that is being exploited is being used by people. It's Google's job to secure Chrome, not the job of the people out there looking for exploits.

Why would you work for free (or almost for free)? Do you not need to pay for food and shelter? Frankly, I don't see why finding an exploit and then selling it to someone is unethical. (By the way, you can replace Google with Microsoft or your other favorite corporation.)


I swear I'm not trying to be rude or snarky here, but do you also believe that manufacturing and selling weapons (guns, ammo, aircraft carriers, whatnot) should be illegal?


If you take that analogy seriously, a lot of current exploit sales would already be illegal, even without restricting weapons sales further (which I personally think should be done, but that's another discussion).

For example, you cannot generally export weapons to another country without a license. In the U.S. you cannot even sell weapons across state lines without a license. Many countries also restrict the sale of even low-level devices likely to be used for sabotage, e.g. Molotov cocktails, and that may extend to marginal sabotage tools like spray paint as well. Burglary tools are another example of a restricted category with some analogies.

Lots of what the "security industry" does would land its members in jail in any other field, even something as low-level as selling burglary tools to car thieves. Whether it's a good thing that it doesn't, I'm not sure. The people taking money for exploits in a "don't ask what they'll do with it" kind of arrangement have a pretty dark shade of gray on their hats.


Right, so why is nobody instead saying that exploit sales should be regulated? Why are most people drawing such stark distinctions between exploit weapons and meatspace weapons?


Meatspace weapons are easy to detect, since most if not all modes of transportation between nations are monitored and recorded, while exploit weapons are quite impossible to manage in the same fashion.

Decentralized and obfuscated means of communication, for example Tor and Bitcoin, make it easy to engage in this trade at scale.

Meatspace weapons of significant destructive capability are also large and bulky.


That doesn't explain why they need to be completely outlawed rather than more strictly controlled, or handled by some other middle ground.

Also, if exploits are "quite impossible" to manage like regular weapons (and I agree that is probably the case; see how crypto export controls worked out), it cuts both ways: any laws outlawing trafficking in them will be impossible to enforce.


I agree that they would be impossible to enforce, but there are quite a few nations across the world that have achieved minimal gun ownership and relatively high levels of safety.

They have cultures that emphasize an open and diplomatic dialog with the community, and they support trust and reciprocation between law enforcement and common people. Meanwhile the most heavily armed citizenry see it as an existential necessity, which breeds distrust and fear.

Consider for a moment the decision-making process between two rational individuals who pose an equal threat to each other but are deeply suspicious: if one advocates disarmament, the other will view it with suspicion. In actuality the danger is non-existent, but (for example) an ignorant third party under the protection of one would screech at the very idea of de-escalation.

My point is that weapons control is as much about perceived safety as it is about defensive and offensive capabilities.


No, absolutely not. There are many instances in history where fear and uncertainty were used as political tools to disarm and otherwise control 'untrustworthy' or 'unreliable' people. I'm sure you can imagine what usually follows that line of thought.

I have a hard time reconciling that with my feelings about information security, and I can see how this would contradict my statement above, but I have different feelings about information security and physical security (i.e., in information security there is a very fine line between offensive and defensive capabilities). You and anybody else are welcome to convince me otherwise.


Exploits are not weapons, and actual weapons are very, very strictly regulated.

Do you think you should be able to disclaim any responsibility after selling information about faulty material used in bridges all over the world to third parties with unknown intentions?


"...Personally I feel that the sale of exploits should be outlawed...to governments that are intent on conducting clandestine/offensive operations..."

You expect governments...

to outlaw the sale of exploits....

to themselves?


Yes, absolutely. If we're going to shout to the world that we have the moral high ground, then we should put our money where our mouth is; otherwise we're just two-faced crooks and liars.


I'm surprised Endgame Systems isn't mentioned. As far as I can discern, they're one of the largest purveyors of zero-day exploits to the IC.

They sell a target-mapping tool known as Bonesaw, and for a couple million a year you can upgrade it with zero-day exploit packs for every region on the globe. It's system compromise made point-and-click.

EDIT: Should have read the second page. I'll leave it up as a testament to my haphazard reading ability.


It was interesting to see that some of our own Silicon Valley folks are on Endgame's board. Notably, David Cowan from Bessemer and Ted Schlein from Kleiner Perkins.


Absolutely. I'm DC based, so it's more "your" than "our."

But you'd be equally surprised to see the flow of money that moves in the opposite direction. In-Q-Tel, the non-profit venture capital firm of the CIA, has invested in many Valley companies including Facebook and Palantir.

Now, I've heard that In-Q-Tel funding does not imply any obligation toward government interests in the future. However, from an organization whose stated mission is "specifically to help companies add capabilities needed by our customers in the Intelligence Community," such relationships pose interesting questions.


Exploits are not weapons, they are research. They are the discovery of the correct sequence of bytes that someone else's program has been written to accept as input and perform actions that the author (probably) did not intend. Exploits are not any act of doing this, they are the knowledge of how to do it.

There is no physical analogy. A bomb doesn't care what its target is; it's an indiscriminate destructive force. You cannot do something similar with an exploit. The exploit only has context within the rules set forth by the programmer of the target software.

Just because I know which inputs you accept that cause unintended behavior, that you put there and I had nothing to do with, does not mean I have "created" something, positive or negative. Fundamentally, exploits are not created, they are discovered.
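
To make that concrete, here is a minimal sketch (a hypothetical C program, not anything from the article). The vulnerable behavior is entirely a rule the author wrote into the code; the "exploit" is nothing more than the knowledge of which input triggers it:

    #include <stdio.h>
    #include <string.h>

    /* The author wrote the rule: copy input into a 16-byte
       buffer with no bounds check. */
    void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);   /* overruns buf if name is longer
                                than 15 bytes */
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            greet(argv[1]);  /* the "exploit" is merely knowing
                                that a long argv[1] corrupts the
                                stack here */
        return 0;
    }

Nobody "created" the overflow by observing it; it was sitting in the program all along, waiting to be discovered.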

This discovery process is expensive, and it's easy for you to say that it's ethical for me to share what I've found for free. I think that's arrogant and disrespectful, and I also think that criminalizing my knowledge is far more dystopian than anything you think my knowledge might be used for if I share it with someone else.

I don't like what some governments are doing with them or that my tax money is feeding the industrial complex, but such a fundamental attack on freedom cannot be tolerated.


Apparently, regulation of exploits has made it into the 2014 NDAA: http://blog.erratasec.com/2013/07/thanks-eff-for-outlawing-c...


Quite frankly, I also blame the criminalization of responsible bug reporting.

For example, Barrett Brown ( http://www.democracynow.org/2013/7/11/jailed_journalist_barr... ) is facing jail time.

Others have tried to report security issues and been threatened or charged with hacking. (I will let you google for examples; there are quite a few.)

At DefCon there have been cases where presenters were threatened with charges for reporting security issues.

Selling the vuln "solves" the problem for the responsible hacker.

I know that if I found a bug, I would be unlikely to report it. Too dangerous.


It's sad but accurate. The safest route for the reporter is to sell to people who want to attack their enemies: the vendor won't bully you, administrators won't attack you, and you're shielded from criticism.

It's unfortunately common for administrators to believe that security is a state of perpetual ignorance of the vulnerabilities that affect them: if Windows Update says "No updates available", then you're secure. If you tear them from that blissful ignorance, they get angry and shoot the messenger.

Security researchers do not introduce insecurity, they expose it. This is apparently a very hard concept to grasp for a lot of people.


Barrett Brown wasn't really involved in bug hunting. He is charged with posting a link to an email dump (hacked by someone else) that contained financial data, among other things.

You're probably thinking of weev, although what he did wasn't exactly "responsible bug hunting" either, if you follow the common definition of that term.



