Zero-day Exploit Price List (forbes.com/sites/andygreenberg)
208 points by mef on June 16, 2012 | 118 comments



This is a great idea. Directly and publicly monetize security bugs. This is technical debt in terms of real, cold, hard cash.

As a community it makes sense to embrace this. As vendors (and especially people who develop apps for walled gardens) start seeing real-world feedback on platform security, we can all make more informed choices. It also incentivizes the hell out of companies to make their stuff more secure. Terrific concept. This shouldn't be some kind of dark grey-market site. It should be on a web location as visible as eBay (owned by somebody with no skin in the game).

For those of you arguing that such information can and does kill people, I feel your pain. But you can't hide knowledge. There will be a market whether or not there's a Forbes article about it. The only difference is whether you know about these vulnerabilities or you don't. A big, public market lets everybody see how crappy the things we use are. A secret, government-controlled market keeps all of that critical information away from the very people who need it. If the Syrian government is using security exploits to kill dissidents, all the more reason to let the sun shine in.


Just as important as the 0days themselves is keeping public who bought them. 0days, in many cases, are weapons. By keeping this market and its transactions in the open perhaps we can keep the purchasing parties a little less evil, or at least drive up their costs.

This market, to me, looks very similar to the weapons industry. The more open it is, the better for everybody. The other commenters are correct: this market isn't going anywhere. We may as well shed light on it.


> Just as important as the 0days themselves is keeping public who bought them.

That sounds unrealistic to me. As soon as that information is made public, another market will instantly appear where it is not made public, and most buyers will switch to that new market; sellers will follow.


The more public the market is, the less effective it is. WabiSabiLabi tried (and failed) to create an eBay for exploits years ago. The item of value is the information, not necessarily the code. As such, the more information is provided gratis, the less it is worth overall. In theory, a nice principle, but it is directly opposed to the forces that drive the market.


I don't see how you can reveal the buyer. It doesn't take much effort to buy through a proxy entity.


Disclaimer: I know people referenced in this article.

Whatever your views on the morality or ethics surrounding this market, the fact is that it exists and isn't going away. In fact it has existed for a long time (I certainly remember exploits being traded, bought and sold in the '90s and early 2000s) but the thing that's new-ish is the presence of numbers in the public eye.

Charlie Miller's paper on the 0day market[1] provides an example of what happens when someone has a lack of market information (they lowball and sell the bug for less than it's worth) in this space, and might be of interest to people who enjoyed this article.

[1] - http://securityevaluators.com/files/papers/0daymarket.pdf


When there's demand for a product or service, supply will follow if it's profitable(1) to sell. Banning something, or shaming people away from doing it, doesn't eliminate demand. Supply reduces, cost increases. If costs get too high, demand vanishes, and so does supply. It functions like any other market would.

To squash a market entirely requires extremely strict laws, punishment, and enforcement. Even then, it's impossible to destroy some markets if enforcement cost is too high, or is incompatible with the rules of a society (e.g. it's un-Constitutional).

For example, the U.S. spends billions each year on the war on drugs, and yet I could probably buy some right now off the person sitting next to me in this coffee shop. If $500 billion was spent yearly on enforcement, and police could do random searches of persons and property at any time without a warrant, drugs would dry up. But at what cost? The loss of many of our rights, plus extremely high taxes, followed by an inefficient society spending so many resources on, well, "You can't stick that pill in your mouth." We'd become a military state with little else to offer, stagnating while the rest of the world surpasses us.

Black markets can be risky to engage in. If the risk of getting caught buying or selling an exploit was, say, 24/7/365 physical torture for 10 years, most people probably wouldn't do it. But a few would remain if it is worth the risk to them, or if they fail to assess risk (e.g. they don't comprehend it, or they ignore it; "It won't happen to me!"). The black market would "harden".

Mexican drug cartels hardened with guns, violence, secrecy, corruption, torture and death. You could, in theory, calculate how many humans died in the Mexican drug wars in 2011 per joint you smoked. That cost was built into the price you paid for the drugs. This "death cost" goes away if you legalize it.

So you're absolutely right; markets for software exploits will not go away unless they become unprofitable, or not worthwhile. Right now there are few laws, if any, banning it (aside from extortion, treason laws, etc.). Since many are vocally against it, they only have a few options to "prevent" it: shame those who do it, buy them out ($$$), race them (white hats), or propose legislation to ban it (good luck). This market is likely here to stay for some time. If it remains legal and becomes accepted by the general community, more people will do it, prices will come down, and so will earnings.

(1) Profitability can be defined as anything the supplier receives in return for their product/service which they deem "worth" something. It doesn't have to be money, but could be good feelings, increased social capital, learned knowledge, etc...


> Mexican drug cartels hardened with guns, violence, secrecy, corruption, torture and death. You could, in theory, calculate how many humans died in the Mexican drug wars in 2011 per joint you smoked. That cost was built into the price you paid for the drugs. This "death cost" goes away if you legalize it.

For the record, you can do this with any product from any industry where fatalities ever occur during production and distribution. For example, the cost of teamsters' deaths during delivery has been factored into the cost of products for hundreds of years. Whether that death occurred because of overwork, robbery, or modern-day road accidents, it's been accounted for.

People die of heatstroke farming the food you eat and the coffee you drink. And people die getting you the weed you smoke. I'd like to see data comparing them.


You're right, but saying "someone else would do it if I didn't" is a pretty weak rationalization. They're making themselves rich at the expense of everyone else. They're a leech on society.


> They're making themselves rich at the expense of everyone else. They're a leech on society.

No they're not, on both counts. They're not making themselves rich at the expense of everyone else. Their major customers are governments, who are in no rush to make their own purchasing patterns illegal. They're taking part in an active established market. Immunity have been doing this publicly for over a decade, with the difference being that anyone can buy Canvas.

The simple solution (which works in favour of the exploit dealers too btw) is to use a layered approach to defences that make it more expensive to develop an exploit. That's what Microsoft have been doing since Vista. There are now so many hurdles you have to jump through for a server-side remote code execution bug that for most people it's just not worth it (given that you'll have to chain exploits more often than not to bypass protective measures), which is partly why client side bugs are becoming more common.


Eh. Two much more important factors militating for clientside exploits:

* The client-side attack surface is, probably by many orders of magnitude in any metric you care to use, more complex than the serverside attack surface. Look at the kinds of libraries that have been long-term thorns in the sides of developers and security teams --- image codecs, font libraries, compression --- a big chunk of everything that goes on your computer screen can be influenced by attackers.

* The client-side attack surface includes multiple programming languages hooked up to anonymous content (the most important being Javascript), and so clientside exploits have significantly better tools to work with.

Not to take anything away from your point; I'm glad you're injecting some sanity into these threads.


You're absolutely right on both counts, and thanks for the comment.


On a related note re: client vs. server: take a recent incident that was in the news, when the Brits pwned a pro-AQ forum. From that vantage point, the best thing they can do is to target the admins, moderators and heavy users -- with client sides. Probably more than one, since it is unlikely that a single exploit would be effective against every target. The valuable intel is going to come off those users' boxes, not off some semi-anonymous VPS shard. Logs of Tor exit nodes, googlebot, and proxies reveal nothing interesting. From a certain perspective, it makes sense that there just isn't much value to be had from servers, and so there's reduced incentive to pay high prices for server exploits.

Not to mention that gaining access to that server would probably be fairly simple given the atrocious security standards of most web hosting companies. CPanel, a pilfered ssh key, SQLi, PHP bugs in the forum software, rent a VPS on the same host and LPE... I hardly need to tell _you_ how many alternative (cheap) ways exist to gain access to the server. (And this is assuming that they aren't running their own colos and web hosts a la http://www.schneier.com/blog/archives/2008/10/clever_counter...)

Given the relative ease of access to servers and the poor quality of intel stored on them, it's no wonder that the market focus is on client sides. Finally, it's worth mentioning that most (all?) of the servers with interesting data on them are in the legal jurisdiction of the US (just ask Kimble, ha!). Accessing that data requires a sternly worded letter on official letterhead -- not an exploit.

So, not to detract from either of your points; but there is another angle to add to the mix.


Well, client-side attacks are great because they typically rely on the naivete or indifference of the user. And the client-side attack surface is typically protected to a lesser degree than a server. A well-orchestrated spearphishing attack is tough to defend against, even for a security-conscious user. The attack surface is just so large.

However, the meat on the bones is really on the servers. If someone pops my desktop at work, they won't find much valuable data. But they will be able to keylog me, grab admin password hashes, arp-spoof etc. Still, no data. But what they will get enables them to access our company files and databases in short order.

In essence, client-side attacks in the corporate world are definitely targeted at server data, while in the consumer world, they're targeted towards identity theft or botnet creation.


This is the gov world though, where the interesting information is things like your address book, your emails (the content as well as the senders/recipients), your private keys and passwords, etc. etc. Client sides provide direct access to those things (or at least, a means of obtaining them).

There are very few governments that care about what is on your company file server or in your company databases. (Ignoring the elephant in the room on that one.)

Law enforcement agencies keep huge Access databases of the contacts they extract from cell phones taken from criminals. They share this intel with each other via email (I know, I know...). They can discover a great deal about who is involved in an activity and where they are on the totem pole from just this data. It's even possible to identify people by correlating the contents of the "name" field, using the phone number as a unique ID. Criminals tend to have poor OPSEC.
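(To make the correlation idea concrete, here's a toy sketch with invented data -- in C for brevity, though the real thing apparently lives in Access. The point is only that the phone number acts as the join key across devices:)

    #include <stdio.h>
    #include <string.h>

    /* Toy illustration: contact lists pulled from two seized phones.
     * The "name" fields differ (nicknames, misspellings), but a phone
     * number is a decent unique key, so matching numbers across devices
     * links the aliases to one person. All data here is invented. */
    struct contact { const char *phone; const char *name; };

    static const struct contact phone_a[] = {
        { "+15550101", "Big J" }, { "+15550199", "Mom" },
    };
    static const struct contact phone_b[] = {
        { "+15550101", "Javier (supplier)" }, { "+15550142", "Lawyer" },
    };

    int main(void) {
        for (size_t i = 0; i < sizeof phone_a / sizeof *phone_a; i++)
            for (size_t j = 0; j < sizeof phone_b / sizeof *phone_b; j++)
                if (strcmp(phone_a[i].phone, phone_b[j].phone) == 0)
                    printf("%s: \"%s\" <-> \"%s\"\n", phone_a[i].phone,
                           phone_a[i].name, phone_b[j].name);
        return 0;
    }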


I don't think it's safe to assume that government simply means spying on individuals for national security reasons. Governments engage in corporate espionage all the time, and not just China.


In a way, these "leeches" are providing free pen-testing, and publicizing the fact that software is cheap to exploit. If this drives the markets to invest in security software, I think it's a net win.


Could you explain your reasoning a bit more? I am not following from "individual invests thousands of hours into their passion; some are compensated for their work by people who value their skills; those individuals are leeches on society". I think there is a step or twenty in there that you could expand.


They're making an explicit decision to reap a larger payday by selling the exploits to governments or other companies rather than disclosing them to the original application authors for the standard bug reward.

The sellers have no way of determining how the exploits will be used. The mere fact that buyers are willing to spend so much on an exploit indicates they are not just collecting them out of idle curiosity. Even if we could completely trust the buyers not to misuse or share information about the exploit, the original bug remains unpatched for others to independently discover and exploit.

The sellers are willing to inflict damage on everyone else so they can benefit. That sounds like leeching to me.


> They're making an explicit decision to reap a larger payday by selling the exploits to governments or other companies rather than disclosing them to the original application authors for the standard bug reward.

I don't know you, but I get the impression that you've never gone through the bug reporting process from a bug hunter's perspective. Some places do offer bug bounties, and of course you have the usual ZDI, pwn2own etc. that you can go through, but from my own personal experience I've been ignored, threatened with legal action, and dragged into a quagmire of free IT support (because the manager handling the bug won't let me speak to a developer and doesn't understand the bug), amongst other things.

On the other hand, finding a bug isn't hard, but developing a reliable weaponised exploit that works repeatably against multiple targets can be a heck of a lot more work.

My own personal view when it comes to disclosure is 'finders keepers'. It's my bug, I found it. It's not worth my time weaponising it to sell on the black market, and it's too high-risk for me personally to be associated as being active in that market; it's only worth weaponising to the point where I can use it in future on pentests and help customers implement workarounds.

> The sellers are willing to inflict damage on everyone else so they can benefit. That sounds like leeching to me.

s/sellers/buyers/


s/sellers/vendors/ ... let's not forget who created the bugs in the first place, then failed to find and remove them, and finally shipped a dangerously malfunctioning product! (Alien Invaders from Mars -- http://www.antipope.org/charlie/blog-static/2010/12/invaders...)

[edit: more pithiness]


Responsible disclosure (where the vendor is notified first) has proven to be an unmitigated disaster. Vendors simply ignore the vulnerability report as long as possible. The only way vulnerabilities get addressed is when a PoC is created and publicized.

Buyers of exploits (at least those who aren't blackhats/criminal enterprises) generally intend to use them for their security services/applications. They have to have the latest exploits otherwise they can't protect their clients.


If someone is willing to spend (e.g.) $1,000,000 on an exploit on the black market, but the software developer is only paying $50 (or nothing!) for people to report exploits, don't you think that something is wrong with this picture?


That's because the costs of a security vulnerability are externalized.


In reality, exploit sellers and exploit buyers are engaged in discovering the value of security exploits. That the value of those exploits might be pinned to unethical, immoral, unlawful, or belligerent conduct is irrelevant; markets have to operate in the real world, and we cannot stipulate that the bad actors absent themselves from the real world.

So while I personally find the sale of exploits distasteful†, I think Soghoian is in the weeds with this argument about exploit developers being "modern merchants of death". Exploits are nothing like conventional munitions. They're extremely scarce and their extraction from software imposes no intrinsic costs on the rest of the world.

In other words: vendors can simply outbid intelligence agencies for their bugs, or, better yet, invest more heavily in countermeasures to moot those bugs. Unlike guns, which can be manufactured so cheaply and at such a scale that no one organization could hope to stem the tide with markets, vendors can stop immoral abuses of their own software simply by participating more actively in the market.

$200,000 sounds like a lot of money, but it's under the cost of one senior headcount at a major software vendor, and vendor cash flows are expressed in high multiples of their total headcount cost. The higher the prices go, the more incented vendors are to stop vulnerabilities at the source.

Even today, the whole technology industry is captivated by the misconception that vulnerabilities somehow cost some fraction --- maybe 1/3, maybe 1/4 --- of a senior full-time dev salary. After all, they're generated by people who would otherwise be occupying that kind of headcount. And for the most part, that misconception has been bankable, because the best exploit developers almost as a rule suck at marketing themselves.

Every other price in the application security field follows from this misconception, from headcounts and org charts at vendors to assessment budgets to shipping schedules for products to the salaries of full-time application security people.

It's all built on a misconception; that misconception creates a market inefficiency; people like (allegedly) the Grugq are arbitraging on that inefficiency. But the solution to a market inefficiency is to eliminate it, not, as Soghoian implies, to install umpires around it and erect bleachers and a jumbotron so we can watch it more carefully.

I see this story as evidence of chickens coming home to roost, not as some dangerous new ethical lapse on the part of the security industry.

† This is an easy moral stance for me to take because I don't invest any serious time into developing exploits for the targets on this price list.


I disagree: the more demand there is for exploits, the more exploits there will be. If there is enough demand for them, we will even start to see employees on the inside of these companies purposely creating them.

Companies do not directly lose money if their products are exploited. How many thousands of exploits have been developed for windows? They're still doing just fine.

Software is buggy and exploitable by its very nature. The cost to secure a large software project is orders of magnitude higher than the cost to find a flaw and exploit it.

By participating in an open market for exploits and greatly raising demand for them, the government is making us all less secure. "This is why we can't have nice things".


Exploits do not come out of nowhere. They can't be scaled with demand.

The fundamental moral problem with the market isn't the value being imputed to exploits; it's the lack of value imputed to resilient software.


> Exploits do not come out of nowhere. They can't be scaled with demand.

Actually they can and are[1]. Not so much the exploit dev bit, but the bug hunting is getting more automated.

[1] - http://www.scribd.com/doc/55229891/Bug-Shop


The point I'm making is that people have to create defects in the first place. Contrary to some claims on these threads, most code does have a finite amount of exploitable defects.


Ah right, got you now. I was referring to the scalability issue.

Of course the great thing about code defects is that updates are just as good at introducing new bugs if the developers don't have proper security processes in the first place.


The large strategic moves that major vendors like Microsoft, Adobe, Google, and especially Apple (with the iOS platform) are making seem to be doing a good job of killing whole subclasses of vulnerabilities, and of driving up the cost of exploitation (above and beyond flaw discovery).

Your point about software maintenance introducing a continuous stream of new flaws is well taken, but ultimately I think vendors who take this problem seriously are in a very good position to do something about it.


You're right. The bigger boys are in various stages of getting it together, it's the ones that don't seem to have immediate column-inch impact (Oracle, SAP etc.) that aren't quite there yet, and then you've got everyone else who lack the resources or interest to pull it off.


And again, economics is firmly in our corner here, since the effort to build exploits for exotic targets isn't that much less than the effort to target e.g. Android... but the incentive to build those exploits is far lower.


> Exploits do not come out of nowhere. They can't be scaled with demand.

Why not? All large software projects have flaws. Doesn't more demand for exploits mean more people are going to look for and find them?

> The fundamental moral problem with the market isn't the value being imputed to exploits; it's the lack of value imputed to resilient software.

I think it's both. People shouldn't be selling exploits to entities that will use them offensively. And vendors largely don't care about security as much as they should.


More demand does cause more people to look for exploits. But since there's a finite number of vulnerabilities to be extracted from code, I'm not sure how that's relevant.


I do not understand this. You seem to be saying that the more efficient and open the market is, the greater the demand -- even to the point of having employees create holes on purpose (one supposes to be able to sell them?)

But couldn't employees already create holes on purpose to sell them? With an open market, perhaps I know that there is a bug in IE that involves flash and allows easy access to root. It just sold at 4 million bucks. With a closed market, I may suspect the same thing, but I don't really know for sure. The thing is, the vulnerability, the market, the sale, and the exploit still exist regardless of whether I know about it or not. The only question here is whether other people are in on what's going on in the marketplace.

"The cost to secure a large software project is orders of magnitude higher than the cost to find a flaw and exploit it."

Yes, that is the current state of affairs. But the current state of affairs is that there are all sorts of vulnerabilities that the average person doesn't see. It's not the cost to secure a large project; it's the relative cost to the customer base of the exploit versus the current margin to the software provider. That's the way it should have been working all along. If you sell me a product for a buck and it steals my bank account -- or even if there is a one-in-a-million chance of it stealing my bank account -- I'm not buying it. Right now vendors create walled gardens and put everything in there. What probably should be happening is that separate physical devices should handle different types/values of things. My iPad should probably never both run Angry Birds in Space and control my brokerage account. That's simply too many eggs in one basket. Vendors get away with this because they are trying to hide all of those risks. It's my belief that this practice has to stop. Immediately.

Because as technologists we love to generalize we are always trying to create multi-purpose walled gardens. But that's not the way anything else works in the world. My wallet does not also function as a gaming device, something I wave around to exercise with, and a device for meeting girls. I don't take all the physical cash I own to Starbucks and build little towers out of it. We keep things in physically separate areas for a very good reason -- it decreases risk. (And we accept various kinds of risk for various kinds of things) Opening up this market will only cause an evolution that has needed to occur for a decade or more: the end of the general-purpose computer.


> But couldn't employees already create holes on purpose to sell them?

Most corporate programmers would have no idea who would buy such a thing or what the right price is. Making that market clear and making transactions easy should increase production. That's what every commodities market does. As an example, consider the Chicago Mercantile Exchange, the early history of which is well described here: http://www.amazon.com/The-Merc-Emergence-Financial-Powerhous...


Thanks for the link. Remarkable Amazon overview: "Nor does the author offer any particularly illuminating perspectives...his frequently fawning account of the CME's origins and first 50 years as an arena for commercial hedgers and venturesome speculators amounts to little more than a family album in which forgotten knaves are as fondly and foolishly remembered as hitherto unheralded princes and their lightweight aides. Remarkable mainly for its consistently graceless style"

Ouch. Remind me never to have that guy review any of my books.

I understand what you are saying, and I understand why it's feasible to keep it small, concealed and restrict participants to certain customers. At least early in the game.

What I'm saying is that this state of affairs is temporary at best. Forbes is out with it. There will be many more articles. The prices are already in the 6-figure range. Soon they'll be at seven figures. No matter what we'd like the market to be like, any programmer with Google access should easily be able to determine he could make himself a millionaire just by releasing a vulnerability into the wild. Whether that information is easy to find right now or not is moot. It'll get easier. We're all connected. Supply meets demand. No amount of wishing it weren't so is going to change any of that. Works this way for illegal drugs, will work this way for security vulnerabilities.

I think the question here is whether to shun, outlaw, shame and hide this kind of stuff or to embrace it. In my opinion, we have enough examples that the first choice doesn't work so well, where the second choice benefits the rest of us even if we find the entire affair distasteful.

But I believe the greater point is that there are so many people affected by this hidden market that keeping information from them should be a crime. Yes, I wish that we could live in a world where we could slap a big old Google, Microsoft, Amazon, or Apple logo on something and know that it is safe. But that world doesn't exist and it's never going to exist. Might as well start living in the world we find ourselves.


Yeah, the book is definitely an in-house history, but it does a fine job of showing how a commodities market emerges and the way it shapes commerce as long as you skim a little.

Illegal drugs are a poor analogy; there are a lot of participants in the market, a lot of small transactions, and it can be a victimless crime. If you are looking to buy a little weed, your friends probably don't care.

A better one is high-end weapons. E.g., missiles. That's a market that's relatively small and obscure, and the prices are high. State actors can get away with trafficking, but individuals run a substantial risk of running across sting operations and other law enforcement activities. Further, as long as the market is widely reviled, random citizens are likely to report suspicious activity.


So long as there's a significant penalty for those actually writing the software to participate as sellers in this market, I don't see why it can't be public. However, I don't think it would be a good idea to allow someone who is writing the software to cash in, without penalty, on million-plus-dollar bonuses for back doors he writes himself. That seems analogous to insider trading.

Forfeiture of all money gained as such and a stiff jail sentence should be enough to discourage any but those who are already doing this without public knowledge of the market.


The more recent book _The Futures_ covers the same subject and has most of the same flaws, excepting perhaps that it takes a more measured tone about the schemers who rigged the early commodity markets.


Nonsense. The demand isn't created by governments seeking to purchase the exploits; it's created by criminal organizations willing to use exploits for botnets, identity theft, and other monetization models. That ZDI, VUPEN, etc. are purchasing exploits only improves security.

View it like this. The exploits are out there, discovered and used by black hats. The idea that a whitehat security researcher is in any way decreasing the level of security by selling an exploit to Google, or Apple, or any third party, is silly.

The more security researchers that are evaluating software, the more secure we'll be. Zero days will always be around, simply because the incentives are still favoring the blackhats. Until that's reversed, we'll be playing catch up.

The government buying exploits has nothing to do with why we can't have "nice things." We can't have "nice things" because this is a hard world, with people who won't blink to take advantage of poorly designed and poorly coded software.


I am not an expert economist, but I do think suggesting that Google simply pay $200k for exploits would fix things is far too simple a solution. It would simply make the exploits even more desirable, and make selling on the black market even more lucrative.


Exploits are already desirable. Google paying a respectable sum for them won't make them less desirable.

An exploit for a widely used application, especially a client-side application with a good reputation like Chrome is extremely valuable to someone wanting to create a botnet etc. There's no way to avoid bugs in software, but rewarding security researchers can help mitigate the risk. And security is all about mitigating risk in a cost effective manner.


Exactly this.

If Google started paying $200k to match the spot price on the exploit market, the market would react by pushing the price up. Soon Google would be paying $300k, then $400k.

But at some point that price appreciation has to stop, because there will no longer be counterparties who see $500k or $700k as a rational price to pay for an exploit.

When that happens, one of the legs of the vulnerability market will get knocked out; the market will have discovered some approximation of the true value of an exploitable security vulnerability (again: that value is based on immoral behavior, but reality doesn't care about that). Google will pay it, because it can, because the final price of exploitable vulnerabilities is certainly a tiny tiny fraction of their total overhead, and because Google has an advantaged long-term position that will enable it to control the supply of exploits and eventually bring its costs back down.


>One of the most vocal of those critics is Chris Soghoian, a privacy activist with the Open Society Foundations, who has described the firms and individuals who sell software exploits as “the modern-day merchants of death” selling “the bullets of cyberwar.”

Are there any documented cases of malware killing someone? All this cyberwarfare stuff seems a little overblown.


Syrian activists are being identified and killed right now, amongst other means, through smartphone spyware.

In Myanmar, this kind of thing has circulated: http://www.crime-research.org/news/05.10.2007/2928/

Another virus, sent to the Dalai Lama's office, was used to track Tibetan sympathizers, probably by the Chinese government.

There are REAL cases where REAL people's lives are put in danger. Everyone uses email now, and usually does so on insecure platforms.

I almost support the person in this article, but I wish he would sell only to Europe and the USA out of ethical concern instead of simple profit optimization, because otherwise he has the same responsibility as an arms dealer selling weapons to Syria.


How much more ethical are Europe and the US?


It may make you laugh, and there are serious offenses (cough Wikileaks cough), but freedom of speech and of political opinion IS taken seriously, opponents are not "disappeared" on a regular basis in the EU and US, and they do not outlaw means of circumventing surveillance.


We tend not to bomb our own cities.


I don't think that by itself is enough to claim that one state is more or less ethical than another.

Another metric you could use is who causes the most deaths, in which case the US is pretty high up on the unethical list.


With Flame and Stuxnet in the picture, cyber warfare looks a lot more real and physical: from disabling the production of nuclear weapons to the (highly unlikely) extreme of blowing up reactors. That's a little more than a bullet.


I'd still call it less than a bullet. You can also potentially use a wrench to blow up a reactor, but ultimately I wouldn't call a wrench a powerful weapon. (That distinction goes to the reactor.)


I would say the "blame" is transferable indefinitely. The reactor is not responsible either; the enriched uranium is responsible instead for the damage. But then the uranium is not responsible either; the atoms that fission are, or the momentum of the atoms can also be blamed. There has to be a line drawn somewhere.

A wrench can blow up a reactor, but it is not made specifically to do so. Stuxnet, on the other hand, was specifically made to damage the enrichment centrifuges. Hence it's a weapon and a wrench is not.


A wrench is a weapon in the war against loose bolts. However, most people would probably call it a weapon only when wielded with the intention of hitting another person.

I don't think it's fair to characterize the intention of stuxnet as blowing up reactors. From what I've read, the purposeful damage it was designed to inflict was to disable uranium-enriching equipment. I don't recall reading anything about purposeful attempts to use the software to kill or wound.

That's where I'd draw the line: purposeful killing. So I'd describe this as a case of cyber-sabotage -- not a case of cyber-war.


Here's an attempt at an outline of an argument that Stuxnet was used to kill or wound. Note that I don't necessarily hold this as my belief.

1. Stuxnet was designed to slow Iran's progress toward developing their own nuclear power (and weapons).

2. Nuclear power is a cleaner alternative to burning fossil fuels.

3. Fossil fuels are the cause of many deaths through pollution, mining accidents, and wars over oilfields.

4. Therefore, by delaying Iran's use of nuclear power, Stuxnet resulted in an increase in killing or wounding, via wars over oil and pollution.

> That's where I'd draw the line: purposeful killing. So I'd describe this as a case of cyber-sabotage -- not a case of cyber-war.

Sabotage can be a tactic used in an ongoing war.


Read Richard Clarke's _Cyber War_. There are some tech errata, but it's still a good introduction to the subject.


Depending on your view, Chris Soghoian is a transparency activist fighting for the rights of others, or a bit of a troll who doesn't know what he's talking about. I err towards the latter rather than the former, but YMMV.


Yeah, there are far more people working on software actually meant to kill people, like weapon systems.

I think the exploits are probably most useful for spying.


The fact that it hasn't killed anyone is what makes it so effective. If the US or Israel had bombed Iranian nuclear facilities it would likely have resulted in many deaths, but Stuxnet was effective in setting back the Iranian nuclear program without any deaths, and with less threat of war.


A good point. If the nytimes article is to be believed, Stuxnet was actually used to talk Israel out of performing more physical attacks.

I don't know about anyone else, but I'll take an infected computer over gunfire any day.


The chief Iranian computer scientist in charge of managing Stuxnet was killed by a magnet car bomb attached by a motorcyclist.

http://www.usatoday.com/news/world/story/2012-01-11/iran-nuc...

No exploit > No Stuxnet > No Death


While I think there are clearly cases to be made linking cyber attacks to physical violence and harm, I'm not sure this is one of them. This was a chemist, not a computer scientist, and it's plausible that Stuxnet slowed more violent/coercive efforts against Iranian scientists.


That article mentions Stuxnet once and draws no relationship between the killing and the virus.


Nation-states that employ offensive cyber operations will NOT stop at only targeting computer infrastructure. Many technologists/hackers naturally like to separate the world into two spheres (the so-called "real world" and the "online" world), somehow thinking that they are above the physical fray. The truth is that hackers and security professionals on all sides will increasingly expose themselves to physical attacks like this. Any militarily sound employment of cyber warfare will include a physical attack component, whether covert or overt, depending on the current stage of the conflict.


That still sounds like someone using a conventional weapon to kill someone; it's a pretty big stretch to compare malware to a bullet.


Intelligence is a major part of warfare. It's a lot easier to assassinate people if you have good means of finding the people you want dead.


Well, I have no problem with calling it cyberintelligence, or even cyberespionage, but just because espionage is part of warfare doesn't make it warfare.


One time I was working on a MapReduce that processed a lot of XML found on the internet (I can't remember why anymore (edit: I remember now why I can't remember, it was a friend's program that I was helping out on)) and I found it crashing on some input. After some examination of my code I traced it to a bug in libxml (which is also used by Chrome, Safari, and others). I simply reported the bug to the appropriate parties and it got fixed. It's funny to think that the author of that bogus XML file had gotten the syntax wrong in a way that would've been worth thousands of dollars!
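(Not the actual code from that job, but a sketch of the hardening pattern the story suggests, assuming POSIX and libxml2; the function name is mine. Parsing each untrusted document in a throwaway child process means a crash inside the native library costs one record instead of the whole worker, and the offending input can be set aside for a bug report:)

    /* build: cc isolate.c $(xml2-config --cflags --libs) */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <libxml/parser.h>

    /* Returns 0 on a clean parse, 1 for merely malformed XML, and -1
     * when the parser itself died on a signal -- that last case is the
     * one worth reporting upstream. */
    static int parse_isolated(const char *path) {
        pid_t pid = fork();
        if (pid == 0) {                 /* child: do the risky work */
            xmlDocPtr doc = xmlReadFile(path, NULL, XML_PARSE_NONET);
            if (doc) xmlFreeDoc(doc);
            _exit(doc ? 0 : 1);
        }
        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status)) {
            fprintf(stderr, "parser died on %s (signal %d): keep this file!\n",
                    path, WTERMSIG(status));
            return -1;
        }
        return WEXITSTATUS(status);
    }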


It's worth noting that not every bug is a security-critical bug. Similarly, not every crash is a security-critical crash.

Sometimes you can exploit a bug to give you something, sometimes it's just a plain old bug.

People only pay for the security-critical ones.


There was a recent article about using randomly generated programs to crash different C compilers. They then automatically reduce the size of the programs to make them readable. Some corner cases are difficult to find by thinking about them, and their approach was to find them by brute force.

"57 Small Programs that Crash Compilers" http://blog.regehr.org/archives/696 (14 comments) http://news.ycombinator.com/item?id=3794934


A bug is sometimes just a bug. To be valuable, it has to lead to a vulnerability and ideally to privilege escalation. If it simply crashes the application without the ability to control further program execution, it's useless.
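(A minimal illustration of that distinction, as two contrived C fragments -- neither is from the article:)

    #include <string.h>

    /* Just a bug: on input with no ':' this dereferences NULL and the
     * process dies. The faulting address is fixed and attacker-neutral,
     * so the worst case is denial of service. */
    int header_value(const char *input) {
        const char *colon = strchr(input, ':');
        return colon[1];            /* crash when colon == NULL */
    }

    /* Potentially a vulnerability: the attacker controls the data and
     * how much of it is copied, so the saved return address on the
     * stack can be overwritten with a chosen value -- the raw material
     * for controlling further program execution. */
    void copy_name(const char *input) {
        char buf[32];
        strcpy(buf, input);         /* classic stack smash past 31 chars */
    }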


I've always wondered this about vulnerabilities: how can one guarantee an exclusive sale? And why doesn't someone who bought it just go ahead and re-sell it to (multiple) others to make a profit?


As mentioned elsewhere, generally the payment is spread out over months, contingent on the seller keeping their side of the bargain. And the second query describes the business practices of many defense contractors who act as de facto gatekeepers to government contracts.


This implies that "there can be only one." The idea that you could effectively prove that a seller is keeping their side of the bargain also implies that no one else would discover it.


It just means that the seller is incentivized to minimize the number of people that know about the vulnerability. Which is effectively what "exclusivity" actually means, at least in this case.

As an additional point, if either side becomes known as a bad actor in the market, they will severely limit their ability to operate. There is some short term incentive to be dishonest (more money now), but in the long term it removes the ability to earn in the future. Like selling your fishing rod for fish today, tomorrow you'll be hungry again, only now you can't fish. (To butcher a cliche.)

[edit: grammar]


How would you prove that the seller didn't resell the exploit?


Schneier has recently written an excellent analysis of the exploit market. He also links to a ton of interesting related material.

https://www.schneier.com/crypto-gram-1206.html#1


Perhaps this is the explanation for how Flame and Stuxnet had so many zero day exploits: the Feds crowdsourced them through the Grugq.


It's an open secret that they both build them and buy them. Where the Stuxnet ones came from is uncertain, but it could have gone either way.

I've heard they'll actually pay for one they already have if they hear a broker is selling it. In this case, they are paying to keep it scarce (as these deals come with exclusivity).


Hopefully the result of this revealed information will be to encourage companies to raise the prices they offer for reports of security-critical bugs. It's a shame that the legitimate route pays 1/10th to 1/25th of the alternative. Particularly since the illegitimate route is basically selling coding services to a government agency in a lot of cases (albeit closer to cracker than hacker), which some people do anyway as part of their normal coding jobs.


Brokering 0days - legal? yes. Risk free? I doubt it.

Grugq makes me think of Gerald Bull.


So is it a safe assumption that all porn sites are riddled with these types of exploits? Exploit revenue aside, I've never understood how those sites could be sustained given the bandwidth charges they must incur.


While porn sites might be riddled with malware, it is unlikely that it will be zero-days. More likely you'll see exploits for bugs that are publicly known but not yet patched everywhere.

The value of a zero-day is largely rooted in the fact that it hasn't been disclosed publicly, and any widespread use of a zero-day threatens that value. Zero-days will be used when the risk of discovery is very low or the payoff is very high, and attacking random people who visit dodgy websites is unlikely to meet those conditions.


1. Grab as many viewers as you can

2. Push as many links as possible for subscriptions to "legit" porn sites (those 5-7 minute videos with a "view more / view the full version here")

3. Take a % on the people that register through that

That and ads are the main income sources. The percentage of people who end up subscribing is very small, but then again a dozen bucks buys you a lot of bandwidth these days.


Perhaps the worst thing about this is that engineers at the browser makers are now incentivized to create zero-days. $250k and rising for knowingly creating a backdoor? There's now a market for software engineer corruption. Maybe add in some middlemen...


Wouldn't it be fairly easy to get caught doing this? Especially if you did it multiple times, it seems obvious that exploits in code you checked in would be traced back to the malicious engineer.


Very good point. Are there any indications that this is happening?


"Google typically offers a maximum of $3,133.70"

I believe they recently paid out 2x $60,000 prizes.


They did, but this was for prizes during Pwnium at CanSecWest, not everyday amounts up for grabs.


True, but the reward is now up to US$20,000 in some cases.

http://www.google.com/about/company/rewardprogram.html


I should also note that that's $20k for a bug or series of bugs, not an exploit. Considering the monumental effort required to write a stable, viable exploit, getting 1/5 of the money (in theory) could well be worth it.


I know it was inevitable, but I find it rather disappointing that exploits have become such big business. I felt like the hacker/cracker community between the late-80s to mid-90s was just that: a community, which likely developed naturally because of the scarcity of information at the time. Sure, there were exploits being traded (and probably even sold occasionally), but I never got the impression that anyone treated it like a business.

Admittedly, I'm probably just being sentimental about my childhood, but that's how I remember it.


It has changed a lot in the last 20 years. Now organized crime as well as the involvement of nation-states has made it far more lucrative, and conversely, dangerous. Kingpin (kingpin.cc) does a good job of describing how much money there is to be made, even when you're not a PhD.


It's a pity the article only mentions client-side exploits; it would have been interesting to see what is paid for server-side zero-days, especially Linux/LAMP-related...


Consider the possibility that governments can create their own exploits. If they have a large quantity of server-side bugs, the marginal utility of one more is effectively 0. It is safe to assume that they have existing capabilities in that area. Just mentioning a LAMP stack means SQLi is the most likely vector. No point in paying for someone to run sqlmap for you... ;)


The article says that this practice is legal. If the sale is made, and then the client uses it to do something illegal, is the hacker / Grugq free of liability?


I would guess yes. The sale of firearms is legal, but if a buyer then goes and kills someone with the gun, the seller of the gun is not liable. Not an exact comparison but for legal comparisons I believe it's apt.


On the other hand, selling firearms to people without doing a background check or when you know they're mentally ill or any such thing can be illegal (in some places, at least; I of course can't be familiar with legislation everywhere). I expect similar legislation may be introduced in the 'exploit market'.


There are noble and perfectly legal ways to use a gun, but there are not many for 0days.


Jailbreaking, test cases for hardening your own systems (cf. Metasploit), opening appliances/devices to other analysis.

Three very common cases. Would quarter million dollar exploits be used for these? Probably not, but it doesn't change the fact that there are legit reasons to buy, sell, and use zero-day vulnerabilities.


The prices that go on in these markets make any of those reasons fall pretty blatantly on their face. Just as someone wanting to buy 100 AK-47s is almost certainly not going to use them to just take to the range.


Cydia (the gray-market app store) generates over a million dollars a year in revenue. The operation of this store, and thus its revenue stream, is entirely dependent on jailbroken iOS devices. Thus, there is a business entity with an existential interest in iOS exploits being easily available to the iOS-using community (i.e. the public). Would Cydia pay a quarter million dollars for an exploit to ensure that their customer base continues to exist? (Disclaimer: I'm not affiliated with Cydia in any way; that revenue figure is from an ex-Apple employee discussing an informal estimate.)


Noble? Not necessarily. Perfectly legal? The article talks about government-linked buyers. If the behavior associated with using the exploit is technically illegal, why would that ever stop a large government from using and deploying it?


Hacker sells to a broker in South Africa. AFAIK, SA is not blacklisted or anything, and who knows who the broker sells to, or who the broker says he sells to. I don't see how the hacker is any more liable than a petrol sales assistant selling fuel to a drug courier.

But then the law is a strange thing.


Depends on what country they are in and whether that country actually enforces its own laws. Thailand or one of its neighbors is probably a good place to live until any legal uncertainties are sorted out.


It's important to consider that Grugq is selective about his buyers. He won't broker a deal with Syria. As long as he sells to friends of the World Police, prosecution is not really a risk.


I think he is free of liability; however, with all the fear-mongering, how long before calls to regulate this appear?


Regulate under what jurisdiction?


Perhaps 'perceived' public safety?


Another article that reveals nothing new and only serves to help elevate the public's fear that a full scale cyber war is imminent...


The hacker should accept bitcoins for the exploit, saving the commission fee and maintaining better anonymity.

Also the buyer could stay anonymous too.


Linux isn't on the list, so I'm safe! :P


Why would the iPhone be harder to hack than Android? Is it because iPhone software is more closed-source?


Is there some aspect of this story that is new? The article was published three months ago.


Is Linux not interesting for the buyers, or just not that easy to hack?


Probably has to do with servers being locked down pretty tight and desktop Linux not having a large enough market share.


A lot of existing capability combined with minimal utility (Linux market share is a rounding error). There is basically no interest.



