The Intercept checked some claims itself: "Testing performed by The Intercept last month on a trial copy of “Kaspersky Small Business Security 4” determined that, while some traffic was indeed encrypted, a detailed report of the host’s hardware configuration and installed software was relayed back to Kaspersky entirely unencrypted. By the time of publication, Kaspersky told The Intercept via email, it was unable to reproduce these results."
Kaspersky traffic being easy-ish to intercept may be a feature, not a bug, considering Kaspersky's ties to the Putin regime. Some details here: http://www.wired.com/2012/07/ff_kaspersky/
I think AV is a tricky thing as it gives ring 0 (or equivalent) access to workstations and has its fingers in pretty much everything. Intelligence agencies probably see it as a huge win if they can commandeer one without anyone knowing.
Shame FOSS AV never really took off. I wouldn't be surprised if all AV had some relationship with their host government.
Is there a model for giving the maintainers a larger incentive to be honest, than the incentive that could be provided by a bad player?
To restate - Many FOSS maintainers live a ramen lifestyle. If someone either A. wanted a zero day exploit, or B. wanted a virus to be ignored, they might be able to pay the maintainer enough to violate ethics. Is there a way to incentivize maintainers enough to counter this risk?
> Many FOSS maintainers live a ramen lifestyle. If someone either A. wanted
> a zero day exploit, or B. wanted a virus to be ignored, they might be able
> to pay the maintainer enough to violate ethics. Is there a way to
> incentivize maintainers enough to counter this risk?
Weirdly, people living on ramen lifestyles are often the most difficult to corrupt with money. There’s a reason they aren’t using their intelligence to trade financial instruments or writing programs to do it for them.
The most vulnerable to corruption are often those with middle-class lifestyles who are living above their means and stretched to make ends meet. They often value the appearance of a comfortable lifestyle, and once you start playing golf with Alice, drinking fine wines with Bob, and horseback riding with Carol, you are in a deep financial hole.
Worse, it’s really all about the social aspects for such people, so the idea of a tight fiscal diet to regain control of their finances is anathema to them. They can’t suffer the shame of having to admit to their social circle that they don’t belong.
It’s easy for authorities to target those running up debts.
The irony is, sometimes everyone in the same social circle are all in over their head. They’re all suffering from a kind of impostor syndrome, each of them thinks that everyone else can afford the Suburban in the driveway and the jet-ski in the garage, and that they’re the only one in trouble. In reality they’re all in a Red Queen’s Race.
1. Rewards: Provide good rewards for honest maintainers. Pay them out of government grants or get private funds to pay them. Give maintainers respect/status, community, and money. Tie the security of a product to this funding process. Rewards for no independently discovered remote exploits for X months, etc. Rewards for independent users finding and disclosing exploits.
2. Punishments: Intentional inclusion of security holes should be treated as a criminal act. FOSS maintainers who break the trust which is placed in them should be viewed as untrustworthy and anti-social. Have very clear red lines and very clear consequences. Pass laws which make it illegal for any actor, including the US government, to petition a vendor to include security holes and backdoors.
3. Technical restraints: Have a two-person rule for commits. Open code reviews and cryptographically attestable, append-only revision systems with signed commits. Forbid coding styles that make it easy to obfuscate code. Fund independent code audits like the TrueCrypt audit project. Ensure the code which is published is the same as the code that was audited. Require deterministic builds.
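The "cryptographically attestable, append-only revision system" in point 3 can be sketched as a simple hash chain, where each record's digest covers the previous record's digest. This is a toy illustration of the idea, not a real version control system:

```python
import hashlib

def chain_append(log, entry):
    """Append an entry whose digest covers the previous head,
    so any later rewrite of history is detectable."""
    prev = log[-1]["digest"] if log else "0" * 64
    digest = hashlib.sha256((prev + entry).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "digest": digest})
    return log

def chain_verify(log):
    """Recompute every link; a tampered entry breaks the chain."""
    prev = "0" * 64
    for rec in log:
        if rec["prev"] != prev:
            return False
        if hashlib.sha256((prev + rec["entry"]).encode()).hexdigest() != rec["digest"]:
            return False
        prev = rec["digest"]
    return True

log = []
chain_append(log, "commit 1: fix parser")
chain_append(log, "commit 2: update signatures")
print(chain_verify(log))   # True
log[0]["entry"] = "commit 1: add backdoor"
print(chain_verify(log))   # False: history was rewritten
```

Signed commits add the other half: the chain proves nothing was silently rewritten, while signatures prove who appended each link.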
Not only does AV software run privileged services in the operating system, it's also full of security holes itself.
I seem to recall a talk at Black Hat, but I can't seem to find it now, where someone did some basic file format fuzzing and could reliably run code in almost all of the popular AV software.
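The "basic file format fuzzing" described would look something like this: take a known-valid sample, flip some random bytes, and feed each mutant to the AV engine's parser while watching for crashes. This is a minimal sketch; the sample bytes and the `scan_file` hook mentioned in the comment are hypothetical stand-ins:

```python
import random

def mutate(data: bytes, flips: int = 8, seed: int = 0) -> bytes:
    """Return a copy of `data` with `flips` randomly chosen bytes
    replaced by random values (deterministic for a given seed)."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(flips):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

# Stand-in for a valid input file (e.g. a PE header stub).
sample = b"MZ" + bytes(64)

for seed in range(3):
    mutant = mutate(sample, seed=seed)
    # In a real harness you would hand `mutant` to the AV engine's
    # file parser here, e.g. scan_file(mutant), and watch for crashes.
    assert len(mutant) == len(sample)
```

Even this naive approach finds bugs in parsers that were never hardened against hostile input, which is presumably what the Black Hat talk demonstrated.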
That's scary. I would also target them if I was a black hat or a three letter agency (which basically amounts to the same in this context).
The article confusingly conflates several very different things:
1. Reverse engineering popular antivirus software so the agencies can see what data it sends home.
2. Monitoring the emails of cybersecurity companies.
The first of those is totally fine. It's the agencies' whole mission. The question is not whether they're allowed to reverse engineer software, but who/what they start spying on using the things they learn. If they're spying on Al Qaeda, I don't care if they're using antivirus vulnerabilities, OS zero-days, or a wiretap at the ISP. Seems OK. The choice of technology doesn't seem relevant. And if they're spying on teenagers sending naughty pics to each other, it's equally not OK no matter which of those technologies is being used.
The second set of behavior is far more worrying though. Our government shouldn't be spying on the internal communications of law abiding companies.
Eh? Both of them are just as "bad" or "good" depending on who exactly they've been tracking, and what capabilities one thinks an intelligence organization in the 21st century should have.
From reading the published slides it was actually a pretty nifty operation. They've tracked data from security vendors to learn if their tools are being detected, and more importantly getting information about operations which are being run by other organizations and nations.
For the past decade or so, security vendors (mostly non-US ones) have detected and published research about operations that were run by the US, possibly by the NSA. A smart move would be to gather information about those companies to understand what they discover and how they discover such operations.
Also, since the NSA and similar organizations have better knowledge of how such operations work in the first place, they most likely have a better chance of detecting new malware which might not be picked up or properly classified by your average security vendor/solution.
So again, not really a surprise: if you are going to invest hundreds of millions of dollars in developing an entire operational tradecraft, consisting of specialized software and its entire supporting ecosystem, you would monitor organizations that can blow your operation apart, and that already have done so in the past.
When Kaspersky finds the latest version of some RAT used by the NSA, the Israeli NSU, or whoever, they blow a huge operation. None of this is to say that Kaspersky is bad, that they support terrorism, or that they shouldn't release such information, but you can't expect the organizations on the other side of that story not to do anything about it.
The first bit still needs oversight rather than the security agency having carte blanche.
To some extent it's a positive thing that it was a warrant asking a court for continued permission that was leaked. It suggests that, at least to some extent, GCHQ are trying to be above board.
Whether or not the court is acting in a way that balances privacy and security is another matter. I don't know enough to have a firm view.
> 1. Reverse engineering popular antivirus software so the agencies can see what data it sends home.
How is this totally fine? The DMCA in the USA prevents this sort of thing, look at Russian programmer Dmitry Sklyarov, he was jailed for several weeks and detained for five months in the United States. I'm not a big believer in "intellectual property", but if the act of invention is sacred, and confers property rights on the inventor, then spy agencies need to respect that. Otherwise, either (a) the DMCA is an instrument of oppression or (b) the spy agencies are instruments of oppression, or (c) both (a) and (b).
I'm not sure how to interpret your remark. Generally, we want people to obey the law for a number of reasons, like their own safety, safety of others, morality, protecting rights of minorities (or even majorities), the ability to speak out about wrongs or injustices. Generally, we have laws that we hope encourage us to do good things, and discourage us from doing bad things.
Shouldn't spy agencies do good things, and avoid doing bad things? Letting spy agencies not be accountable for bad behavior seems like a policy that won't work out too well, that will lead to contradictions like eating meat, but hating the butcher for killing animals.
That is, if freedom from being spied on is a good policy for US citizens, it seems like it's a good policy for everyone, regardless of citizenship. The opposite policy (no spying on citizens, but spying on everyone else) ends up making the lack of surveillance a temporary privilege, to be revoked by someone if and when the policy becomes inconvenient.
It wasn't a normative statement. Rather, it was intended to point out what you just did: if you want spy agencies to follow the same laws everyone else does, you're effectively arguing that there should be no spy agencies.
The DMCA outlaws bypassing copy protection schemes and access controls. It explicitly _exempts_ (ie, does not make illegal) bypassing copy protection for legal reverse engineering activity, including security research.
Read the DMCA. It's surprisingly readable. It does not outlaw reverse engineering.
I'm not really surprised that the NSA would reverse engineer AV software; this seems like a no-brainer article that I would expect from TechCrunch.
The most surprising thing to see in the slides is 'DNS Interdiction' which isn't mentioned in the article at all. It seems like they are compromising the DNS system to have the request sent to the wrong location so they can log it. This probably leaks all sort of information about your AV set up.
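One hedged way to notice DNS interdiction of this sort is to compare the answers different resolvers return for the same name: a persistent divergence for, say, your AV vendor's update host is suspicious. The resolver answers below are hard-coded stand-ins, not real lookups, and the hostname is hypothetical:

```python
def divergent(answers_by_resolver):
    """Return True if the resolvers disagree about the address set
    returned for a single queried name."""
    sets = {frozenset(addrs) for addrs in answers_by_resolver.values()}
    return len(sets) > 1

# Hypothetical answers for "updates.example-av.com"; in practice you
# would gather these by querying each resolver with a DNS library.
answers = {
    "local-resolver": ["203.0.113.7"],      # possibly interdicted
    "8.8.8.8":        ["198.51.100.21"],
    "1.1.1.1":        ["198.51.100.21"],
}
print(divergent(answers))   # True: the local resolver disagrees
```

Divergence alone isn't proof of interdiction (CDNs legitimately hand out different addresses by vantage point), but a single resolver consistently disagreeing with the rest is worth a closer look.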
This is like the government randomly placing dynamite in a very small percentage of fire extinguishers. The overwhelming majority of people will not be affected by this, but it shakes confidence in an important safety item and may encourage large numbers of people to stop using AV software, to the detriment of their own computers and networks in general.
This "leak" seems to be a way to hit at Kaspersky. It seems like Western governments are really out to get them, eh? It's like: these bozos aren't in our pocket, so let's try to destroy their credibility, in hopes that companies will switch to Western vendors for their security products, whom we can then control using national security letters, etc.
You might want to look carefully at the history of Kaspersky before you cast them into a white knight role. Would you blindly trust a company with known KGB and FSB ties?
No one's a saint, everyone's a sinner. We're casting the spotlight a certain way. I am sure it works equally well the other way. The point being there should be choice, not unilateralism like the Five Eyes want. It's all business, never personal. If you don't want Russia to have your secrets because they have something to gain, then by all means avoid the Russians. But it's for sure the case that Americans and the British will take your secrets and give them to their own industries for competitive advantage... while wagging their finger at everyone else for doing the same.
That's not entirely true. Even if I behave completely above-board, it's possible that some site I visit has been compromised by hackers. Without some kind of protection, I could then be damaged by malware.
By detecting some typical exploit pattern, the exploit kit itself, the malware the exploit eventually ends up downloading and executing, or even the malicious host itself. There might be other ways too, but those at least are the most typical ways.
An antivirus definition file is a lot easier to update than a browser component that is exploited. The latter generally involves a lot more testing, whereas the former is essentially just metadata.
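That asymmetry follows from how signature scanning works: the definitions are essentially byte patterns, so shipping a new one is a data update rather than a code change. A toy illustration (the database here holds only the well-known EICAR test pattern):

```python
# Toy signature database: names mapped to byte patterns. "Updating
# definitions" means appending a row to data like this; no code
# changes, no recompilation, no browser-style release testing.
SIGNATURES = {
    "EICAR-Test-File": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
}

def scan(blob: bytes):
    """Return the names of all signatures whose pattern appears in `blob`."""
    return [name for name, pattern in SIGNATURES.items() if pattern in blob]

print(scan(b"X5O!..EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"))
# ['EICAR-Test-File']
```

Real engines use far more sophisticated matching (wildcards, emulation, heuristics), but the update-path asymmetry is the same: metadata ships daily, engine code ships rarely.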
It's scary how much information antivirus software nowadays sends home every day. It's often marketed as "cloud-based heuristics" or something similarly nebulous. What it really means is that the antivirus software will send home a sample of anything it's not sure about. It could be a confidential document of yours. It could contain your personal information. Oh, and don't forget that MITMing your TLS connections is now a highly desirable feature. A lot of them have Superfish built right in!
In the past, if you bought an antivirus software, that was it. The antivirus would download new signatures at regular intervals, and not bug you until your subscription expired. Nowadays, every antivirus requires you to be logged in to your account all the time, ostensibly to ensure that your subscription is valid, but also in order to allow anyone with access to the web-based control panel to trigger all kinds of scary actions. Every vendor also tries to upsell at every opportunity, sometimes even after you've paid.
I tried to stick with Microsoft Security Essentials for the longest time, because if I had to open a backdoor to somebody, I might as well trust the company that wrote my OS in the first place. Also, MSE was pretty good when it first came out. But its detection rates have steadily gone down, and a number of people to whom I recommended it have gotten viruses that MSE couldn't catch. I use BitDefender on my Windows boxes now. It's the only antivirus I could find so far (apart from MSE) that doesn't constantly nag me to upgrade, but who knows? I might have just opened a backdoor for the Romanian government or something.
Meanwhile, how many NSA backdoors are there in US-made AV software? Note that no reverse-engineering effort targeting such companies was described in the memo.
I saw that too, in the page of AV company logos. All from outside the USA.
As one of my kids pointed out to me last week, you really need to trust your locksmith. It would appear that you really need to trust your AV vendor, too.
US companies, or companies with a large stake in the US market, will respond to national security letters, which means it's easier to get them to cooperate.
So are taxes; what's your point? Companies that want to work in the US need to comply with US laws.
Everything in the world is pretty much done through compulsion of one sort or another.
I'm pretty sure that Kaspersky works for the Russian intelligence services (Eugene is an IKCI (FSB Technical Academy) alumnus), and you can be sure that Check Point and other Israeli companies work with Israeli intelligence, and the same goes for pretty much any other security vendor out there; if you are big enough to matter, you will be in bed with someone.
Some of L0pht (which was a community, which might be described now as a hackerspace/collective centered in Boston) became a security consulting shop, @stake (atstake), which was then sold to B4 PwC and then sold to IBM (perhaps integrated into GBS, the consulting services arm that tends to accumulate exceptionally smart people).
It's most likely the case that all western AV companies readily respond to National Security Letters and/or Warrants - providing the NSA/GCHQ with all the information that they need. Kaspersky on the other hand, is probably more difficult for the NSA/GCHQ to secretly coerce, hence they need to exploit their software.
The Intercept: "Popular Security Software Came Under Relentless NSA and GCHQ Attacks"
https://firstlook.org/theintercept/2015/06/22/nsa-gchq-targe...