Without wanting to start a political flame war, it would be great if there were consistency in how we in the tech community and the media treat these types of vulnerabilities. When Huawei has these sorts of bugs, they are reported as backdoors. Bugs happen in software; it would be nice if we put the nationalism aside and reported them consistently as bugs or vulnerabilities.
Also, you should have ACLs and VLAN segmentation in place (assuming they're used as pure layer 2 devices) so that only certain authorized sections of the network are even able to reach things like the management SSH and SNMP daemons.
I heard https://nvd.nist.gov/vuln/detail/CVE-2019-1804 is Cisco's ninth backdoor so far this year. Not ninth security problem total, ninth backdoor. The ninth security problem Cisco shipped intentionally.
Meanwhile, the router that serves my office is from a company that's had fewer than nine security problems in the past ten years. Two, I think, but I confess I don't really keep count (ditto the nine above). The precise number doesn't matter, because
1. If you want to be a cynic, you can point out that 9 > 0 and 2 > 0 and really they all suck.
2. And if you don't want to be a cynic, then Cisco's recent record is in a league of its own. Steals the show. Makes other people's CVE counts look like rounding errors.
> Not ninth security problem total, ninth backdoor. The ninth security problem Cisco shipped intentionally.
How can you be sure it was intentional?
> Meanwhile, the router that serves my office is from a company that's had fewer than nine security problems in the past ten years.
How can you be sure? Did you audit all the source code yourself? Did you compile the source code yourself and are you running only binaries you compiled? Are you sure you can trust the compiler you used?
Or, are you assuming that, because there isn't a CVE, there isn't a vulnerability or security problem?
Fewer CVEs doesn't necessarily mean more secure; it may just mean less validation/testing, etc.
But sure, it could mean more secure; it's just not a guarantee.
Someone at Cisco intentionally created a keypair and intentionally put it in the image build process. They may or may not have intended to put it in production builds, but they clearly intended to set it up in some form, when they could have just ... not. If you take the easy but risky approach, you have certainly intentionally put yourself at risk.
I've worked for a company that built OS images for distribution to customers. Putting my SSH key in development image builds would have been convenient, but there was too much of a risk of exactly this problem; instead we just made it easy enough to download an SSH key on a development build (and start up an sshd) once you've booted it and have physical access to a terminal.
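A rough sketch of that "download a key on a dev build" approach, assuming a small Python helper run by hand from a physical console (the key URL and paths here are hypothetical):

    import os
    import subprocess
    import urllib.request

    # Hypothetical dev-build helper: fetch an engineer's public key from the
    # build server and start sshd. Run manually from a physical console on a
    # development image; never baked into a production build.
    KEY_URL = "http://build-server.example/dev-keys/engineer.pub"  # hypothetical

    def enable_dev_ssh():
        key = urllib.request.urlopen(KEY_URL, timeout=10).read()
        os.makedirs("/root/.ssh", mode=0o700, exist_ok=True)
        with open("/root/.ssh/authorized_keys", "wb") as f:
            f.write(key)
        subprocess.run(["/usr/sbin/sshd"], check=True)  # start the SSH daemon

    if __name__ == "__main__":
        enable_dev_ssh()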
Also, a practical concern with disclosed vulnerabilities is that non-nation-state attackers (which are most of the attackers most people care about) are very unlikely to find and exploit a vulnerability that neither has a public CVE issued now nor will have one issued for years. So even if the alternative vendor has difficult-to-discover vulnerabilities, there is, in a very real sense, reduced exposure from those vulnerabilities compared to things that are disclosed and fixed. And especially if Cisco's disclosed-and-fixed vulnerabilities originate from outside vulnerability reports, there's a definite correlation between whether a vulnerability can be found by someone who would report it and whether a vulnerability can be found by someone who would exploit it.
Backdoors aren't bugs like most others. Buffer overflows happen because someone mistypes or forgets a length check, etc.
Backdoors are unusual: They happen because someone writes code of the form addAccount("s3kr3t", "s3kr3t"), and that's code someone has to write deliberately. You can typo and accidentally omit a bounds check, but you can't typo and accidentally end up with a valid SSH key pair and code that installs it.
It's possible that shipping that SSH key pair and the code to customers was a bug, e.g. if someone wrote that code on purpose, intending to add and deploy s3kr3t/s3kr3t but never intending that code to reach the branch deployed to regular customers, and then someone else mismerged. In that case serving it to customer X is due to a bug; it should only have gone to customer Y or test environment Z. What I'm saying is that creating those backdoors at all must have been intentional.
(Personally I think shipping backdoors to test environments is fine, including test environments at customers. Risky, though.)
> Backdoors aren't bugs like most others. Buffer overflows happen because someone mistypes or forgets a length check, etc.
> Backdoors are unusual: They happen because someone writes code of the form addAccount("s3kr3t", "s3kr3t"), and that's code someone has to write deliberately. You can typo and accidentally omit a bounds check, but you can't typo and accidentally end up with a valid SSH key pair and code that installs it.
Not sure if you're intentionally trolling, but a backdoor is simply some code, known to a particular person, which bypasses security. It does not have to be obvious. The ones that have plausible deniability are the better ones, as that is considered a feature. That way the company can say "oops, we made a mistake".
To say that a backdoor must be obvious is absolute nonsense, particularly for closed-source binaries, where disassembly or even simple tools like https://en.wikipedia.org/wiki/Strings_(Unix) would reveal the presence of such a backdoor.
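For illustration, here's a minimal strings(1)-style scan in Python (the firmware filename and the patterns are hypothetical) showing how quickly an embedded literal credential or PEM-encoded private key would surface in a naive backdoor:

    import re
    import sys

    # Minimal strings(1)-style scan: find printable ASCII runs of length >= 8
    # and flag ones that look like embedded credentials or private keys.
    PRINTABLE_RUN = re.compile(rb"[\x20-\x7e]{8,}")
    SUSPICIOUS = re.compile(rb"BEGIN (RSA |OPENSSH )?PRIVATE KEY|passw(or)?d|secret",
                            re.IGNORECASE)

    def scan(path):
        with open(path, "rb") as f:
            data = f.read()
        for match in PRINTABLE_RUN.finditer(data):
            run = match.group()
            if SUSPICIOUS.search(run):
                print(f"{match.start():#010x}: {run.decode('ascii')}")

    if __name__ == "__main__":
        scan(sys.argv[1])  # e.g. "firmware.bin" -- hypothetical image path

Anything that trips a scan like this is a terrible backdoor; the ones with plausible deniability look like ordinary bugs instead.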
> (Personally I think shipping backdoors to test environments is fine, including test environments at customers. Risky, though.)
It depends. Unless that "test build" specifically has an option that disables all test-related backdoors, the answer is no. You cannot risk having something slip through to a production build.
As the previous poster said:
> I've worked for a company that built OS images for distribution to customers. Putting my SSH key in development image builds would have been convenient, but there was too much of a risk of exactly this problem; instead we just made it easy enough to download an SSH key on a development build (and start up an sshd) once you've booted it and have physical access to a terminal.
Another very common solution is a template where the keys are generated or imported at build time by whatever build system is being used.
That way, if something unintended happens or is "forgotten about", the build simply won't have a key at all and therefore will not work, rather than having a set of keys that are the same across all production builds.
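A rough sketch of that fail-closed idea, assuming a Python build script (the environment variable and image paths are hypothetical):

    import os
    import shutil
    import sys

    # Hypothetical build step: install a per-build public key into the image.
    # If no key was supplied, fail the build outright rather than fall back
    # to a default key shared across every production image.
    KEY_ENV = "BUILD_SSH_PUBKEY"  # hypothetical: path injected by CI per build
    DEST = "image/rootfs/root/.ssh/authorized_keys"  # hypothetical layout

    def install_build_key():
        pubkey_path = os.environ.get(KEY_ENV)
        if not pubkey_path or not os.path.exists(pubkey_path):
            sys.exit(f"build error: {KEY_ENV} not set; refusing to bake in a default key")
        os.makedirs(os.path.dirname(DEST), exist_ok=True)
        shutil.copy(pubkey_path, DEST)
        os.chmod(DEST, 0o600)

    if __name__ == "__main__":
        install_build_key()

The point is simply that a missing key breaks the build loudly instead of silently shipping a shared default.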
Mikrotik is okay for small routers, and so is Ubiquiti. If you get Mikrotiks, look for the ones in angular metal boxes, not curvy plastic ones. And avoid the GUI stuff; use the CLI. If you're looking for larger routers I'd look at Juniper first.
All of those will give you hardware that does the job and stays up, and provide uncomplicated upgrades for many years.
Palo Alto (like every vendor) has had similar vulnerabilities in the past with their web management. Typically, management of a switch/firewall isn't exposed directly to the Internet.
This is a pretty egregiously editorialized title; what we know is that there's apparently an SSH keypair authorized on these devices, for which the private key is available on the device. That's a terrible, ugly vulnerability, but it's as likely due to stupidity as to malice.
The right title is something like: CVE-2019-1804: Cisco Nexus 9000 Switches Allow SSH As Root.
Well, Cisco wrote the code, so it has to be in some way intentional, but that doesn't necessarily mean it was done maliciously. It could be that a private developer key used for testing accidentally got pushed out in production code, or some poorly thought-out management "feature". Regardless, it is an epically dumb mistake for a company like Cisco to make on an enterprise product.
> Well, Cisco wrote the code, so it has to be in some way intentional, but that doesn't necessarily mean it was done maliciously. It could be that a private developer key used for testing accidentally got pushed out in production code, or some poorly thought-out management "feature". Regardless, it is an epically dumb mistake for a company like Cisco to make on an enterprise product.
That someone might not be the company; it might be a single developer.
It could be that the company says it's not a backdoor and the developer says it's a mistake, but he/she was actually approached by an external organization.
Unless you can prove it either way, it's impossible to classify it as a backdoor or not.
And what do you think this was? Virtually all router/networking devices have some kind of "hardcoded account" (read: backdoor) and this is only slowly changing. I believe the EU is going to ban the practice soon.
I'm hypothesizing that you might do this even with keys intended to be used as a backdoor: by shipping the same key on all devices, you vastly increase the number of potential suspects for any backdoor abuse.
Continuing this train of thought: a hardcoded password is a classic example of a backdoor, and just as "public" as including a private key.
It is impossible to know the motivation of the person who put this here, but these constructs have no place in firmware for critical devices, and Cisco should have known that for a long time already. Either they truly are idiots or this is malicious.
Just stop. Is it really hard to understand that normal people can make mistakes without being malicious or incompetent? Imperfect and incompetent are not synonyms.
And my response to the OP was to push back on the idea that only "idiots" could make mistakes. To me that is an absurdly reductionist view of human nature.
You tell me stop and then proceed to ask a question? How incredibly rude. I suggest you rethink your philosophy of discourse, and words (hint: most qualities such as incompetence and maliciousness exist on a continuum).
Huh? You were labeling people as incompetent and supporting the OP's idea they were idiots. I was asking for empathy and understanding that people can make mistakes. And when I tell you to stop, I'm the rude one?
Yeah, telling somebody to cease communication and then asking them a question is absolutely rude. I think we're all incompetent to varying degrees in different domains, I don't think it's rude to express that. From my perspective I don't feel I was rude at all in this exchange until my last message with the snarky "hint" part, but I was okay with that since you had essentially just told me to shut up.
You are misinterpreting my very terse "Just stop". If I were to expand it: "Just stop trying to convince me that people who make 'mistakes' can only be incompetent, malicious, or idiots"
That is not the same thing as "Just stop communicating" or "shut up". I shouldn't have been so terse. Without the verbal cues you made a different assumption about what I was trying to say.
In this context I don't really see a difference between "stop making your point about the subject we've been communicating about" and "stop communicating in general." Am I supposed to talk about the weather, or engage in a lengthy meta-discussion about talking about the subject we've been talking about?
I never used the term idiots, that's you putting someone else's words in my mouth.
It's hard for me to imagine how a person can make a mistake in a given domain without being at least a bit incompetent in it, hence my point about competence/incompetence existing on a continuum.
Edit: ...and if the person were malicious, it wouldn't be a mistake to begin with.
So people extremely knowledgeable in a domain don't make mistakes in that domain? Or are they just "one notch too low" on the continuum- enough so that they make a mistake? I just don't think that holds.
Mistakes aren't always made due to incompetency and extremely competent people still can make mistakes.
Well, I do apologize for my sloppy writing that made you think I was telling you to shut up. That wasn't my intent. I was just trying to say that your argument wasn't persuasive to me.
We'll just have to agree to disagree regarding human nature.
The mods asked me to email comments like this to the hn@yc.c address in the footer (Contact link), and they have been responsive (not necessarily in agreement, but they do reply!) when I've done so. I emailed them a link to your comment as the edit request, with an attempt of my own:
> CVE-2019-1804: Cisco Nexus 9000 remote root exploit via SSH-over-IPv6
And plausible deniability is the #1 rule when being malicious. If you know enough to use an asymmetric key instead of a password, but not enough to think it's a good idea to leave the private key there, you're in a weird cross-section of expertise.
But any competently inserted intentional backdoor is going to be indistinguishable from a mistake.
If Cisco had some SecretFBIChinaBackdoor() function somewhere the backlash would be way way worse (or at least an unknown). Whereas at this point it's abundantly clear that serious "non intentional" security vulnerabilities in networking hardware basically go ignored by the market.
You're not forced to make an assumption; you can just be honest and say you don't know. There are too many comments in this thread effectively saying "it looks unintentional, so it's unintentional".
Yes, that is the term most commonly used to describe an “undocumented” access credential, regardless of why it exists. It has been used for credentials that were added and forgotten, credentials that were added to permit unauthorized access later, and credentials that were added to permit authorized repairs more readily. These credentials were included in an “undocumented” (or “unpublished” might be more precise) manner and can be used to bypass security, so “backdoor” is correct.
This of course says nothing about whether its inclusion was due to intent, incompetence, and/or malice. (If the private key includes “Comment: hack the planet” then yeah it’s malice :)
I don't own one of these, nor have I read the manual, and there's not much in the way of details on that page, but isn't this more like "use the factory-supplied default key to get in for the first time, then change it to your own"?
Can someone help me understand this better: Did Cisco leave a user public key in the switch, and the private key has leaked? To exploit this vulnerability, does an attacker have to get hold of that private key?
The keypair is essentially some default known value.
You shouldn't be able to use this to connect at all, but it apparently works over IPv6.
So you'd have to have the private key, as well as know the IPv6 address of the device you're connecting to, and that device would have to have a route to the internet or to some location you could connect to it from.
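To make that concrete, here's a minimal sketch of how a defender might check whether one of their own devices accepts a known/leaked keypair over IPv6. This assumes the paramiko library; the key path, username, and address are hypothetical placeholders:

    import paramiko

    HOST = "2001:db8::1"         # hypothetical IPv6 management address
    KEY_PATH = "leaked_key.pem"  # hypothetical: the published default key

    def accepts_key(host, key_path, user="root"):
        key = paramiko.RSAKey.from_private_key_file(key_path)
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            client.connect(host, username=user, pkey=key, timeout=5)
            return True   # device accepted the key: exposed
        except paramiko.AuthenticationException:
            return False  # key rejected
        finally:
            client.close()

    if __name__ == "__main__":
        print("vulnerable" if accepts_key(HOST, KEY_PATH) else "key rejected")

A rejected key doesn't prove much on its own, of course; it only tells you about that one device from that one vantage point.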
There is one bright side to these otherwise disgraceful incidents: all the customers running older versions are now forced to upgrade to the latest versions. The burden of supporting really old versions suddenly vanishes.
Box vendors should really stop selling unmanaged boxes/solutions. In reality, customers end up buying service contracts along with the boxes anyway. Instead, sell usage/service/connectivity and manage the hardware. A critical patch like this one could then be applied before a PSIRT advisory is released. Frequent upgrades (security patches or feature/bug-fix patches) are now commonplace. The user experience would be so much better if the solution were managed by the vendor (cloud managed).
Most places (especially where they have enough money to be buying Cisco Nexus 9k kit) will want some sort of change management, not the vendor to be making arbitrary changes to their critical infrastructure.
Also, given the number and severity of these sort of vulnerabilities in recent times, do you want to give the same companies remote access to your infrastructure as well? :)
As a former Cisco employee, I can tell you why companies never want to open source their security-sensitive products:
Pros of open sourcing a product:
- fewer total vulnerabilities
Cons of open sourcing a product:
- more publicly-known vulnerabilities
- less effort required to find new vulnerabilities
The product might be more objectively secure, with more bug reports and more fixes.
But it will be less practically secure. There will be more known vulnerabilities, and many customers can't upgrade, leaving more total vulnerable customers. And worse, now anyone on the internet can try to find new vulnerabilities for $0, while before they'd need to buy a $1,000+ piece of hardware to even get a shot at the compiled code.
The real defense against this problem is security auditing. Security engineers try to hack the device while asking a bunch of questions about SSH connections and private keys. This is the technique most companies employ, often combined with bug bounties.
Really what you're saying is that if you open source your product, the degree of shittiness will then be obvious to everyone, and that security by obscurity keeps the managers happy because they think there is less work.
For most enterprise IT products there is no "open down to the hardware" replacement and there never will be because there isn't a business model to create it.
And they completed it successfully. They've literally had a new backdoor every month for years now. No company is that incompetent unless ordered to be so.
A Nexus 9K is an expensive piece of kit, and is not a trivial switch to deploy, what with vPC and other configurations being commonplace, so just powering it on will not deliver a workable product. I suspect most if not all deployments follow best practice and have a management VLAN with access-list controls limiting the source address of the connecting client, and blocking access to port 22 from other networks.
Because the anti-Huawei talk is obviously not driven by people's interests. Just look at how zero politicians worldwide argue that Cisco should be banned from anything related to internet infrastructure, despite it showing again and again that it is unwilling to stop shipping backdoors. Cisco makes it obvious that backdoors are A-OK as long as they're our backdoors. No one acting against Huawei actually cares about people's security.