Duplicate SSH Keys Everywhere (shodan.io)
310 points by achillean on Feb 17, 2015 | 84 comments



Related: if you don't have enough entropy when creating keys, your public keys might not be identical, but they can share primes. Sharing primes allows this cool factoring attack by DJB and others.

https://scholar.google.com/scholar?cluster=92037504584829650...

This paper explains how an attacker can efficiently factor 184 distinct RSA keys out of more than two million 1024-bit RSA keys downloaded from Taiwan's national "Citizen Digital Certificate" database.

These keys were generated by government-issued smart cards that have built-in hardware random-number generators and that are advertised as having passed FIPS 140-2 Level 2 certification. These 184 keys include 103 keys that share primes and that are efficiently factored by a batch-GCD computation. This is the same type of computation that was used last year by two independent teams (USENIX Security 2012: Heninger, Durumeric, Wustrow, Halderman; Crypto 2012: Lenstra, Hughes, Augier, Bos, Kleinjung, Wachter) to factor tens of thousands of cryptographic keys on the Internet.
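The batch-GCD idea is easy to demonstrate in miniature: if two RSA moduli share a prime, a plain GCD over the public values recovers it. A toy sketch (the tiny primes below are made up for illustration):

```python
from math import gcd

# Toy illustration with made-up small primes; real keys use 512+ bit primes.
p, q1, q2 = 101, 103, 107      # p is the accidentally shared prime
n1, n2 = p * q1, p * q2        # two "public" moduli generated with bad entropy

shared = gcd(n1, n2)           # an attacker computes this from public data alone
assert shared == p             # both keys are now factored
print(n1 // shared, n2 // shared)   # -> 103 107
```

The batch version avoids the quadratic all-pairs comparison by computing the GCD of each modulus against the product of all the others, which is what makes scanning millions of keys practical.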


I almost didn't read the paper because I thought it was only about the batch-GCD method: that's fairly easy to understand. But I'm glad I read the paper because it goes on to explain how using the shared factors allowed them to understand the failure of the PRNG and use that understanding to break keys that didn't share factors, using a more sophisticated attack. It's definitely worth reading that paper.


Taiwan has a citizen digital certificate process - cool !

I mean that's a seriously big advance over anything I imagined a govt capable of. Estonia was the only other place I heard even close.

And is 184 out of 2 million so bad? Just sign the next round of certificates with this lot and move on.


Portugal also has a national ID card that's a smart card and includes a key. The last time I tried to use it in the justice ministry website it required accepting a self-signed HTTPS certificate. When I reported the issue I got a "oh, just accept it and it will be fine" response from a supposedly technical person.

So yeah, these things are getting deployed in some places but I'm not too hopeful it will end up being much more than a gimmick. At least I hope not because the next step on the minds of some of these "technical" people is electronic voting and that won't end well...


Italy started issuing smart cards around 2006. Some of the ID cards issued are electronic (not all: you need to request one, and it takes longer to get). They contain an RSA key pair and an X.509 certificate. There are some mandatory use cases (eg: all documents or emails transmitted to governmental agencies must be digitally signed) and some optional ones (you can store fiscal documents as signed PDFs if you don't want to keep them physically for ten years). Among regular citizens it sees very little use. It has some quirks, but it's a step forward. To boost its usage, the EU should issue a standard and ask every country to adopt it.


Not every country in the EU has ID cards, introducing them in the UK has a lot of opposition.


And of course there's South Korea, where every financial transaction above a certain amount needs to be signed with an X.509 certificate. Back in the late 90s and early 00s, Korea was a pioneer in the public use of PKI.

But since no browser makes it easy to sign anything with a certificate stored on disk (or in a USB stick), this policy has led to serious dependence on ActiveX plugins (and therefore IE and Windows) that everyone has probably heard about.

Now the Koreans are trying to fix that, but since the government is just as boneheaded as ever, the officially mandated "cross-browser" solution is just a Windows binary that installs NPAPI and PPAPI plugins into every browser. Some variants even work on Linux, but only because they're implemented as Java applets.

PKI is cool in theory. In reality, it's hard to implement it in a way that is accessible to the general public.


In Kazakhstan it's achieved by using a signed Java applet. It works well on Windows, OS X and Linux. On Linux there are some problems with smart card readers, but generally things work well. I don't like Java applets, but they are probably better than ActiveX.


There's a similar problem in Kazakhstan. egov.kz is their e-government site. They issued their own root certificate and use it to sign all users' certificates. They also signed their SSL certificate with that root, which causes browsers to show a warning.

Not a very smart move, if you ask me. If you teach users to ignore HTTPS errors, a MITM attack becomes easier because users don't see anything suspicious.


It's a dumb move from browser makers (self-signed certificates really shouldn't be the same kind of error as outright fraud). Maybe they can get the Kazakh root certificate incorporated into major browsers? But until then what other approach is possible to bootstrap it? There are obvious reasons the Kazakh government infrastructure wouldn't want to be dependent on e.g. Verisign.


>They issued their own root certificate and they use it to sign all users certificates.

In our case I think we even have our own CA trusted by the browsers so there's no apparent reason for the self-signing.

>If you teach users to ignore HTTPS errors, then MITM attack becomes easier because user don't see anything suspicious.

That's precisely the point I made to the "technical" person and it was ignored...


In Norway there was/is a project that AFAIK originated around the National Lottery's need/desire for smart-card backed digital transactions: https://www.buypass.no/bruker/buypass-id/buypass-smartkort

It's a little unclear to me what the current status is: apparently it's valid for the national (tax etc.) portal (altinn.no - literal translation: (hand)ineverything.no). The basic idea was that the postal service already allows the sending/delivery of registered letters -- so why not just use that for validating the ID. These days mail is largely handled by supermarkets -- and my impression is that the ID check isn't as reliable as it used to be when the postal service operated as a government monopoly.

Anyway, AFAIK, it's still the closest thing we have to working, secure digital ID in Norway.


> Portugal also has a national ID card that's a smart card and includes a key.

German ones have done something similar since late 2010. The card can be used for online authentication and electronic signatures (the latter requires a certificate that is available separately).

http://en.wikipedia.org/wiki/German_identity_card#Chip


> Portugal also has a national ID card that's a smart card and includes a key.

So does Finland (since 2004).

But under 10% of the population has such a card (most people use drivers licence and/or passport as ID), and thus almost no private web services accept it for identification.



Don't most countries have ePassports?


Yes, 184 out of 2 million is that bad, because apart from anything else it indicates that the output of the random-number generators used had far, far less entropy than it should.

(How can you use a completely compromised private key as a root of trust for another certificate, anyway?)


Sweden also has a digital certificate ID, but it was developed by the major banks instead of the government (with the government originally only providing a legal framework to allow digital signatures to be legally valid - BankID is now the primary method to file taxes and manage government services online) http://www.bankid.com/en/What-is-BankID/


I'm not sure why this is a good thing. An advance would be not requiring any certification at all, and not having a single point of failure -- because the tech will fail, or be subverted before it does [reference: 5000 years of modern history]. A nation without a 'citizen' certification system is, to me, cool.


This is why we can't have nice things. And by nice things, I mean identification of people via cryptography rather than rectangular pieces of plastic.


> rather than rectangular pieces of plastic

Finally, the U.S. gets a mention.


This is not really news, this was already discovered by Nadia Heninger and others in a paper in 2012. From their faq about their research: "We found significant numbers of insecure RSA and DSA public keys in use on the Internet: we found that 5% of HTTPS hosts and nearly 10% of SSH hosts shared keys for reasons we considered a vulnerability." See here: https://factorable.net/faq.html Mostly embedded devices that auto-generate their key on bootup. There was a pretty interesting talk at ccc where part of this research is mentioned: http://media.ccc.de/browse/congress/2012/29c3-5275-en-factha...


This also applies to those of you building "template" VMs in VMware or Xen -- before you save that template, please delete the SSH host keys in /etc/ssh/ so they are generated on first boot of the deployed template!
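For a stock OpenSSH layout (host keys under /etc/ssh/ -- an assumption about your distro), the template cleanup can be a sketch as simple as:

```shell
# Run inside the template before saving it:
rm -f /etc/ssh/ssh_host_*_key /etc/ssh/ssh_host_*_key.pub

# Most distros regenerate missing host keys on first boot; if yours
# doesn't, add a first-boot hook that runs:
ssh-keygen -A    # creates any missing host keys with default types/paths
```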


It would not surprise me if there are other bits of unique info on the template VM that need to be cleared before freezing the image, e.g. hostnames.

This is a solved problem in Windows. The sysprep tool wipes out the hostname, security keys, user accounts and a bunch of other things one can configure, so the image initializes properly the first time it's booted.

I am curious if there are similar tools for Linux & Unix that the image builders simply failed to use here, or if this points to an opportunity to build a useful tool.


There's virt-sysprep [1] and sys-unconfig [2].

[1] http://libguestfs.org/virt-sysprep.1.html

[2] http://linux.die.net/man/8/sys-unconfig



+1


virt-sysprep from libguestfs does this.

http://libguestfs.org/virt-sysprep.1.html


in VMware, you use Customization Specifications to do the equivalent to sysprep for Linux (and sysprep on Windows VMs)


With standard ssh keys, you now have a new problem: how do you know the key of your new VM? Trust-on-first-use is one option -- but a better option would probably be to move to ssh server/user certificates, and generate and sign a new server cert outside the VM as part of finalizing/readying it for deployment. Might not work well with elastic generation of VMs based on load (not sure how well ssh deals with multiple CAs -- one might imagine a structure with an ultimately trusted, possibly long-lived, off-line cert at the top, and the various sub-CAs organized based on security domains (eg: elastic VMs, user auth, etc)).

As far as I know, no sane, open automation for that exists yet. Maybe we should be happy people use keys at all, never mind if the trust/validation is flawed... (not to mention that keys never expire, and can only be revoked via a blacklist).
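OpenSSH supports this natively; a minimal sketch of signing a host key outside the VM, with hypothetical file and host names, looks like:

```shell
# One-time: create the CA keypair (keep the private half offline)
ssh-keygen -q -t ed25519 -N "" -f host_ca

# Per VM: generate a fresh host key and sign it before deployment
ssh-keygen -q -t ed25519 -N "" -f ssh_host_ed25519_key
ssh-keygen -s host_ca -I web01 -h \
    -n web01.example.com ssh_host_ed25519_key.pub
# -> ssh_host_ed25519_key-cert.pub, referenced via HostCertificate
#    in the deployed VM's sshd_config

# Clients then trust every signed host via a single known_hosts line:
#   @cert-authority *.example.com <contents of host_ca.pub>
```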


I've been thinking about adding server key synchronization (optional, of course!) across a Userify server group (cluster) but it seems even riskier than trust-on-first-use. What are your thoughts? Userify is definitely getting expiring keys soon (tracking when the key was last updated).


To be perfectly honest, as I understand what Userify does (I'm not a user) -- the whole platform would benefit from transitioning to ssh certificates[1]. Key synchronization is better than doing... nothing (which a lot of people do, including me, from time to time :-), but I think key management for public-key authentication is pretty much a solved problem: run your own CA (or delegate to someone, like Userify).

There are still real issues with managing root keys etc -- but with cert support in openssh (client and server) that provides a sane basis for trust management.

For Userify the service (as opposed to the problem: keys, auth, servers and users) -- I think one very sensible approach would be to spend some time on a UI to manage certs/CAs for ssh, document how to use it for user keys/certs, and offer to run a CA per Userify account (ie: generate/manage the root CA cert, allow "instant security" by deploying the Userify user's CA root cert on servers, and provide a service to sign public keys with that key).

There might be some added benefit to an "always on" service issuing short-lived user certs (valid for, say, 8 hours, a la Kerberos tickets) -- but for that to be useful, some work would perhaps be needed to fork PuTTY for Windows users and/or patch ssh-agent/ssh for *nix/OS X users. The basic functionality is there, but the UI/UX is lacking. Note that a user can keep the secret key for as long as he/she is comfortable -- it's only the signed public cert that needs to be updated (as the previous one expires).

Not sure if that answers your question, but those are some of my thoughts on the matter :-)

[1] http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man1/...


Excellent thoughts!! This is definitely the sort of problem we're trying to solve. Certificates get us part of the way there in terms of solving the authentication problem, and then we can continue to manage authorization separately.. something to think about.

btw, please feel free to become a user ;)


Note that standard ssh certs and something like Userify should combine well. From the man page above, we note that a cert (the signed public key of a user) can include principals -- that is, user ids and hostnames (as well as originating IPs) -- and the cert will only be valid for those combinations.

So if a service provides users with "session tickets" (eg: certs valid for 8 hours, or 5 days, etc) -- the servers need not call back to Userify. Authorization (for at least part of the use cases) becomes decentralized. Add an API for extending the validity of a cert (the client does a lookup, Userify checks policies, and if everything is OK, signs the key for a new validity period) -- and everything should work very nicely indeed.
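Issuing such a "session ticket" is a one-liner with ssh-keygen (the user, CA and file names below are hypothetical):

```shell
# Sign alice's existing public key, valid for 8 hours and only for the
# principal "alice" (the Unix account she may log in as)
ssh-keygen -s user_ca -I alice@example.com \
    -n alice -V +8h id_ed25519.pub
# -> id_ed25519-cert.pub; when it expires, only the cert is re-signed,
#    the underlying keypair stays the same
```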

For those interested in ssh certs in general, see also:

https://github.com/cloudtools/ssh-ca

As for the userify shim, even though you've answered my question: https://github.com/userify/shim/issues/4 -- you might still want to include a license file, or something along the lines of: http://www.tarsnap.com/legal.html in the readme.

Might also want to have a free/open licensed spec of the api somewhere, so that if someone were to implement a self-hosting, free/open service that could be made api-compatible with userify, there'd be a clear way to do that legally.


> generated on first boot of the deployed template!

I've always been uncomfortable about that. How much boot time entropy can a VM have?



Depends on your OS :-)


I believe Vagrant takes care of that for you.


On the same topic, what are the other pitfalls of re-using OS images?

I can think of:

- Hostname conflicts

- SSH key duplication

- Driver issues if the hardware is not the same

Are there more?

This seems like a compelling argument for bootstrapping new instances with configuration management, rather than trying to re-use OS images.


Speaking of hostname conflicts: an ISP in Vietnam has configured all their devices to have a reverse DNS name of "localhost"

https://www.shodan.io/search?query=hostname%3Alocalhost


I've experienced this too - I get a few people joining my gameserver from that ISP, and I noticed the DNS appeared as 'localhost'. Not ideal, and I imagine it could break certain things. I always assumed it was a misconfigured internet cafe or something. 'Nice' to know it's a wider issue.


MAC addresses. HP-UX saves them in its startup configs for some reason. A place I worked at used a sample build for its 'gold' build. All builds had the same MAC. Hilarity ensued.


MAC Address duplication on virtual interfaces for VMs started from a template.


No (mainstream) Hypervisor does this and you'd run into networking problems pretty quickly if any of those VMs share a network.



Never seen this in years of using VMs


An alternative to typical configuration management (puppet, ansible, etc) is making your own OS installer. I've adapted the Ubuntu installer to turn a blank machine into a running instance of our app in just a couple of days. The client just has to pop-in the CD or flash drive, answer a couple of questions, and go grab a cup of tea while it installs.


>in just a couple of days

Days?! That is a crazy-long deploy time.


No, I meant it took me a couple of days to make the installer :-)


.. here I thought you were talking about the normal speed of cloudformation to boot up EC2 instances..


You jest, but our previous "cloud" provider (which we used in 2012/13) took more than that to provision a bog standard Linux VPS - and they often got the distro wrong!


Even things like /etc/machine-id


shared inode number


Another implication of this is that if a security flaw is found in a re-used image, it's trivial to write a worm, as it can very easily seek out and find vulnerable hosts.

Ooops.


This is particularly problematic for virtual machines, which are starved of entropy (no sensors, audio device, etc.).

I have VM images begin without ssh host keys and have a service dependency on haveged: http://www.issihosts.com/haveged/ and http://security.stackexchange.com/questions/34523/is-it-appr...


For QEMU 1.3 and later, there is virtio-rng, which lets VMs attach to a hardware RNG on the hypervisor and feed it into the guest kernel's CSPRNG. This should be enabled in images by default, so on first boot each VM has enough entropy for key generation.
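A minimal sketch of the QEMU flags involved (combine with your usual disk/net/memory options; the guest also needs the virtio_rng driver):

```shell
# Expose the host's /dev/urandom to the guest as a virtio RNG device
qemu-system-x86_64 \
    -object rng-random,id=rng0,filename=/dev/urandom \
    -device virtio-rng-pci,rng=rng0
```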


On Ubuntu, pollinate solves this problem: http://blog.dustinkirkland.com/2014/02/random-seeds-in-ubunt...


Not my domain of expertise; can someone explain why this is problematic?


If you can read the private key from just one of these machines then you can impersonate any of them.


That is: MITM SSH connections to these devices without getting any warning. Of course you first have to get in a position to MITM the person who connects to these devices.


Also if the underlying cryptography isn't forward-secure, anyone with the private key can go back and read your recorded SSH sessions, including any kind of secrets that you typed into them (or files that you downloaded with scp or rsync). I don't know under what conditions SSH uses or doesn't use forward-secure key exchange methods.


SSH always (to the extent of my knowledge) uses Diffie–Hellman to generate session keys, and regenerates them at periodic intervals, and so always has forward-secrecy. This only affects the ability to MITM new connections.
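The forward-secrecy property is easy to see in a toy finite-field Diffie-Hellman (parameters far too small for real use):

```python
import secrets

# Toy DH; real SSH uses 2048+ bit MODP groups or elliptic curves.
p = 2**64 - 59          # a small prime modulus (illustration only)
g = 5

a = secrets.randbelow(p - 2) + 1    # client's ephemeral secret
b = secrets.randbelow(p - 2) + 1    # server's ephemeral secret
A, B = pow(g, a, p), pow(g, b, p)   # the only values sent on the wire

# Both sides derive the same session key from the other's public value.
# The host key merely SIGNS this exchange, so stealing it later lets an
# attacker impersonate the server but not decrypt recorded sessions.
assert pow(B, a, p) == pow(A, b, p)
```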


That is great, thank you for the information.


I was surprised to learn that you can't man-in-the-middle an SSH connection if the client is using a key for authentication instead of a password:

http://www.gremwell.com/ssh-mitm-public-key-authentication

You'd still be able to impersonate the server, but that's less useful in the general case unless you can emulate the remote machine convincingly long enough to gather useful information.


I'm not sure what you say is true. On its face, it doesn't make any sense: if you have completely MITM'd the network connection, and can pass along traffic from each party to the other, I don't see how client key auth would prevent a MITM.

Even the thing you link to actually says:

> The algorithm itself does not protect against active MITM attack, but it makes it impossible for MITM attacker to influence the choice of the shared key (and by extension the session ID) by the victims.

Does not protect against active MITM attack. It _does_ keep an attacker from influencing choice of shared key, but an active MITM attack is still in the middle and doesn't need to influence choice of shared key to mount many many kinds of attacks.


The pubkey authentication will fail because the client's signature covers the session ID, which is derived from the key exchange. If there's a MITM going on, each side will see a different session ID, so the signature won't verify at the real server. Check out RFC 4253 (transport layer) and RFC 4252 (authentication).


I'm confused by this. I know HTTPS is supposed to fix this problem too, but I don't understand how. If an attacker fully replicates an SSH server, responds to all the client's messages correctly, and then relays the victim's commands to the real SSH server while acting as a full SSH client, isn't there still an issue?

The attacker just has to authenticate with the correct signature to each, but if it's an attacker, why would it just pass along the victim's data anyway? The whole point of an MITM is that you can modify data before it hits its target.


In the hypothetical scenario the attacker doesn't have the client's private key so it can't authenticate to the server. It can pass along the session key from the server but then it won't be able to read the data.


Ah, I get it now. It's not just the authentication step, it's the fact that the data is encrypted so only the original client can read it? That's right?


From the very first paragraph of that link: "Should ... an attacker manage to steal the private key ... the connection becomes vulnerable to active man-in-the-middle attacks".

Thus, all these machines with duplicate keys are VERY vulnerable to MITM, because anyone with access to one of them has access to the private key.


But in the second paragraph:

"there are no tools implementing MITM against an SSH connection authenticated using public-key method ... Being pressed to produce a PoC for this attack, I have attempted to implement it only to discover it is quite impossible and here is why."


What about "personal" private keys? I find it a hassle to add new keys for every device to each host I ssh into, so I usually just end up copying them from device to device..



Are you talking about SSH host keys? Those are normally generated automatically, and should be unique for each server.

(These are the things you're asked to verify the first time you connect to a new server.)


Sorry, meant my key, the one I use to connect with and have added the public part of to numerous servers.


There's nothing wrong with adding your public key to many servers, as long as you keep the private key secure -- almost everyone does this. The article is discussing sharing of SSH host keys, which allows impersonation of an SSH server.


That's what Userify is for... don't copy your keys around by hand! :)


SSH agent forwarding should eliminate the need to do that.


I'm not sure how I feel about Shodan. Their statistics are interesting, but apparently the primary use is finding vulnerable systems and matching exploits?

This is the list of most popular searches: https://www.shodan.io/explore


Given the purpose of the key, this doesn't necessarily seem like a security flaw. If individual owners ssh on to the system, then it is a flaw, but if the primary purpose is to allow system updates, this seems like a perfectly reasonable approach.


This was also the case with Digital Ocean's droplets on Debian for a time (they were re-imaging without re-genning keys.) It's not as uncommon as it sounds.

Userify (cloud ssh key manager) is getting a feature to force re-generation of server-side keys across an entire cluster remotely.


I'd suggest that a larger security issue is that the 250,000 hosts are unnecessarily running SSH open to the world. A MITM issue doesn't matter if no one's using the connection!


Dumb question: why not have all processors contain a hardware RNG based on thermal noise or some such thing? That way even VMs can get some stable entropy.


Great point. Intel's and AMD's black-box RNGs (RDRAND etc.) do provide entropy... but they're thought to be compromised (which would be nearly impossible to detect). At least other firmware attacks would/should/might be detectable. By definition, RNGs produce unpredictable output -- detecting predictability is much harder than detecting other potential attacks (versus, for example, the sum of two numbers, which should always give the same result).

So FreeBSD and Linux both use hardware RNGs as entropy inputs and mix in other sources as well (which hopefully mitigates any loss of available entropy and also adds in other believed-good sources such as timing, network traffic, etc).

Ubuntu is using a new network-attached source of entropy which is itself constantly reseeded with the network traffic used to access it. (There's some inception joke there somewhere..) Of course, you may not be able to rely on the SSL/TLS connection that you're using to access it, so you might be seeding with an attacker's steady stream of 1's...

Of course, getting access to real hardware entropy in a hypervisored or virtualized cloud server/instance is the second part of the problem.

The third part is to make sure that SSH server and client keys are distributed properly. That's what Userify is for, but it really only helps with the client/user keys.. it doesn't help with server keys. (Yet?)



