It's good someone finally made the effort to write this up. When discussing some security-by-obscurity measure, other people almost invariably come up with the "security by obscurity is bad" slogan without having spent much thought on /why/ it is bad in that instance. Now this piece can be conveniently linked.
Somewhat differently, sometimes security by obscurity is confused with using a proper key. For example, with most webservers you can pretty safely restrict access to a file by giving it a filename containing some key that cannot realistically be guessed. Barring other ways in which an attacker could find out about the presence of the file – which is quite easy to fuck up in some environments, and is why you shouldn't do this if you really want to be secure – someone without the key cannot access the file, and thus it shouldn't be called security by obscurity; the method is public, the key is not.
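To illustrate the point (a minimal sketch, not anything from the original comment): the only secret here is a high-entropy token baked into the filename, which is functionally a key even though it lives in a URL.

    import secrets

    # Generate a token with enough entropy that guessing the filename is
    # impractical; the serving method can be public, only the token is secret.
    token = secrets.token_urlsafe(32)        # roughly 256 bits of randomness
    filename = "report-%s.pdf" % token       # e.g. report-3nFb...Qk.pdf
    print(filename)

    # Caveat from the rest of the thread: the token still leaks through
    # directory listings, referrers, proxy logs, browser history, etc.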
> For example, with most webservers you can pretty safely restrict access to a file by giving it a filename containing some key that cannot realistically be guessed.
What I dislike most about that is that it's using traditionally non-key material as the key. Systems you're interacting with will probably treat that non-key material as not-a-key.
What should be said is that security only through obscurity is bad. Security through obscurity is fine as long as you're not relying on it. At the worst, it doesn't hurt, just don't count on it to save your bacon.
I do agree with the article, but at its worst, obscurity can hurt. Obscurity can add complexity, and it's said that "complexity is the enemy of security".
For example, sysadmins and maintainers could lose track of which layers are security and which are obscurity. In turn, this leads to decisions that compromise security because the maintainers overvalued the obscurity layers.
Not just added complexity, but an added false sense of security. The moment you consider obscurity even a cursory part of your infrastructure, you are relying on it, by the very nature of counting it as even a fraction of your total security.
It is only reasonable to consider when it's both insignificant in the total security system and you have the willpower of a monk to resist ever considering it valuable.
Yep, complexity, the false sense of security you get from an inflated judgement of what it gives you, and the 'opportunity cost' of time you spent devising, implementing, and dealing with it instead of some actual secure protection.
It's more like the parts that have to be secret should be small and easily changed (passwords, keys, etc.) rather than your protocols (which are going to be very hard to change).
There's no harm in a layer of obscurity over a secure system, but there's a social problem where it's more often used to obscure the workings of insecure systems than anything else. So all things being equal, one would prefer the cipher that's been well-studied by academics without being broken vs. the mystery cipher that claims to use all the most secure cryptographic constructs without giving anyone the details.
Obscurity almost always drives script kiddies and generic bots away, so the cost is amortized by the resources you save, which you can focus on real threats.
Exactly. Maybe a better way to look at its benefits is by cracking "security" into two components: scalability resistance and penetration resistance.
Scalability resistance is the probability of a given vulnerability being successful against your configuration. Penetration resistance is the probability of your configuration being penetrated by an attacker focusing on it.
Scalability is a valid security threat as it defines the vulnerability time window due to scarce attacker resources (attack attempts/time unit). Yes, it does and should fail catastrophically -- either the attack is not applicable to your config or it has been adapted and is.
But just because plate armor had gaps didn't mean it was a stupid idea...
I totally agree with that. If you do this kind of thing, you have to be absolutely sure that it's never used as an excuse for not having actual security.
Just came in from grading a bunch of student-made systems. Several of them were proud of having their passwords secured in the database - by storing unsalted SHA1 or MD5 hashes.
It's like going through a checklist of desired features, finding "security" on it and doing any single half-assed thing to check that box.
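For anyone grading the same mistake, here is a minimal sketch of the difference (standard library only; the parameters are illustrative, not a recommendation):

    import hashlib, hmac, os

    password = b"hunter2"

    # What the students did: a fast, unsalted digest. Equal passwords hash
    # identically and precomputed tables apply directly.
    weak = hashlib.sha1(password).hexdigest()

    # Salted, deliberately slow derivation: per-user random salt, many rounds.
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

    def verify(candidate, salt, stored):
        test = hashlib.pbkdf2_hmac("sha256", candidate, salt, 600_000)
        return hmac.compare_digest(test, stored)   # constant-time compare

    print(verify(b"hunter2", salt, stored))        # True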
I'd take that a step further and say "defense not in depth is bad." Don't rely on any single method (mathematical, steganographic, misdirection, anything) if you want to be really secure.
For example, with most webservers you can pretty safely restrict access to a file by giving it a filename containing some key that cannot realistically be guessed.
While it might feel like your security is the unguessable filename, what you're actually relying on is the server not listing the directory's contents - which in turn relies on every service on the machine with filesystem-level access to that directory being secure. If an attacker gets access to any of them then they can get to your file.
In other words, while the key is not public per se, it's very easily attacked through unrelated services. That is a problem.
Add to that other systems which treat URLs as data with little security value - e.g. proxies or browsers that record visited URLs and may allow other people to reconstruct the filename.
As with the ssh example, the vast majority of attacks (I mean in terms of sheer numbers) are dumb, automated and done in bulk: e.g. just requesting the file "admin.php" on any website, or trying to log in with user "admin" and common passwords.
If you don't have these files/logins then from a theoretical point of view of securing "every service on the machine" you haven't done anything; but as the 18,000 : 5 ratio in the article shows, you are in fact safer by halting the vast majority of naive attacks at that layer. And at a fairly low cost.
"someone without the key cannot access the file, and thus it shouldn't be called security by obscurity; the method is public, the key is not."
It's not a key. Just like a username is not a password, it's an identifier, and as such cannot be replaced.
If you are ever going to share the url with anyone, they'll have to be notified manually when the url changes, and they can't share the url anywhere for fear that it'll be found out by bad actors.
Compare that to an authentication scheme where every user has their own credentials to remember, and the identifier can be freely shared without worrying that unauthorized individuals can access it.
Reminds me of the Slack attachments vuln a while back. IIRC, the data was public under some hash which is (realistically) impossible to guess. Except it wasn't for Reasons (tm), which made it bad.
Yes, there always has to be some separation between your secrets (keys, passwords, etc.) and the non-secret parts (methods, protocols, etc.).
The point of avoiding security by obscurity was that people were always tempted to make the methods, protocols, etc. secret, rather than a few small, easily-changed bits like passwords.
Because you can't very well go around reprogramming entire systems, this means that the security fails in a very bad way when compromised as compared to other systems where you just reset the passwords/keys/etc.
> with most webservers you can pretty safely restrict access to a file by giving it a filename containing some key that cannot realistically be guessed
In my experience (yes, I've actually experienced this, multiple times), it almost always ends up indexed by google anyway, if it's on the public web.
I guess a robots.txt can go far with regard to google specifically.
And I guess that is essentially what Dropbox 'private links' (without the password protection you get with a paid account) are, sure. And they seem to work... okay. But I sure wouldn't rely on anything particularly sensitive remaining private behind an unsecured Dropbox private link.
Google's anti-SEO-gaming measures have a lot of "security through obscurity" aspects to them, so it's not as though the technique lacks valid real-world examples of usefulness.
That's a good point. If they were open on how they measure page rank though, it would force them to more rapidly improve their algorithm. Right now, it seems people slowly figure out how to game it and Google have to change the rules again (e.g. hidden content, follow/nofollow links, guest blogging).
If they showed their hand, it'd just become a race to utter chaos. People would use the recipe for lousy sites. And then Google would have to redefine things over and over and over (this is allowable and that's not, and that's no longer allowable, except like this and that, except when this...).
This slows down the evolution rate significantly. And gives them plausible deniability.
Spam filters and anti-virus vendors seem to cope OK with having their algorithms understood though. It's always going to be an arms race but I see your point that using obscurity slows down the need to refine earlier. It'll be interesting to see what the final stages of the algorithm are as it converges on a proper definition of what relevant content means.
I've often felt there was an interesting discussion to have about obscurity. It's interesting because although the infosec community has such a strong reaction to the phrase "security by obscurity", it has a very different reaction to the phrase "obfuscation".
I'd like to propose a distinction between the two:
- "security by obscurity" would refer to the non-declaration of configuration and parameters
- "obfuscation" would refer to the deliberate fuzzing of statistical means by which such configuration and parameters can be discovered.
The distinction is admittedly fuzzy, but the idea is that "security by obscurity" is trivially defeated by treating the target system as a statistical problem. If said system can be repeatedly probed, then useful information can be gleaned. With obfuscation, one is deliberately injecting noise into the signal that the attacker is trying to analyze.
What do you all think? Is this a useful distinction? To be perfectly clear, I don't mean to imply that obfuscation is a sufficient security mechanism, but rather:
1. that obfuscation is more useful than plain-old obscurity
2. we should understand that infosec attacks are often statistical in nature
Regarding point 2, I think this ends up shedding more light on what, exactly, is wrong with "security by obscurity" as it's often debated.
On point 2 I've recently come to the conclusion that I can't worry about every damn thing. I follow all the ITSec news pretty closely and there's just SO MUCH WRONG with everything (or at least it feels like it). Security researchers seem to love to freak out about everything (it's their job after all) but I just don't have the time. If something is going to be a likely attack, easily done, remotely, today or next month, then it's something I need to worry about. If this is a theoretical attack based on something happening in the room next door I need to shrug it off, for now. There's many many things that fall in between those two extremes, and that's where I'm afraid I'm not sure what to worry about sometimes.
I'd like to follow up on your post because you raise a lot of good points. In particular, we have a problem in infosec whereby "security" is implicitly equated to "security with cryptographic guarantees".
I understand that "cryptographically secure" is the gold standard for infosec, but this misses a certain reality on the ground. Often times, your system doesn't need to be Fort Knox: it just needs to be a more difficult target than the other guy's system.
I think this is where obfuscation comes in. With regards to well-known vulnerabilities, clearly cryptographic-grade solutions are required, but the best defense against zero-day attacks is to make your system hard to probe for odd behaviors. Why waste time doing statistical analyses on a system in the hopes of extracting a faint signal when another system literally responds to a port scan?
I think the camouflage analogy in the article is a good one. Hardening your asset is necessary, but hiding it is even better.
In my former military days, we were always told that "concealment is better than cover". It's hard to pull a flanking maneuver on a target you can't locate, but trivial to do so against a target you simply can't hit. I wouldn't go as far as to claim that the same is true in infosec, but it stands to reason that there's a place for concealment.
I think camouflage and decoys are useful analogies.
I run ssh on non-standard ports. Not because I think it makes me any more secure, but because it allows me to either ignore all the automated scans/connections to port 22 altogether, or to proactively blackhole any ip address who attempts to connect to a decoy port 22...
It doesn't mean I don't continue to need to do all the things I need to keep ssh secure as well - keep my sshd updated, disable password auth, shut down root connections, etc...
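For what it's worth, the decoy idea can be sketched in a few lines (this is a toy, not the parent's actual setup; it assumes the real sshd listens elsewhere and that something like your firewall or fail2ban consumes the blocklist file):

    import socket

    DECOY_PORT = 22   # nothing legitimate should ever connect here

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", DECOY_PORT))   # needs root to bind below 1024
        srv.listen()
        while True:
            conn, (ip, _) = srv.accept()
            conn.close()                     # never speak SSH, just note the peer
            with open("blocklist.txt", "a") as f:
                f.write(ip + "\n")           # hand this to ipset/fail2ban/etc.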
>I think camouflage and decoys are useful analogies.
Indeed I've found that most military analogies work quite well for infosec!
>Not because I think it makes me any more secure, but because it allows me to either ignore all the automated scans/connections to port 22 altogether, or to proactively blackhole any ip address who attempts to connect to a decoy port 22...
This is another excellent point. Not only are we fuzzing the signal with respect to automated attacks, we're also improving the signal-to-noise ratio in our own security analyses.
>I cringe every time someone says "it will never happen" or "it is too hard to exploit".
I think you misunderstand me. I'm not saying "it's too hard to exploit"; I'm saying "this helps me keep a low-profile and makes me a less appealing target". It's primarily a social-engineering hack in the sense that for all but the most targeted attacks, attackers choose easy targets.
By analogy, I'm suggesting you not walk alone in a shady alley at night.
>But will you still detect the problems on your security make-up if you mostly rely on the fact that the vulnerability is not seen?
This is a valid and interesting question, imho. I would reply with the following questions, in trying to determine whether the cost of obfuscation outweighs its benefits:
1. do you really have the means to continuously audit your attack surface?
2. what is to prevent you from rolling back your obfuscation mechanisms for a periodic audit?
3. can you configure things such that your obfuscation layer only points to external traffic (thus allowing for continuous auditing via some private port)?
4. more to your point: you seem to be implying that you'll never have a configuration error on your system. Isn't it true that obfuscation can potentially guard against the occasional glitch?
There is a definite trade-off involved in obfuscation, but unless you're one of those (admirable!) people who use the best cryptography to privately discuss the best cryptography, I think the judicious use of "camouflage" generally improves security.
>Often times, your system doesn't need to be Fort Knox: it just needs to be a more difficult target than the other guy's system.
This is, of course, until someone actually wants to hack you.
>but the best defense against zero-day attacks is to make your system hard to probe for odd behaviors
This really isn't in line with the previous statement I quoted; it's rather unlikely that you'll get targeted with zero-day attacks unless someone is actually trying to hack you.
In contrast, I think they're getting progressively more useful in light of an ever larger and more automatedly hostile world network.
I imagine in the 1970s there weren't a whole lot of hacks that began with "scan the entirety of ARPANET to find a few vulnerable machines, then apply vulnerability."
>I imagine in the 1970s there weren't a whole lot of hacks that began with "scan the entirety of ARPANET to find a few vulnerable machines, then apply vulnerability."
My point was that the economics that justified 80s worms didn't exist in the 70s and earlier. There simply weren't enough systems networked to make an automated attack preferable over a bespoke, single-system approach.
And extrapolate that change into the future with more and more machines of standardized config being connected to the internet... and automated attack and use becomes more and more attractive. Which means prevention of automated attack should become more useful.
It seems here that the author has a different definition of security through obscurity to many who criticise it.
The important aspect is whether the knowledge required to break the system is the system itself, or the key. AES is not security through obscurity because everyone can look up how it works, and it's only the key that we have to keep secret. Port knocking is the same (just with a weaker key). Moving your SSH to port 24 however is security through obscurity because if anyone knows the mechanics of how you use it, they can use it.
This differentiation is subtle, and can be somewhat of a grey area sometimes, but I find it good at differentiating between the two most of the time.
Couldn't you argue that the port SSH listens on is a 16 bit key? It's small and easy to brute force (among numerous other ways it is easy to discover) but I don't really see the distinction you are trying to make.
At the information theoretic level, isn't everything? The only thing that makes this more like a 'key' is that there is a simple test for it. More complex configurations of obscurity are not really brute-forceable, just discoverable.
> Moving your SSH to port 24 however is security through obscurity because if anyone knows the mechanics of how you use it, they can use it.
An attacker still needs to authenticate against SSH on that non-standard port. The whole world could know about it, but the security of the system is not greatly impacted.
If you're running a real service behind that port, then those "helpful automated test systems" aren't helping you since either they don't succeed and don't do anything, or they succeed and hurt you.
If you're running a honeypot behind port 22, well, that works well with putting the real service on another port or behind port knocking.
I agree that obscurity has value in security designs --- security is in large part about imposing asymmetric costs on attackers.
I do not believe that port knocking and single-packet authentication are good examples of this; both are sort of textbook examples of cosmetic security mechanisms that are made irrelevant by SSH public key authentication.
"I do not believe that port knocking and single-packet authentication are good examples of this; both are sort of textbook examples of cosmetic security mechanisms that are made irrelevant by SSH public key authentication."
If there's a remote-root buffer overflow for sshd that does not involve login at all, doesn't hiding the sshd add quite a bit of value?
That is, there are attack vectors that we know exist, and that we have seen in the past many times for server daemons, that occur before, and override, any authentication mechanism.
By hiding the service, you take yourself out of that population of low hanging fruit ...
You don't find that valuable? I personally find it extremely valuable and consider port knocking to be one of the very few security optimizations that has such low cost/complexity for such a pronounced and well defined gain. I would not think of deploying an Internet facing system without port knocking on the daemons that don't need public access.[1]
If you believe that sshd might harbor a preauth RCE --- something that hasn't happened in a very long time --- then deploy real security in front of it: stick it on the other end of a simple encrypted tunnel.
ROT13 would stop an attacker peeking over everybody's documents, one doc per second, in search of something interesting to actually devote time and resources to.
You have to decide if the tradeoff between a thousand bot's scans and a sole eavesdropper noticing the knocking pattern and marking your host as "(incompetently) trying to hide" is really worth it.
One commenter on a previous iteration of this argument said that moving the port (not port knocking per se) made the access logs quieter, and that was reason enough for their use.
Edit: heh, same thing is going on a few comments down :)
There are important differences too though (beyond the fact that SPA is not encrypting/decrypting traffic for SSH itself). SPA is a UDP authenticator so it cannot be scanned.
No, SPA is the first picture: both SPA and OpenSSH are directly responsive to attacker communications. I don't think "attack surface" is the dispositive argument here (the fact that SPA doesn't protect OpenSSH connections at all is), but either way: SPA is inferior to spiped.
Not exactly. OpenSSH gated by SPA can only be interacted with by an attacker that can either hijack an SPA-authenticated connection, or is on the same network as the SPA client if the client must go through a NAT. This is a fairly limited set of possible attackers. For those not in this set, how can they interact with OpenSSH without first breaking SPA?
It protects you from every zero day except the very limited set of preauth zero-days.
Look: it is legit to be paranoid about ssh zero-days (I am not, partly because that vulnerability is so valuable that I am very far down on the food chain for it). But if you're paranoid about those, be serious about them. Put an encrypted tunnel in front of sshd, not some goofy authentication scheme.
Remember: if there's a preauth sshd zero day, the SSH connections themselves are attack vectors. TCP is easily hijackable.
Sure.. but now you have to worry about vulnerabilities in whatever software you are using for that. Other than maybe something like a locked down spiped I wouldn't trust anything more than ssh in the first place.
From my perspective the nice thing about something like the 'simple non cryptographic knockd' was that other than decoding ip packet headers it did not read any data from the network.
> if there's a preauth sshd zero day, the SSH connections themselves are attack vectors. TCP is easily hijackable.
Hah, I am not that paranoid :-)
My main concern was always this sort of scenario:
I am on vacation with no internet access, an ssh (or openssl or openvpn) 0day is released, patches are not available. Do I come back from vacation to a compromised host? If I had not been using port knocking when the debian ssh key issue happened, I would have had multiple hosts compromised. So, you can call it goofy, but it made a very real difference in my experience.
I had a machine compromised in 2001 by script kiddies and worms scanning the internet on port 22 and compromising any host they could find with CVE-2001-0144. Not by attackers hijacking TCP connections.
In 2016 I am worried about a 0day being released and script kiddies or a worm compromising every host they can find running sshd. This effectively happened to debian systems in 2008. I don't know how you can say that it doesn't make sense when it happened.
I just don't understand the threat model that gives attackers the most valuable RCE vulnerability in a decade, but denies them an ability attackers have had since 1996.
Port knocking does not protect you either. A dedicated attacker can wait and discern your knock pattern and if you have an unpatched server, attack it.
In fact, it can give you a false sense of security from such 0-day exploits.
For most of us, the threat is the exact opposite of a dedicated attacker. It's an opportunist (or automated system) that doesn't need to compromise every system on the Internet. It just needs to compromise enough. You can compromise enough for almost any purpose without bothering to get past port knocking schemes.
I wouldn't focus on "security" when it comes to port-knocking.
There are benefits beyond security: for example, fewer strange login attempts mean less clutter in your logs. If you're actually trying to investigate login attempts (don't!), it means less time wasted.
The "security by obscurity" screaming people are usually thinking too narrowly, in my opinion.
In discussions of port knocking and SPA among my security friends, it always comes down to this argument: cleaning up the logs. Why not just alter the way SSH logs? Perhaps it can just log once every couple of minutes, "yes, people are still probing this server".
Running sshd on port 22, seeing repeated authentication failures in the logs. conclusion: business as usual for sitting on the internet. signal to noise ratio: almost 0.
Running sshd on port $RANDOM, seeing ANY authentication failures in the logs. conclusion: someone is specifically targeting me or my network, I might want to pay some attention to this. signal to noise ratio: 1.
Running on a random port isn't the same as using port knocking; if you're on a random port you'll get hit anyway (I think it falls under 'business as usual for sitting on the Internet'). Port knocking & single-packet authorisation, though, do provide some protection: if you get a failed login attempt via PK/SPA-protected SSH, then you know you're targeted (and also, your PK/SPA key is bad).
> if you're on a random port you'll get hit anyway
Nope. At least, not frequently enough to make a lucky guess more likely than a concerted attempt. Not one auth attempt since I switched the SSH port, and this is for a public-facing website, too.
They actually do, just somewhat later than targeted ones. No need to scan the whole of the internet either, just the subnets you haven't violated already. Or use DNS for major targets.
I've allegedly been involved in operating some of the largest *nix botnets ever, and certainly haven't seen anyone do that.
Scanning every single port on /0 simply takes way too much time even with millions of nodes, you're much better off improving your coverage over the common ports and catching new (and old) equipment coming online.
Of course, this wouldn't necessarily be true with the "preauth opensshd 0day" brought up in TFA. Such an exploit would certainly give you more than enough BW to do whatever you want, and break the internet while doing it.
I think your disagreement stems from a failure (on your part and the author's) to distinguish between targeted and dragnet attacks.
With regards to the former, I emphatically agree with you. If you've been singled out by a competent attacker, you're subject to his statistical probing of your system, which is bound to reveal most (if not all) of the configuration/parametrization info you're trying to hide.
In the case of dragnet attacks, however, I emphatically disagree. There is asymmetric cost imposed upon the attacker: time.
Your door doesn't have to be impenetrable; it just has to be stronger than your neighbor's.
I agree, for example: When a WordPress vulnerability is found, millions of sites are at risk. The large installation base is enough incentive for people to relentlessly scrutinize WordPress for vulnerabilities.
Your home-made CMS isn't going to attract attackers because the spoils are limited to one site, and the code isn't readily available.
That doesn't mean your CMS is more secure; it probably isn't. But it's going to take a focused effort to attack, and an exploit becomes less likely.
Even making minor changes to the Wordpress URLs from the defaults is enough to avoid getting hit by most (if not all) of the automated exploits I see on a day-to-day basis.
Why would you ever want to be vulnerable to targeted attack while only keeping dragnet ones at bay?
Also such measures do not help any kind of a system that has to be really exposed. Want to hide your http and mail server behind a port knocking mechanism? A rarely supported or proprietary TLS method only?
Maybe "dragnet" wasn't the best term to use. I'm not talking about the NSA, but rather about automated brute-force attempts (think: random attempted ssh connections from the internet).
With regards to that, not stopping automated attacks in general is fully independent of being a less appealing target for such attacks. The latter -- even on its own -- improves security.
No, it's an argument not to waste time with pointless layers that impose no significant costs on attackers. Surely, you can't believe that "defense in depth" justifies arbitrary layers, or you'd also advocate for ROT-13 tunnels.
No, but I disagree that running sshd on an alternate port is an arbitrary layer. It is PROVEN that it prevents the automated mass scanners from even attempting to login. This you admit yourself when you mention the reduction in log noise.
You are correct that a properly configured sshd has nothing to worry about with respect to these mass scanners. However, can we guarantee that we'll never make a configuration error? You're also correct that a pre-auth sshd vuln would be a HUGE deal. But can we guarantee one will never happen? Do you disagree that if a misconfiguration or a world-changing vulnerability were to happen, a little more time to deal with it would be helpful?
If I run something that is in any way vulnerable (whether via bug or misconfiguration) on port 22, it will be found within minutes (based on how often I see that port getting scanned today). Unless I'm being specifically targeted, an sshd running on an arbitrary port isn't going to get hit until/unless someone is running full port scans across the internet looking for it. I don't understand how you can argue that this is worthless. A measure that isn't useful against a determined and targeted attacker can still be useful. The adversaries that most of us face in the real world are looking for the path of least resistance, not access to our specific systems.
By the way, this is the same reason I lock my door even though there’s a glass sidelight right next to it. Any burglar who wants to get into my house can do it in about 5 seconds by breaking that sidelight. But that doesn’t mean there’s no value in keeping out the mischievous neighbor kids who are just rattling doorknobs and looking for a place to steal booze from.
I wouldn't think of alternate port as any kind of layer whatsoever. It defends against the horde of ants that are knocking on any open port 22. It does not defend against the kind of attack that you should worry about--that of a sophisticated, patient well funded attacker.
Moving from port 22 to $RANDOM will only spare you some annoying log bloat from things that you don't really care about anyway.
You're assuming everything is properly configured. A bug-free setup is essentially impossible.
The claim is that obfuscation makes it less likely that a mistake will be spotted by an attacker.
To be clear, I agree with you to the extent that a properly configured SSH server with good keys and no 0-day exploit is more effective than what I'm proposing. My point is rather that such a setup makes a lot of assumptions (key exchange, configuration, secure updates, etc...) and that obfuscation is a complementary security layer for when those things occasionally fail.
Come on, now I'm starting to suspect you're commenting in bad faith.
Just to reiterate, SSH was an example of the general problem that is misconfigured/buggy software. OpenSSH is certainly one of the most secure programs I'm aware of, but the same can't be said for things like wordpress.
Eventually, something gets messed up (if only momentarily), and being nonstandard makes you less susceptible to scripted breaches.
We can debate whether the tradeoff is worthwhile (and in many cases it's not), but it's a valid gosh-darn security layer!
Could you explain why single-packet authorization is cosmetic? This is the first time I've read about these, and while port knocking is clearly broken in multiple ways, I don't immediately see the flaw in SPA, except for the fact that storing hashes of previous packets and comparing against them is inelegant.
I can copy your knock and get access; I can deny you service by messing up your knocks by sending packets with a fake IP; and it is also insanely inefficient in the amount of data and time spent vs. the amount of information exchanged.
Who cares? Serious attackers have access to the actual TCP connection you make to sshd. If there's a preauth vulnerability in sshd, they'll just use that to deliver the payload.
Port knocking is silly. If you're scared of preauth sshd zero-day, deploy an encrypted tunnel.
Replay attacks don't work against proper port knocking.
Take, for example, knockknock-daemon. It sends just one SYN packet, where the packet's IP and TCP headers form an encrypted request to open a specified port. The port is closed after the connection.
Entirely possible, as I said this is the first time I read about this, so the port knocking I refer to is the kind described, which is using a sequence of ports as a key.
I don't know python but this implementation seems broken in the same way (or more precisely, under the same circumstances). I just realized, you don't even need to copy the knock. If you have access to the network traffic (which you would need to copy the knock in the first place), you also have access to the TCP sequence numbers. You can just connect from the user's IP after he performs the knock, or even let him connect and inject packets at some later point. This program seems to rely on the TCP authentication of IP address which is completely negated if the attacker can monitor traffic.
The goal of knocking is to expose running services as little as possible and do it in a stealthy way.
An observer watching packets has no way to know that the SYN packet transmitted is a port knocking request. Even if they know, there is no way to determine which port was requested to open.
I don't understand what you mean. How would you prevent the attacker from knowing the port, except by only sending the port knock, and then never actually connecting?
Let's say the attacker has no idea you're using port knocking and even somehow missed your port knock packet completely, but after that captures subsequent traffic. He will still see the sequence numbers in the SYN/ACK from the server which is all he needs. Once he has that, he is an equal party to you (the legitimate client) in that connection.
You don't need to be able to do a full MITM attack. You just need to be able to read the traffic, not modify it.
And if you believe this is an unreasonable assumption, why did you link me a program which says in its description "The problem with the original concept was that if your port sequence was observed by passive eavesdropping, it was easily replayable." ? It tries to solve this exact problem - a sniffing attacker.
I am just explaining why this program fails to solve the problem you claimed it solved in your original reply. Surely you can understand that.
Without knocking: the service is listening and accepting new connections on a well-known port. When a client makes a connection attempt, an attacker can replay the connection attempt and the server starts a handshake. Even without sniffing, an attacker can initiate a handshake just by requesting a connection.
With knocking:
The server/firewall opens the requested port for the client who knocks. That port is _not_ open for subsequent connection attempts. A simple replay is not going to initiate a handshake. You would need to block the client from sending.
It is irrelevant whether the legitimate client connects or the attacker does if the attacker is sniffing traffic. This is just how IP and TCP work.
If I am sniffing traffic, I can fool the server into thinking I am the client and do what I want, while fooling the client into thinking I am the server and make him happily drop the connection after he sent the initial packet I needed to find out the port.
Basically, in IP protocol you can say you are any IP address you want in your packet. There is no way for the receiver to tell whether you're lying[1]. Then TCP introduces sequence numbers -> if I say I am 1.1.1.1, but I am actually 2.2.2.2, the server will send a sequence number to 1.1.1.1, and expect that number + 1 in response, but as I am not 1.1.1.1 I will not receive his sequence number and be unable to reply with number + 1 - I don't know the number. However, if I am sniffing traffic, I will see the sequence number he sent to 1.1.1.1 anyway, and I will correctly reply with number + 1 while continuing to lie I am 1.1.1.1. So it makes no difference to the attacker whether the server is talking to 1.1.1.1 or 2.2.2.2 (attacker's IP) as long as he can sniff traffic. It also makes no difference whether the legitimate client initiated the connection or the attacker did. It is just packets, it doesn't matter who sent them.
I am not sure if I am understanding what you're asking correctly. Basically, you don't need MITM capability to pretend you're a different IP than you are. Lying about your IP is a basic feature of the IP protocol. And if you are sniffing traffic, it is also not in any way prevented by TCP.
[1] except in special cases when the sender claims something clearly impossible, like claiming to be an IP from a different interface (claiming to be 127.0.0.1 for example) than the one the packet was received on
If the client has already sent a connection request, a replay is not going to work if only one connection is accepted. You need to be able to block the traffic from the legitimate server, or be faster (how?) to get there first.
There is nothing magical about a "connection". It is not some kind of secure tunnel. There are only packets. The attacker can take part in the accepted connection that the legitimate host initiated and the server accepted, and pretend to be the server to the client and pretend to be the client to the server, as long as he has the sequence numbers, which he gets from passive sniffing.
SPA is applicable to arbitrary services - not just SSH. Essentially it is a lightweight UDP authenticator, and it can be applied to commercial VPN's, webservers, or anything else. Achieving asymmetric costs on attackers is easily achievable for many such services.
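A toy version of the "lightweight UDP authenticator" idea might look like the following (illustrative only; real SPA tools such as fwknop also encrypt the payload and keep strict replay caches; the key, address and port below are made up):

    import hashlib, hmac, json, socket, time

    KEY = b"pre-shared-secret"   # assumption: distributed out of band

    def make_spa(port):
        body = json.dumps({"port": port, "ts": int(time.time())}).encode()
        tag = hmac.new(KEY, body, hashlib.sha256).hexdigest().encode()
        return body + b"|" + tag

    def check_spa(datagram, max_age=30):
        body, _, tag = datagram.rpartition(b"|")
        good = hmac.new(KEY, body, hashlib.sha256).hexdigest().encode()
        if not hmac.compare_digest(good, tag):
            return None                          # bad MAC: drop silently
        msg = json.loads(body)
        if abs(time.time() - msg["ts"]) > max_age:
            return None                          # stale: crude replay limit
        return msg["port"]                       # caller opens this port briefly

    # Client side is fire-and-forget over UDP, so there is nothing to scan.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(make_spa(22), ("203.0.113.1", 62201))   # example address/port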
The real problem with obscurity as a security layer is that it complicates day to day work that you'd be doing with that system. Each new person needs to be constantly trained and reminded about your specific system quirks and configuration outside the standard norm before they can do work using the system. That has a real cost, and it is a cost analysis question.
That really depends on the use case. We too have sshd listening on a non-standard port and the overhead to tell people what to put into their .ssh/config is negligible.
We use SPA only on very few select servers that less people need to access and those just get the shell script they need to run instead of typing "ssh $host".
After about 2 years of running things this way: The time saved in checking system logs in a single week was much more than the time spent on "training" people in those 2 years.
There's a number of things I disagree with in the article, but it does have a few good points.
Here's what I disagree with and why:
- Portknocking. I've found from experience that it's far better to allow SSH access (for example) from only known IP addresses. Portknocking is far too easy to beat and really doesn't impede much.
- Non-standard ports. Sure if you're only interested in blocking bulk network scanners that limit themselves to known ports. Any manual scan or a solid in-depth scan is going to map every one of the lower 1024 ports, and possibly the rest depending on how interesting the target is.
- The Tank camouflage example. It all sounds fine and dandy until a maintenance crew roams the desert for 10 days looking for a tank they can no longer see. Same with security and IT... obscurity leads to lots of wasted time when newer techs try to diagnose things that aren't as they seem, and are undocumented. Not only that, but since the enemy knows that the new armour requires special ammunition to beat, they will just throw new ammo at everything that moves in case it is a tank. i.e. you're going to scan for hidden SSIDs, you're going to nmap every port, etc. Takes more time, but you still get in.
- If there's a 0-day SSH vector, it's getting owned no matter which port it's on unless your security team are on top of patching. What if the new-hire that's told to go patch all the SSH servers accidentally misses the undocumented one that's running on port 24? It also doesn't matter if there's 10x more hits on port 22 than 24. All it takes is 1. It's that simple.
I just don't think obscurity belongs in an environment where clarity matters so much.
> Portknocking is far too easy to beat and really doesn't impede much.
If you have to guess a random 3 port sequence in a 65k port space, how long will it take you to break? at 1 try of 3 ports per second I get almost 9 million years for exhaustive search.
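That figure checks out as a back-of-the-envelope calculation (assuming the three ports are drawn independently from the full 16-bit space and one full guess per second):

    combinations = 65536 ** 3                  # ~2.8e14 possible 3-port sequences
    seconds_per_year = 60 * 60 * 24 * 365
    print(combinations / seconds_per_year)     # ~8.9 million years for the full space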
Why guess when you can just sniff the network for the sequence?
Port knocking requires that the network you're knocking from is in fact as secure and trusted as the one you're knocking on. So there's really no point, as you could easily just limit SSH access to that network and save yourself all the bother and risk.
Well, yes, technically any amount of obscurity you add to your service increases your overall security.
So let's go. You make a 256 bit level key. That's 256 bits of security there. For each binary thing that must be guessed together with the key (let's call those multiplicative), you gain an extra bit there. So, if you could somehow (which you can't) not disclose your encryption algorithm, you'd gain some easy 3 extra bits of security there.
Now, if you have a secret that can be verified independently from your other secrets (let's call those additive), you'll gain one extra bit of security for every secret that has your overall security level. That is, if you add a 256 bit secure port knocking step (16 knocks on completely random ports) to your 256 bit secure key, you'll get 257 bits of security overall. If you add a 16 bit non-standard port, you'll get some fraction of a bit, starting with some dozens of zeros after the decimal point.
Thus, since security is all about trade-off, think very hard about the costs of any additive measure you want to create.
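A quick numeric version of that argument, as I read it (secrets that must be guessed together multiply the attacker's work, while independently checkable secrets merely add it):

    from math import log2

    key   = 2 ** 256    # work to brute-force the key alone
    knock = 2 ** 256    # 16 knocks on random ports ~ 16 * 16 bits
    port  = 2 ** 16     # one non-standard listening port

    print(log2(key * port))    # multiplicative secret: 272 bits
    print(log2(key + knock))   # additive, equally strong: 257 bits
    print(log2(key + port))    # additive, tiny secret: prints 256.0
                               # (the true gain is on the order of 1e-72 bits)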
I think you're on to something; it makes sense to define the 'key' to represent everything that is secret, making "security through obscurity" contradictory.
That said, the way you treated the additional information is somewhat disingenuous. It disregards that even though there might be 2^16 options, they're not equally likely; assigning the same amount of information to each doesn't really represent the situation well.
An alternative method would be to measure the information using the Kullback-Leibler divergence, which represents the information disadvantage a distribution has to the actual distribution.
In this case the attacker would believe the key to have a probability distribution, whereas the target knows the key exactly. In which case the expression for the KL-divergence is simply -log(p), where p is the probability the attacker assigns to the actual key.
When there are N options, all equally likely, this agrees with the usual notion of entropy. However if the attacker assigns a very low probability to some option (i.e. using a different port) then the effective key complexity could become quite high. If the attacker considers the key impossible (i.e. p=0) then the effective key complexity would even be infinite.
In the article they gave an example of 18,000 connections vs 5, which would effectively give you 12 extra bits of security (with very little effort).
I'm not entirely sure what the optimal tactic would be for searching a non-uniformly distributed key space, but from its definition the KL-divergence does measure the amount of information to determine the 'actual' distribution. I strongly suspect that there is no way to get around this (on average).
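For what it's worth, the "12 extra bits" figure falls straight out of the article's own numbers (18,000 hits on port 22 versus 5 on port 24 over one weekend), if you read the hit counts as the attacker population's belief about where sshd lives:

    from math import log2

    p_nonstandard = 5 / (18_000 + 5)     # share of attempts that even looked there
    print(-log2(p_nonstandard))          # ~11.8 bits of effective extra work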
This argument seems disingenuous. It's no great insight that port knocking reduces unauthorized login attempts. What's disputed is whether it's better to have a system that's rarely challenged vs. a system that's regularly challenged and still holds. The argument against is that adding layers of bad security makes it easier for problems in the "real" security to go undetected or ignored since the obscurity layer works too well.
I agree people confuse obscurity with "operational security".
Reducing your footprint, and keeping information disclosure to a minimum isn't obscurity.
On the more specific examples: while port knocking is an acceptable security mechanism (so is geoblocking, and even whitelisting specific IPs only), it's again not obscurity; putting connections on random ports is, and it can also be a very bad practice because obscurity often works both ways.
In a large organization, even one with great documentation and knowledge management, things will fall between the cracks. If we take their example, then while the attacker might be slightly less likely to identify a non-standard SSH port (because for some reason they assume that port scanning is expensive), it can also mean that your own security teams and tools can miss it just as easily.
When Heartbleed came out, for example, people ran it against the common SSL ports internally; pretty much all of it was patched rather quickly, but from time to time you still find an instance of it, and usually it's because some developer somewhere decided port 23216 is a good port for SSL.
Attackers are considerably more likely to perform a lot of information gathering on their target, more so than internal teams. Internal security tools often limit their tests to standard and specified ports, because when the security team has a 4-hour window each month to run their vuln scan, a full TCP port scan is probably out of the question on a network with a few thousand IPs or more.
By adding unnecessary obscurity to your system you are effectively only increasing the likelihood of you missing something while the attacker is just as likely to find it as anything else.
Heavily agree with this. I read people stating "security by obscurity is bad" without really thinking about it all the time.
For example, I think changing the SSH server port and WordPress login pages are a good idea because 1) if a hacker cannot find the login location in the first place, your chance of getting hacked must decrease and 2) the number of intrusion attempts being logged will be significantly less so you can more usefully survey these for targeted attacks.
Of course, relying only on obscurity is a terrible idea but you should have several layers of obstacles so if one fails the security violation will get caught at the next layer. Avoiding security through obscurity completely is more something you should do when designing a secure protocol or algorithm.
There is the concept of "Defense in depth", used by the military for many years...
This acknowledges that each layer is bound to have holes, therefore the best you can hope for is to delay the inevitable breach.
Therefore, any additional layer you can add to the security increases the time taken and hence gives you a better chance at rebutting the attack.
It's still pretty expensive for very little added security benefit, which the added complexity might negate. How many bits of security does this add? 5 or 6? Maybe 32?
People deploy ad hoc obscurity hacks like port-knocking because we have doubts about our standard access-control infrastructure (e.g. libssl, SSH). It would be less wasteful to take the resources being spent on these hacks and pool them together into projects that would boost our confidence in the infrastructure instead.
I can think of lots of security infrastructure projects that could probably use more resources:
- The various libssl projects could probably use more resources.
- There is at least one team working on formal proofs of implementation correctness for libssl.
- Sandstorm.io is developing ways to move web access control (including CSRF protection) to an intermediate layer, rather than relying on individual web applications to get it right.
- U2F is trying to bring cheap hardware tokens to web users, but it needs better support across the web.
- Let's Encrypt/ACME is working on making it feasible to deprecate plaintext HTTP. There's still a lot of distro integration work to be done.
- There's an IETF working group working on moving transport security into TCP itself (tcpinc). If it's done well (i.e. gets enough eyeballs), this could replace "libssl version hell" with straightforward socket file descriptors.
- Most developers have no idea how to develop formal proofs of implementation correctness alongside their code. Educational materials would be great, here!
- There are many, many legacy systems running still known-insecure software/protocols that need to be upgraded or worked around.
Moving the SSH port is a bad idea. The article falls into a number of common pitfalls here.
A connection to your SSH server does not usefully equate to someone "trying to hack" your server. If I had $1 for every time I've heard this complaint, I'd be very rich, but it's total rubbish. Stop worrying about it.
The mass scanners that fill your logs with brute force attempts on port 22 are looking for trivially obvious username/password combinations. If there is any chance they could actually get in with this approach, you've screwed up so badly that moving the port will not save you. If you're using key-based authentication, no amount of scanning is ever going to compromise your server, unless the attacker learns your key.
The reason moving your port is pointless is that against an attacker sophisticated enough to have any chance of compromising your SSH server, no amount of hiding the port is going to do more than delay them a few minutes. Compromising SSHv2 is hard. You need either a key compromise (difficult to achieve, likely some sort of APT malware against your laptop), or a zero-day sshd vulnerability. Against someone motivated and able to do that, your "clever trick" of moving sshd to port 24 is totally useless.
It's a bad idea to propose, too, as it will require users to either update SELinux rules to allow sshd to bind to a different port, or disable SELinux entirely. And as many of the ports <= 1024 are used for other things, users will tend to do things like bind to ports > 1024, such as 2222. As any system user can bind to ports > 1024, you're actually reducing security, and opening up a potential priv escalation vector (user with unpriv shell access causes sshd to crash via XYZ, starts their own malicious daemon on 2222, steals your credentials, etc).
There are cases where obscurity helps. Reducing the amount of information an attacker has about the environment they are attacking is often a very valid defence, and is something any security audit should consider. Moving the sshd port is not one of these cases.
In short, make sure you're using key-based authentication (ed25519 is preferred IMO, but happy for others to debate this). Make sure password authentication is disabled. Make sure direct root ssh is disabled, and passwordless sudo is disabled. Leave the port sshd listens on well alone - it's just fine where it is.
Let’s say that there’s a new zero day out for OpenSSH that’s owning boxes with impunity. Is anyone willing to argue that someone unleashing such an attack would be equally likely to launch it against non-standard port vs. port 22? If not, then your risk goes down by not being there, it’s that simple.
Also, from the article, please note paragraphs like:
"I configured my SSH daemon to listen on port 24 in addition to its regular port of 22 so I could see the difference in attempts to connect to each (the connections are usually password guessing attempts). My expected result is far fewer attempts to access SSH on port 24 than port 22, which I equate to less risk to my, or any, SSH daemon."
I was pointing out that this is silly. Those "attempts to connect" are totally meaningless, as I explained.
I guess I just don't understand your complaint. The author makes it pretty clear that this is an additional layer on top of already-good security. So it only delays a sophisticated attacker for a few minutes, maybe, but maybe that's enough delay to get that attacker to move onto an easier target. In any case, I don't think this is meant specifically for sophisticated attackers targeting your machine specifically.
The author also posted some data:
"I configured my SSH daemon to listen on port 24 in addition to its regular port of 22 so I could see the difference in attempts to connect to each (the connections are usually password guessing attempts). My expected result is far fewer attempts to access SSH on port 24 than port 22, which I equate to less risk to my, or any, SSH daemon.
[ Setup for the testing was easy: I added a Port 24 line to my sshd_config file, and then added some logging to my firewall rules for ports 22 and 24. ]
I ran with this configuration for a single weekend, and received over eighteen thousand (18,000) connections to port 22, and five (5) to port 24."
> Defence against sophisticated attacker is also good against a simple one
Sometimes there's no defense available, like in the case of a 0-day. In this particular case, defense against unsophisticated attackers and script kiddies will save your butt.
Obscurity is "bad" security because when you're told "how it works" it's usually very weak and easily defeated.
Pairing up obscurity with "good" security should be more of a "do the benefits outweigh the costs?" type of question and only each site can answer this question.
I sincerely believe no one, fingers crossed, argues that security by obscurity alone is good security. -.-
> Let’s say that there’s a new zero day out for OpenSSH that’s owning boxes with impunity. Is anyone willing to argue that someone unleashing such an attack would be equally likely to launch it against non-standard port vs. port 22? If not, then your risk goes down by not being there, it’s that simple.
You need to compare equal-cost approaches to security. For the amount of effort it takes to use a non-standard port, you could use a real security measure like single-use passwords instead - which would increase your security more?
And you need to measure what your outcomes look like. If there's a new zero-day, do you really think those 5 attacks on the non-standard port won't use it? Do you think a targeted attack wouldn't use it? The result of getting caught in an ssh-scan of the whole internet is relatively benign - your server gets used to send some spam, you wipe it and rebuild. That's not the existential threat that security measures are about preventing.
Security BY obscurity is always bad. This article doesn't argue for that, but rather for obscurity as an additional tool.
i.e., I wouldn't use telnet to control all of my servers and then think I am being secure because the IP addresses are not put in a server-list.txt on each server.
However, using a secure protocol and not providing a network map on each server does provide a level of obscurity.
They, along with the Type 1 development and certification process, are used for the stuff they trust the most. Suite B algorithms can be used with a Type 1 implementation for very critical stuff as well. The Type 1 process is the key for assurance, more than the algorithms themselves. It ensures the protocols and algorithms are rigorously implemented, and includes considerations of RNGs, common coding flaws, covert channel analysis, and TEMPEST shielding.
Only the smartcard sector comes anywhere near the assurance activities that go into Suite A or Type 1 products in terms of crypto.
There is a certain danger with obscurity in that it can obscure your own vulnerabilities from you. I believe this is most well-known with crypto, where essentially any proprietary crypto algorithm should be assumed to be broken (because crypto is just that hard), but it applies to all forms of security.
To put it another way: public (and popular) security measures have the benefit of having been validated by many eyes. When you choose proprietary security instead (which obscurity requires by definition, otherwise the technique would be known), you're betting that your own security team can do better. Is that a bet you can sometimes win? Perhaps, but I personally wouldn't want to bet too hard.
Agreed, considering I know someone who web-scraped an entire dataset because they were selling 2,500 sources for $15,000 a year and the API requests used sequential IDs 0-2500... so just curl 2,500 requests and call it a day, because you didn't hash your IDs.
By the way, hashing IDs would be a perfect example of security by obscurity if the hash was not salted. You just have to notice that you were given sha256("1234") instead of "1234".
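A quick sketch of why that is only obscurity: with sequential IDs the whole "keyspace" (0-2500 in the anecdote above) can be enumerated in milliseconds.

    import hashlib

    leaked = hashlib.sha256(b"1234").hexdigest()   # what the API hands out

    # Hash every possible ID once and invert the mapping.
    lookup = {hashlib.sha256(str(i).encode()).hexdigest(): i for i in range(2501)}
    print(lookup[leaked])                          # 1234 -- the hash added nothing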
The question isn't whether Obscurity provides security at all, but rather if the tradeoff you make with ease of use and maintenance is worth it.
From my experience obscurity is usually trivial for a determined attacker to overcome, and if it's not a determined attacker your normal security layer will probably be insurmountable to an attacker of opportunity.
For the small amount of time (let's say an hour) that it adds to a determined attacker's effort, you force your users to use an extra key just to find the SSH port you're using (the example he used in the article).
Can port knocking be used in the "real world"? In other words, would ~/.ssh/config or some other setting be able to automate the sequence?
I'm picturing some current workflows I use if port knocking was enabled.
In particular, unless there's a way to automate the knocking sequence, SSH'ing in via Ansible would be an issue and a SaaS we use to help with deployments would no longer work.
(Although I'd imagine making firewall exception rules (e.g. "allow this IP address in") for these services would be a way around that.)
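FWIW the knocking itself is easy to script; here's a minimal sketch (the host, ports, and a knockd-style daemon that watches for connection attempts are all assumptions) that could run before Ansible connects, or be wrapped into a ProxyCommand entry in ~/.ssh/config that knocks and then opens the real connection:

    import socket, subprocess, sys, time

    def knock(host, ports, delay=0.3):
        # A bare connection attempt (i.e. a SYN) per port is enough for knockd-style
        # daemons; the connects themselves are expected to fail or time out.
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.5)
            try:
                s.connect((host, port))
            except OSError:
                pass
            finally:
                s.close()
            time.sleep(delay)

    if __name__ == "__main__":
        host = sys.argv[1]
        knock(host, [7000, 8000, 9000])   # placeholder knock sequence
        subprocess.call(["ssh", host])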
For people saying security through obscurity should always be avoided, if you have a web service that is only required internally on the private network, would you make it public?
There's a difference between public, as in publicly accessible, and public as in externally visible. In your situation, you should not make it externally accessible, so it doesn't make a difference if it's externally visible or not (though there is no reason to make it externally visible).
I think this is something very unsafe to preach to people.
I'm not sure if this is a joke, or if this person and other comment authors are serious. The idea of Security-by-Obscurity is flawed inherently.
Let me first start by defining security: "the state of being free from danger or threat." Now this is definitely not the best definition, it's just the one that came up when I googled the word and so this will work for now.
The only way for security by obscurity to work, is for you to be able to design a system that is impossible to figure out or comprehend.
Let's assume that one was able to design a system that is incomprehensible to anyone. Let's initially ignore the fact that if the system is not understandable to the user, it couldn't have been invented in the first place.
I'll pose these questions:
- If the system is so obscure to foreign users, how will it be maintained?
- If someone who knows the secrets of how this system works is fired, what happens if they sell off their knowledge?
- What would happen if there turns out to be a bug in this massive amorphous blob of crap that no one understands? How do you start debugging it without invalidating its "security"?
I'll never use security-by-obscurity as a model. This is mainly due to one of my core beliefs: there are much smarter people out in the world than you. If you think "this is un-guessable" or "this is unbreakable" when slapping a bitshift on a stream of data and calling it "encryption", you need to understand that there are people smart enough in this world to smell that from a mile away.
I've worked with some of these people, and before then I may have said "yeah, security by obscurity is fine"; but having worked alongside people who are FAR more intelligent than I am, I know that anything I can think of to circumvent their actions can be trivially figured out by someone out there who is smarter than I am.
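To make the bitshift point concrete, a toy example (the "cipher", key, and plaintext are all made up) showing how small the keyspace of such a scheme is:

    # A rotate-each-byte "cipher" has at most 7 non-trivial keys, so anyone can
    # brute-force it instantly and pick the guess that decodes to readable text.
    def rot_byte(b, k):
        return ((b << k) | (b >> (8 - k))) & 0xFF

    secret = bytes(rot_byte(b, 3) for b in b"attack at dawn")   # the "encryption"

    for k in range(1, 8):
        guess = bytes(rot_byte(b, 8 - k) for b in secret)       # undo a rotate-by-k
        if all(32 <= c < 127 for c in guess):                   # printable ASCII?
            print(k, guess)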
Great sentiments. I don't see security being discussed nearly enough in terms of risk and ROI. I usually see it discussed only in absolute terms, i.e. unless a solution fits the "CIA" model to a T, then it's unacceptable.
I think that we should layer the CIA triad on top of the Time-Cost-Quality triad when implementing application security.
Nice work. There are a lot of claims out there about this sort of thing, and not a lot of hard data.
If you had used a port other than 24 you would have seen 0 attempts. Port 24 is still scanned fairly frequently, something like 22X where X !=2 is almost never scanned. If you block port 22 and use iptables to redirect a random high port to port 22, you'll never see any connection attempts - unless someone is really targeting you.
I did a bunch of reporting/math against our data a few months ago the last times this came up on reddit. I found a few interesting things and tried to counter some bogus claims.
First bogus claim: Shodan scans everything, so you can't hide.
I checked the entire month of February for a large address space; Shodan scanned these ports:
2nd bogus claim: "A single host on a decent connection will be able to scan all ports on a /16 in less then an hour. I see scans like this all the time."
It is the same amount of work to scan every ipv4 address on port 22 as it is to scan every port on a /16.
Every port on a /16 is 2^32 (65536 ports on 65536 hosts) or 4294967296 ports.
Saying it can be done on a "decent" connection in less than an hour, at 4294967296 ports in 3600 seconds works out to 1,193,046 packets/second.
Line-rate gigE maxes out at 1,488,095 pps. You would need to saturate a full gigabit for an hour to fully scan a /16 - and a full gigabit at whatever site you are scanning, with 0% packet loss.
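A quick sanity check of those figures (assuming minimum-size 64-byte frames plus 20 bytes of preamble and inter-frame gap per packet):

    ports_on_a_slash16 = 65536 * 65536          # every port on every host in a /16
    pps_needed = ports_on_a_slash16 / 3600      # ~1,193,046 packets/second for one hour
    gige_line_rate = 1e9 / ((64 + 20) * 8)      # ~1,488,095 pps on gigabit Ethernet
    print(int(pps_needed), int(gige_line_rate))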
3rd bogus claim: (re port knocking) Faster internet speeds means more scanning power which means that it gets easier to find your hidden service
If I am using a non cryptographic port knocking daemon with a 4 port knocking sequence, that has 65536^4 combinations. One should probably figure out what the length of the shorter De Bruijn sequence is for guessing 4 port sequences, which I believe just works out to around "only" 65536^4 packets instead of 4 times that. Also assuming the correct sequence would be found halfway through, that would mean 9223372036854775808 packets required to be sent.
At line rate gigE, that would take 6198107000463 seconds, or 196,540 years. Only 19,654 years at line rate 10gig though!
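Reproducing that arithmetic:

    combos = 65536 ** 4                      # 4-port knock sequences = 2^64
    expected = combos // 2                   # correct sequence found halfway, on average
    gige_pps = 1_488_095
    seconds = expected / gige_pps
    print(int(seconds), int(seconds / (3600 * 24 * 365)))   # ~6.2e12 s, ~196,500 years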
>you block port 22 and use iptables to redirect a random high port to port 22, you'll never see any connection attempts
This is a really bad idea, stick to lowports. Despite SSH doing host authentication, you really don't want non-root users being able to hijack the sshd port.
If they are using iptables to do the redirect instead of changing sshd's port, wouldn't an attacker need a way to change/disable iptables, which also requires root? There shouldn't be a way for an application to put itself in front of iptables.
Ah! You're right, I didn't think that through. iptables is indeed a safe way to do this; however, changing your port in the configs to a high port isn't.
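For reference, the redirect being described is a one-liner (a sketch; the high port number is arbitrary):

    iptables -t nat -A PREROUTING -p tcp --dport 26432 -j REDIRECT --to-ports 22

sshd keeps listening on 22, a privileged port only root can bind, so a non-root user can't hijack it, while the outside world only sees the high port.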
As long as the goal of obscurity isn't to hide a weak or poor underlying security system, there might be some benefit to obscurity as an ADDITIONAL layer, as claimed.
I mean, I guess a password can be considered "security by obscurity", in the sense that it relies on keeping your password secret, the lack of knowledge about your password.
So, sure, obscurity is a valid security layer.
The problem is when people start thinking that port knocking is something different than a needlessly (emphasis on needlessly) complex method of implementing a very weak system-wide password.
Passive scans that hit your port 22 can hardly be called a security issue, and changing your port number definitely does not add any sort of security. This is a confusing concept, since changing your port number might decrease the probability your server gets taken advantage of (temporarily). It's a trivial fix for attackers everywhere to scan multiple ports instead of one at marginal cost.
I know some people who try to extend this thinking to software libraries. Ask yourself if the world would be better served if OpenSSL was closed-source and therefore could not be analyzed for bugs so easily by attackers. It's a slippery slope... but I see regular arguments why a particular piece of software is "technically" more secure by some level of obscurity (restricting access).
The other thing to consider is that obscurity makes threat detection easier. If your sshd_config is set up for some random port and you get 18K attempts, then it's pretty clear that someone is interested in your server. If you're running it on port 22, then a slew of login attempts could be difficult to disambiguate from regular portscanning weather.
It is only useful if you don't publish open source software. A new, underfunded open source project is not the best place to find top-notch security. All the code is published, so there is no obscurity either. It takes a long time for security to be worked out, and in the meantime systems can be compromised by dedicated hackers.
Why is moving the SSH port security by obscurity? It's just keeping the logs clean by avoiding annoying portscans, nothing else.
There is NO security gained by moving the port. If you think random portscans are a security risk to your SSH server, you should seriously reconsider your SSH configuration.
You might not get compromised in the first wave of IP scanners that try the 0day on port 22, port 2222, and maybe a few other common alternatives. This first wave will pass in about 5 minutes.
Subsequent waves will just try every port. This isn't costly, they have an army of compromised machines from the first wave.
Can you give me an example of a real-world worm that has scanned every port on every system on the internet? I haven't seen one, but if it exists, I'd be curious to see it. I'm speaking based on the actual exploits I've seen in the wild over the past 15-20 years, not what is hypothetically possible.
Not every port, but maybe you remember this: http://arstechnica.com/security/2013/03/guerilla-researcher-... The article also details an earlier cataloging where a researcher probed 18 ports 3-4 times a day over the ipv4 address space. Tools like ZMap or MASSCAN make it easy for anyone to scan as many ports as they can, but I haven't heard of any worm that systematically tried all 65535 ports of all addresses. Though I would bet a lot of money that an OpenSSH 0day that bypassed all authentication would result in several such worms from multiple actors who already control hundreds of thousands of devices.
I don't think this is realistic at all. The first wave will hit 22. A second wave maybe 2222. Are you suggesting botnets will eventually scan 4^32 * 65536 using a scanner which performs multiple-packet connections?
Nope. SSHd running on port 26432 will likely never get hit; at a minimum it will buy you weeks to patch.
Where's your 4^32 coming from? There are only 2^32 IPv4 addresses. You may be right that the first wave would just target port 22, the second 2222, and so on; an actual attack would probably have some interesting implementation details beyond that for pruning, host retries, or something else.
Why do you think at minimum you would have weeks? Run the numbers for botnets of various sizes with a measly 10 Mbps network connection each, it doesn't look very good. Under normal circumstances yeah port 26432 is no more likely to be hit than any other high port, but an ssh 0day bypassing authentication is an incredibly valuable exception where now trying everything can be worth it for a little while.
Yeap I need to coffee before I try to math, not sure how I got to 4^32.
That's an interesting point - I wonder if anyone has the numbers on how long it would take to poke each TCP port on each IPv4 address? Or has actually done it?
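Rough numbers, reusing the gigE line-rate figure from upthread; the botnet size is a made-up example:

    total_probes = 2**32 * 65536                  # every TCP port on every IPv4 = 2^48
    gige_pps = 1_488_095
    years_one_host = total_probes / gige_pps / (3600 * 24 * 365)   # ~6 years on one gigE host
    botnet_pps = 100_000 * 14_880                 # 100k bots at 10 Mbps line rate each
    hours_botnet = total_probes / botnet_pps / 3600                # roughly two days
    print(round(years_one_host, 1), round(hours_botnet))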
One could argue that with an SSH 0-day you'll infect 99% of hosts by hitting port 22 alone, and the 65000x effort required to find the others is of marginal return. A counter-point to that is that you may find some more interesting systems hanging out on other ports - fewer home routers and more boxes used by people who changed the default port as the kind of "hardening" procedure we're talking about in this thread.
The industry-established response to the security risk of possible OpenSSH 0-days blown into the wild is moving the sshd port, LOL.
AKA "I don't care" - that's not a useful argument. Should that ever happen, a lot of us would probably get rich on overtime pay, poor on free time, and glad that we have good offline backups. One server pwned 5 minutes later, or the whole data center pwned...
Do you secure your servers against Russian invasions?
Obfuscation has been one of my strongest measures for security for a long time. Cold War espionage writing taught me it's absolutely critical to defeating nation-state opponents, given they'll always outsmart your specific, known techniques. What obfuscation does, if used effectively, is require the attacker to already have succeeded in some attack to even launch an attack. Defeating that paradox forces them to attack you in many ways, increasing work and exposure risk. The more obfuscation you have built in, the more that goes up. Important moves to keep obfuscations effective: ensure the obfuscation is invisible from the users' or network perspective, make sure the obfuscation itself doesn't negate key properties of security controls, make darned sure there are real security controls rather than only obfuscation, let only a few individual people know the obfuscations, and control them from air-gapped (or guarded) machines.
Here are some obfuscations I've used in practice with success, including against strong attackers, per monitoring results, third party tests, and occasional feedback from sysadmins that apply them or independently invented them:
1. Use a non-x86, non-ARM processor combined with a strong Linux or BSD configuration that also advertises itself as an x86 box. Leave no visible evidence you're buying non-x86 boxes. This can work for servers. Some did it with PPC Macs after they got discontinued. This one trick has stopped so many code-execution attempts for so long it's crazy. I really thought a clever shortcut would appear by now outside browser JavaScript, memory leaks, or something. An expansion on it with FPGAs is randomized instruction sets with logs & a fail-safe for significant, repeated failures.
2. Non-standard ports, names, whatever for about everything. Works best if you're not relying on commercial boxes that might assume specific ports and such. So, be careful there. This one, though, just keeps out riff raff. Combine it with strong HIDS and NIDS in case smarter attackers slip up. Don't rely on it for them, though.
3. Covert, port-knocking schemes. An example of a design I think I modified and deployed was SILENTKNOCK. It gives no evidence a port-knocking scheme is in use unless they have clear picture of network activity. Even still, they can't be sure how your traffic was authorized by looking at the packets. Modifications to that scheme that don't negate security properties and/or use of safety-enhanced languages/compilers can improve its effectiveness. My deployment strategy for this and guards was a box in front of the server that did it transparently. Lets you protect Windows services prone to 0-days. Think it stopped an SSH attack or something on Linux once. Can't recall. Very flexible. Can be improved if combined with IP-level tunneling protocol machine-to-machine in intranet. Which can also be obfuscated.
4. Use of unpopular, but well-coded, software for key servers or apps. I especially did this for mail, DNS, web servers, and so on. Black-hat economics means they usually focus on what brings them the most hacks for the least time investment. This obfuscation counters their economic incentive by making them invest in attacking a niche market with almost no uptake. Works on desktops, too, where I recommended alternative office suites, PDF readers, browsers, and so on that had at least the same quality but not likely the same 0-days as what was getting hit.
5. Security via Diversity. This builds on 4 where you combine economics and technology to force black hats to turn a general, one-size-fits-all hack into a targeted attack specifically for you. You might choose among safe libraries, languages, protocols, whatever without advertising their use in the critical app or service. Additionally, there's work in CompSci on compilers that automatically transform your code into equivalent, but slightly different, code with different probabilities of exploits due to different internal structure. That's not mature, yet, imho. You could say all the randomization schemes in things like OpenBSD and grsecurity fit into this too. Those are more mature & field-tested. If Googling, the key words for CompSci research here are "moving target," "security," "diversity," and "obfuscation" in various combinations.
6. My old, polymorphic crypto uses obfuscation. The strongest version combined three AES candidates in counter mode in layers. The candidates, their order, the counters, and of course the keys/nonces were randomized, with the exception that the same one couldn't be used twice. That came from the only criticism I got with evidence: a DES-style meet-in-the-middle. FPGAs got good at accelerating specific algorithms. So, I modified it to allow weaker ciphers like IDEA or Blowfish in the middle layer, but no fewer than one AES candidate in an evaluated configuration and implementation, preferably on the outer layer. Preferably two AES candidates + 1 non-AES for computational complexity. All kinds of crypto people griped about this but never posted a single attack against such a scheme. Whereas I provably stop one-size-fits-all attacks on crypto by layering several ciphers randomly with at least one strong one. Later, I saw TripleSec do a tiny subset of it with some praise. Also convinced Markus Ottella of Tinfoil Chat to create a non-OTP variant using a polycipher. He incorporated that plus our covert-channel mitigations to prevent traffic analysis. Fixed-size, fixed-interval transmission is an obfuscation that does that, which I learned from high-security military stuff.
7. Last one, inspired by recent research, is to use any SW or HW improvements from academia that have been robustly coded and evaluated. These usually make your system immune to common attacks [2], mitigate unauthorized information flows [3], create minimal TCB's [4] [5], use crypto to protect key ops [6], or obfuscate the crap out of everything [7]. I mainly recommend 1-6, though. ;) Then, don't advertise which ones you use. Also, I encourage FOSS developers to build on any that have been open-sourced to get them into better shape and quality than academics leave them. Academics tend to jump from project to project. They deserve the effort of making something production-quality if they designed a practical approach and kindly FOSS'd the demo for us.
The recommendation to use rarely employed software is a nice example of a trap of obscurity. Enough eyes do indeed make bugs shallow.
How do you decide that a piece of software is "well-written"? Even very strict review processes have missed critical issues... and no such review teams exist for rare software.
Instead, the proper advice is to reduce attack surface, not rely on some allegedly obscure piece of software.
Using results from academia is often supremely impractical. The actual software and designs are often unavailable or perhaps impossible to obtain, and very definitely unverified.
Your homegrown CTR crypto might expose you to a related-key attack. How do you know, since nobody targets it, until it is broken?
Guessing the target architecture given an exploitable bug is trivial. You need only the simplest of data leaks. There aren't many options available either. You can use x86, MIPS, or ARM, the latter two in big-endian or little-endian. Other, much more unlikely targets can be POWER or IA-64.
Custom micros can be PIC or MCP51, maybe Atmel.
That gives only a bunch of options to try out.
You already have to tangle with much stronger security measures such as ASLR or NX. Or various operating systems.
It is an obfuscation. It tries to obscure where something will be rather than directly prevent or detect the attack as strong controls do. Useful, as other obfuscations are, to add speedbumps in for the attacker while preserving some level of compatibility with existing code or performance that strong security might sacrifice. That an obfuscation was recommended in a counterpoint against obfuscations was most interesting. :)
A common refrain that things like OpenSSL counter nicely: massive use, open code, and basically no code quality. Instead, all of them seem to have bugs and exploits available regularly throughout the year. You'd think using an obscure program would make things worse, but software's baseline is so low it's often not the case. Instead, the high-quality stuff in both closed and open source seems to be made by skilled developers that care about quality; that's the only common denominator. Note, though, my recommendations are to use obfuscation with good security controls in place plus well-coded apps. On to that.
"How do you decide that a piece of software is "well-written"? Even very strict review processes missed critical issues... and none of those are team for rare software."
For binaries, you can look at the software itself and how it runs to get a decent handle on how well it's written. A classic example is BitTorrent vs uTorrent. uTorrent packed an amazing amount of functionality into a tiny executable that loaded and ran fast without any issues for me. Clearly a great programmer wrote it. Adobe Reader vs Foxit has similar properties, suggesting people at Foxit at least care about the quality of their code and make a good attempt. You can also apply tech that isolates or hardens the executables themselves so you have to care less about it. Nonetheless, had I chosen Reader, the many-eyeballs effect would've meant a new attack on my hands about every month (or more), whereas Foxit users weren't being affected as much. The tide might have shifted if it got popular enough & secretly had terrible enough code that CVEs are popping up all over the place. I haven't followed Foxit since I've been off Windows. I'd ditch it at that point if it got that bad. It will have done its job for a long time in reducing my headaches vs Adobe, though.
Now, for FOSS or even shared-source proprietary, you have even more options. The first is simply looking at the code to see if it's well-organized, has proper checks, has tests, and so on. For PDF readers, I remember Marc Espie telling me MuPDF had really good code. Great coders like him know good code when they see it. More objective metrics might be found by running static analysis tools or something on a bunch of candidates to see how many warnings or issues show up; the lower the better. Also, remember my methods don't work in isolation: I picked an obscure PDF reader, ensured it had noticeably better quality than the most-targeted one, and, with source available, may have applied a transformation that added more checks and stuff.
"the proper advice is to reduce attack surface, not rely on some allegedly obscure piece of software."
The proper advice is to do both. Each technique I listed reduced issues either arguably or in the field. Testing it was easy: honeypots with known vulnerable stuff. Attacks didn't work. Obfuscation by itself had great benefit. So, combining strong security such as separation kernels, safer languages, static analysis, and so on with such obfuscation just makes strong security even stronger.
"Your homegrown CTR crypto might expose you for a related key attack. How do you know, since nobody targets it, until it is broken?"
Do you have references on this? Let's focus on the strong one to prove or disprove the principle: using one to three AES candidates, each with separate keys, in the way it's supposed to be used. My only modifications to the AES candidates in isolation are using one after the other, randomizing the initial value of the counter, and changing the order of cipher application per session. So, it's AES-256/CTR + Twofish-256/CTR + Serpent-256/CTR in that order or some other combination of them. Keys and counters come from a CRNG or TRNG. No repetition is allowed, such as Twofish followed by Twofish, in case there are internal interactions. Where's the vulnerability in this construction that can be exploited by someone possessing only ciphertext?
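To pin down the shape of that construction, here's a very rough sketch of layered, independently keyed CTR/stream encryption with a per-session order; it is not the exact implementation described above, and ChaCha20 stands in for Twofish/Serpent since PyCryptodome doesn't ship those:

    import secrets, random
    from Crypto.Cipher import AES, ChaCha20   # pip install pycryptodome

    def layered_encrypt(plaintext: bytes):
        # Each layer gets its own fresh key and nonce; the order is shuffled per session.
        layers = [
            ("aes-ctr", secrets.token_bytes(32), secrets.token_bytes(8)),
            ("chacha20", secrets.token_bytes(32), secrets.token_bytes(12)),
        ]
        random.SystemRandom().shuffle(layers)
        data = plaintext
        for name, key, nonce in layers:
            if name == "aes-ctr":
                data = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(data)
            else:
                data = ChaCha20.new(key=key, nonce=nonce).encrypt(data)
        return layers, data   # keys/nonces/order go through whatever key exchange you use

    order, ct = layered_encrypt(b"attack at dawn")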
"Guessing target architecture given an exploitable but is trivial. You need a simplest of data leaks."
It is, but they rarely do. Field results show they basically never do. It has to be a highly determined, smart attacker that also knows odd ISAs. That's why I have other things waiting for them. Randomized instruction sets generated per application, with enforcement done by modified CPUs, is the state of the art in this IIRC. I recommend stronger methods like the Hardbound or CHERI architectures that arguably immunize you against specific issues. Yet CPU obfuscation often blocks riff raff, and sometimes smart attackers constrained by time or boredom.
"There aren't many options available either."
That's true. Yet, how many Super-H coders do you know in malware market? Or even Alpha? Economics and laziness are why it still helps despite limited options.
"You already have to tangle with much stronger security measures such as ASLR or NX. Or various operating systems."
ASLR and various operating systems doing such things are in my list. We totally agree on them as nice speed-bumps for attackers. Also recommended some things that do full-safety or microkernel with small TCB + resource isolation. Even stronger. Personally, I'd probably add obfuscations like ASLR to them, too.
Agree that changing SSH ports on public facing servers is a great use case for obscurity but I hate when "security minded" people use that inside private data centers which just makes scp/rsync etc. very annoying without adding the same security value.
I'm not even sure if I agree or not. I get the point, but I kinda want random attack bots to try logging into my sshd with common passwords. It's like a free security scan!
Of course. But in case something is seriously wrong, you probably want the compromise to happen on an isolated system and not main production one or one containing critical data.
Separation of concerns and compartmentalization are both very good things from security point of view.
Moving the SSH port if you are already using port knocking with SPA is kinda useless; nevertheless, I agree with the premise of the article. I really like the M1 analogy.
Yeah go look on security stackexchange and you'll see it quite a bit, people just cargo cult the idea of obscurity == bad and don't consider the points made in OP's article
I don't think it's that bad, but if you believe that security through obscurity is not secure, and you are using it with something that is secure, then it is in practice adding nothing, and not worth it.