Hacker News
Critique My Plan: API Key for Authentication (dev.to)
77 points by rbanffy on Dec 26, 2017 | 73 comments



This developer has pretty much arrived at the right answers by themselves.

Key Generation

generate-safe-id is fine. For a secret API key, UUIDs are less fine. Also fine: directly reading 16-32 bytes from crypto.randomBytes and converting to hex.
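The "read random bytes and hex them" suggestion is a one-liner. A minimal sketch (in Python rather than the Node.js mentioned above; `secrets` is the stdlib CSPRNG interface):

```python
import secrets

def generate_api_key(n_bytes: int = 32) -> str:
    """Read n_bytes from the OS CSPRNG and hex-encode them."""
    return secrets.token_hex(n_bytes)  # 32 bytes -> 64 hex characters

key = generate_api_key()
```

`secrets.token_urlsafe()` works just as well if you'd rather have shorter, base64url-encoded keys.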

Key vs Secret

An AWS_Access_Key is basically just a random username. If you understand how userids work in your system, you don't have to worry about this.

Key Usage

Don't use JWT; JWT is a mess. The real distinction here isn't what a secret key can get you, but whether you attempt to track clients without serverside state. What JWT ostensibly gets you is an arbitrary per-client state store without requiring database lookups on the serverside. In reality, you almost never get that, and inherit all the downsides of JWT --- including extreme, scary complexity --- for very little practical benefit. If you're already using a database to handle client requests (like most applications), keep it simple.

Key Storage (Clientside)

Cookies are safer than localStorage; modern browsers are bristling with defenses for cookies, but not for localStorage access.

Key Storage (Serverside)

We hash passwords because passwords are valuable across sites; it's a big deal to compromise someone's password, even on a random low-value application. That's not true of API keys. If your database is compromised, the API keys don't matter anymore. Don't bother encrypting them.

Authorization

Authorization is a big topic. API keys are an authentication concern. AuthN != AuthZ. Keep it simple with AuthN.

Don't:

* Bother with TLS client certificates. Client certs are useful internally, when you have an ensemble of services that need to talk to each other securely. They're much less useful when you have tens of thousands of semi-anonymous clients. They're a huge pain in the ass to deal with on both the client and the server side, and have extremely bad UX.

* Use JWT or Macaroons. If you can get away with opaque random strings and a serverside database, then get away with that, for as long as you possibly can. We work on applications running at "popular Internet app" scales and they manage this just fine. Trying to push client state to the client costs security; it's something you get away with, not something you do to shore up your defenses.

* Bother with OAuth. The only "interesting" thing OAuth provides is a UX to allow you to delegate authentication to third-party services --- think, every app that wants to see your Twitter timeline. If you don't have that problem, OAuth doesn't buy you anything except complexity.


Advocate-of-the-deviling re: don't bother encrypting them. Your argument is sound (the reasons why password encryption matters mostly don't apply to API keys), but there is a subtle advantage here: the amount of time it takes you to detect compromise isn't zero. Got a backup on a public S3 bucket? That's bad, but now at least you know you don't have to audit every user action, too. Since they're already high-entropy, they can't be enumerated, so you don't have to use an expensive KDF like scrypt or bcrypt; you can get away with just a hash or a regular KDF. (Doesn't hurt to use scrypt though.)

Just to keep myself honest: I'm aware that HMACing with the API key (a suggestion I defended in a different comment) and storing the API key with a KDF are mutually exclusive, and that might seem like I'm giving contradictory advice. My specific recommendation is still to just use API keys stored plaintext serverside, just like 'tptacek is. I'm just saying that these alternative suggestions aren't silly.

TL;DR: plaintext API keys are fine and you should use them but you're not a bad person for wanting to hash them :)


I would think that hashing the API keys for storage, like a password, is worth it. It's extremely cheap in both development and CPU time because you can just use SHA-256 (high entropy is sufficient to deter brute force), and avoids unauthorized accesses in case of a database leak.


If you've lost the database, you've lost accountability for all user actions anyways. Everyone should be forced to reset their credentials.

Meanwhile: storing authenticators of API keys rather than the keys themselves precludes you from doing "request signing" with the keys in the future, since validating a MAC will require the preimage of the hash.

Generally: if you're concerned about survivability of systems after compromise, a better strategy for storing API secrets would be to compartmentalize them in a simple authentication server with a tiny attack surface (you could implement it with minimal state by doing a challenge-response protocol internally, so that the only interface exposed by the authentication server would be "does this message have a valid MAC").
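The entire interface such an authentication server would expose ("does this message have a valid MAC") is tiny. A minimal sketch, with the function name being mine:

```python
import hashlib
import hmac

def has_valid_mac(secret: bytes, message: bytes, tag: bytes) -> bool:
    """The whole surface a compartmentalized auth server needs to expose."""
    expected = hmac.new(secret, message, hashlib.sha256).digest()
    # compare_digest avoids leaking how many leading bytes matched
    return hmac.compare_digest(expected, tag)
```

Everything else in the system only ever sees a yes/no answer, never the secret itself.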


> If you've lost the database, you've lost accountability for all user actions anyways.

Not sure I understand why. Is it because the (hashed) passwords are there too, or did you have an attack against the API keys in mind? Just to be clear, I'm thinking of an unsecured backup scenario, not full blown database compromise.

Regarding signing, that's a good point I hadn't considered, and maybe reason enough not to hash the API keys.


In my mental model, you lose the database to appsec flaws, not to opsec flaws with backups. Both things definitely can happen! I'd suggest though that losing your entire (auth) database to a compromised backup is a "reset all the API keys" moment anyways.

Remember: these services that provide API keys are invariably backed with some kind of login service that uses passwords. Even if you're hashing with scrypt, if you lose the password hashes, everyone's resetting creds.


Isn’t the issue the time between when you’ve lost your DB and when you realize it? During that window, keys could be used.


> > Key vs Secret

> An AWS_Access_Key is basically just a random username. If you understand how userids work in your system, you don't have to worry about this.

The big difference is actually that the aws secret key is never sent to the server, but rather used as the secret in an HMAC.

That means that if I make a request to S3 with a given secret key, S3 never has access to that secret key (rather it takes my HMAC'd signature, passes it on to the IAM service internally, and gets back a 'yes' or 'no' for if it's correct).

That means that S3's logs can't contain sensitive information (the HMAC includes a time, so it can't even be replayed after a short while), any network dumping I do doesn't contain the secret key, etc.

This is a useful property in general.
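A heavily simplified sketch of that shape of scheme (nothing like real AWS SigV4 --- just the core idea: the secret stays on the client, and only a key ID, timestamp, and signature travel over the wire):

```python
import hashlib
import hmac
import time

def sign_request(secret, method, path, timestamp):
    """Client side: HMAC over the request essentials; the secret never leaves."""
    msg = f"{method}\n{path}\n{timestamp}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_request(secret, method, path, timestamp, signature,
                   now=None, max_skew=300):
    """Server side: recompute the HMAC; stale timestamps defeat replay."""
    now = int(time.time()) if now is None else now
    if abs(now - timestamp) > max_skew:
        return False
    expected = sign_request(secret, method, path, timestamp)
    return hmac.compare_digest(expected, signature)
```

Real schemes also canonicalize headers and hash the body; that canonicalization step is exactly where implementations tend to go wrong.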


This is true of lots of API secret schemes that don't have an opaque random key ID.

If your secrets are opaque random tokens that only have meaning as an assertion to your servers, it's worth asking yourself what value you'd derive by trying to hide them from server components. And, obviously, something on the serverside will need enough information (almost always: the secret itself) to verify the message.


Advocate-of-the-deviling re: "something still needs the secret" (assuming you're symmetrically MAC'ing, not asymmetrically signing): yep, but that can be a tiny, separate component and that might be better than everything seeing the plaintext secret.

I wouldn't recommend such a scheme by default, because now e.g. your API clients are way hairier to write (e.g. serialization, possibly canonicalization) and it's pretty easy to mess this up (AWS did in their first attempt). But if you have resources to do it well, that is: audit the heck out of the protocol and the tiny component that verifies the MACs, it does prevent some of the problems mentioned, such as unintentional disclosure of the key on the server side and replayability of requests. I don't think those are super valuable properties, but it's also not intrinsically a hare-brained idea.

TL;DR: API keys are fine and you should use them but you're not a bad person for wanting to HMAC things :)


Also important to remember: even when someone thinks JWT will let them avoid database lookups, they eventually add them anyway for one reason or another (e.g. for instantaneous token revocation).

Macaroons look nice and simple on the surface but have many edge cases that are not at all supported by the existing libraries or have bad/insecure defaults.

For random keys it's good to remember that plain lookup in the database is prone to time side channel attacks, see https://paragonie.com/blog/2017/02/split-tokens-token-based-...
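A minimal sketch of the split-token idea from that post (names and field layout are mine): the token is `selector.verifier`; the database lookup keys off the non-secret selector, and the secret half is compared only via its hash, in constant time.

```python
import hashlib
import hmac
import secrets

def issue_token(db, user):
    selector = secrets.token_hex(8)    # used for the DB lookup; not secret
    verifier = secrets.token_hex(16)   # the actual secret half
    db[selector] = (hashlib.sha256(verifier.encode()).digest(), user)
    return f"{selector}.{verifier}"

def check_token(db, token):
    try:
        selector, verifier = token.split(".", 1)
    except ValueError:
        return None
    row = db.get(selector)             # lookup timing reveals only the selector
    if row is None:
        return None
    digest, user = row
    candidate = hashlib.sha256(verifier.encode()).digest()
    if hmac.compare_digest(digest, candidate):  # constant-time on the secret half
        return user
    return None
```

This keeps the indexed database query (which is rarely constant-time) from ever touching the secret part of the token.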


Agree but be sure to configure the cookie correctly. Set it to http-only, "secure" (only transmit over https) and set the domain/path correctly so only your server/application can see it.
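Those flags sketched with Python's stdlib `http.cookies` (most web frameworks have their own helpers that set these for you):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-random-token"
cookie["session"]["httponly"] = True     # not readable from JavaScript
cookie["session"]["secure"] = True       # only sent over HTTPS
cookie["session"]["path"] = "/"
cookie["session"]["samesite"] = "Strict" # not sent on cross-site requests

header = cookie.output()  # the Set-Cookie header line to emit
```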


Use the "__Host-" name prefix. [0]

This prevents cookies from being set at the root domain by a subdomain. Which in turn prevents non-TLS MITM cookie injection [1] as long as the application uses HSTS (even without includeSubdomains from the root).

[0] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Se...

[1] https://www.usenix.org/conference/usenixsecurity15/technical...


I like seeing the "you're struggling toward reinventing JWT; consider just using JWT" discussion play out in the comments. (Spoiler: the dev ends up deciding to just use JWT.)


JWT is a great idea, but in practice the actual specification is broken in dumb ways. You may as well lop off the entire header portion of the token, since there's no sense in listening to what it says.


It is broken but at the same time, if you don't send the header, you're not using a real JWT according to RFC 7519.

If you're going to use JWTs - even with all their flaws - please use them properly.


Sure, but if I'm not actually setting out to reinvent auth, I'm still better off going with a suboptimal but basically functional standard like JWT than I am trying to roll my own.


If you would've used this logic just a little while ago and slapped an off-the-shelf JWT library into your app, you'd have a trivially exploitable design flaw that could be used to easily forge tokens.

In that case, reinventing the wheel properly probably would've been more secure, if not perfect.

I personally use JWT, but unfortunately like so many other things you can't just drop it in and forget; you have to have some level of care if you really want it to work. All cryptography has its caveats.

OTOH, going with simple, server-issued tokens that don't carry data actually is pretty proven already, so it's not really that bad of an idea, if you are only ever going to have a few servers in the same region anyways.


All else equal, I'm more likely to find out about a dumb gotcha in a commonly used standard before it bites me than I am in my own sui generis design that nobody else uses. You're right that crypto of any sort requires care in how we use it, but preferring a more widely used method lets us leverage the care of others as well as our own. And if we don't bother, we're probably about as hosed either way.

And, yeah, if it's practical to just generate and issue a secret from the server side and authenticate it at a single point when the client presents it back to us, we may as well do that. I'm not sure how practicable an option that is in OP's case, though; the description of the problem suggests there's something rather strange going on, and I don't grasp it well enough to speak to what scheme is really most useful in the context. But it does sound like having the client bear its own claims is useful, and JWT would be the obvious go-to for the use case.


What terrible standard could you ever avoid given this logic? I could just as easily convince you to use raw XML DSIG with this argument; after all, in the dichotomy you propose, your only alternative would be a DIY reimplementation of XML DSIG. Why not find a way to fit IKE into your design as well? What, are you going to redesign your own IKE?


Of course this argument is rather useless when taken as the sole metric by which to decide whether and which standards to use, or implementations to prefer. But why would anyone do such a nonsensical thing as that?


Since it appears to be the only metric provided, I think I can turn that question back around on you.


What does "all else equal" mean to you?


Yeah, JWT, like YAML, was one of those dead-simple, great ideas that ended up overcomplicated and error-prone.


There’s so much FUD around JWT that I feel compelled to answer this every time it comes up.

Just to get it out of the way, you’re correct that the JWT spec doesn’t do a lot to prevent implementors from doing stupid things; particularly bad were downgrade attacks from asymmetric to symmetric keys.

Here’s the thing, these issues don’t exist anymore. Amazon’s Cognito relies heavily on JWT and I tend to trust Amazon’s security folks.


"Amazon uses JWT" and "Amazon knows what they're doing" are not unreasonable statements, but it doesn't follow that "therefore JWT is now fine for everyone".

You can make that argument for a lot of different footgun specs. I'm not saying JWT implies broken. I'm saying that JWT plus sub-billion-dollar-company-security-budget often leads to disaster. I'm also saying it's an unforced error, in that people do that in order to solve a problem they most likely don't have. It's clear that some people can do JWT correctly, just like e.g. some people can do OAuth2 correctly (to add another example to your list: Google/GSuite-as-an-IdP). That doesn't make it a safe and well-designed spec, and it doesn't make it great general advice.


This is not FUD. It is a design flaw.

It is now widely recognized that the header component of the JWT token cannot be used except to reject a token, making it pretty close to absolutely useless (except for debugging.)

HOWEVER. Before this was recognized, most JWT implementations were broken and were easily susceptible to the most basic of downgrade attacks.

Downgrading from asymmetric to symmetric was a slap in the face for obvious reasons, but that's just the last round of problems. The early problems were even more ridiculous; many libraries would readily downgrade to 'none' and turn off protection altogether.

Not just Amazon Cognito, but Google's OIDC implementation, and indeed, all OIDC implementations, use JWT. JWT implementations today are hopefully no longer susceptible to basic lapses in security.

But there's good reason for its reputation: the way it's designed led to these trivial downgrade attacks. Having the token specify the algorithm was a bad decision, and it led to bad implementations. Importantly, someone implementing JWT today could easily make some of the same mistakes if they aren't careful.

I hope a future JWT release entirely removes the information from the token and just forces the client/server to agree statically.


I agree strongly that JWT is badly flawed, and that criticisms of JWT don't constitute "FUD". JWT's uptake has been alarming given how little cryptographic engineering input the format seems to have received, versus how complex it is under the hood.

That said:

Asymmetric crypto is a crypto code smell. You use it when you absolutely have to because there's no other way to express what you're trying to accomplish. It is much harder to get public key crypto right than it is to safely use a "Seal/Unseal" AEAD interface. One of the things that alarms me about JWT is that it's a format that presumes developers might want to effortlessly switch between symmetric and asymmetric crypto, as if they were just two different ways of solving the same underlying problem.


Amazon are the same folks who store their infra status page on their own infra, which is fine when it's up, and pretty useless when it's down. Even the smartest people are still just people - it's worth trusting several groups of smart people over just one.


I find it hard to recommend JWT unless you're already a security expert. This is unfortunate, since JWT was meant to bring client-side tokens to the masses, but the reality is that:

1. JWT has amazing cross-language support, but most of the libraries out there are unaudited, and even the ones that are audited can change and, let's say, make the 'None' signer the default one: https://github.com/lcobucci/jwt/commit/6507ac39be5a5e06457c8...

2. JWT can be used somewhat responsibly, but the JWT spec doesn't enforce any security measures and doesn't even contain proper recommendations like:

a. Never use the None algorithm.

b. Never use the RSA algorithm.

c. Basically, only use HMAC (or Ed25519 if supported).

d. Always include an issue timestamp ('iat' claim) and possibly a unique nonce ('tid') to make tokens unique.

e. Always do key rotation using 'kid'.

f. Implement a blacklist-based token revocation mechanism (or use short-lived access tokens backed by DB-based refresh tokens).

g. Include the token expiry claim! Seriously, JWT doesn't even require token expiry!

h. Don't use JWE.

i. Associate your keys with signature algorithms, and _make sure that the token is signed with the right algorithm_: https://auth0.com/blog/critical-vulnerabilities-in-json-web-...

3. I'm yet to encounter a library which enforces good defaults on your JWT usage. Some libraries validate expiry by default (if the claim exists), but from my experience most of them don't set it by default. Almost no library contains a convenience method for setting the issue time or a nonce, let alone setting them by default.

4. No JWT library I know of provides help for implementing basic features like key rotation/revocation and token revocation. Without these features, JWT's security model becomes vastly inferior to that of database-backed tokens.

The reality is that developers implementing JWT choose it exactly because they don't want to delve into all these complications of using tokens, so they just pick up a library and hack something around it, expecting the JWT library to take care of all the security issues for them.


What's treacherous about JWT is that the programming interface is extremely simple and pretty closely approaches the ideal of what developers would want from secure clientside storage. It's a great interface, but terrible crypto.

https://news.ycombinator.com/item?id=14292223


You're right! You need a strong engineering culture around its use to avoid its pitfalls, and without that, the results are often regrettable. And I agree, too, that this is a failing of the standard, not of those who use it.

But it's also true that JWT has legitimate use cases; I'm working with it right now in my professional capacity, and we are using it precisely because "twenty bytes of hexified /dev/urandom" isn't a terribly efficient option at our scale - we could do it, but we won't, because it doesn't add value to the business.

As you say, though, the crypto sucks. You know a great deal, both in the absolute and relative to my own knowledge, about such things. What, in your estimation, is the means most likely to produce an outcome where we have something of roughly similar utility and ease of use, but with solid crypto and few or no bear traps for the unwary engineer? (Beyond just removing the algorithm negotiation, or at least the "none" option.) Or does such a thing exist and I'm just one of today's lucky 10,000?


You can start with running your JSON data (or whatever serialization format that you choose) through NaCl/libsodium's crypto_secretbox() and you already get more than what JWT offers by default.

It still doesn't include everything you need of course: at the very least you'll still need to implement expiry validation and some token revocation mechanism. But you actually get more value out of the box than you get with a JWT library: your token content is encrypted, you're using a solid crypto library, and you don't have to mess around with any knobs to secure your implementation.

It doesn't offer support for hot-swapping crypto algorithms, but for what it's worth, cryptographic agility is the root cause of many issues we had with TLS in the past[1].

If you end up needing both symmetric and asymmetric cryptography in your tokens, you better treat them as two different types of tokens, because they usually are. I think 'tptacek already said this, but asymmetric crypto is rarely interchangeable with symmetric crypto - you usually use them in very different cases.

[1] https://www.imperialviolet.org/2016/05/16/agility.html


Rails applications have been doing stateless auth in encrypted cookies for over a decade, and the constructions they use to do it are safer than JWT's, and orders of magnitude less complex. I would want to push back on the idea that a good use case for JWT is "we need to do stateless auth".

Stateless auth is a bad place to start. But, when you need it, there are better ways to do it than JWT.


I wish the author went deeper into the topic. For example JWTs have a pretty straightforward structure and if you use just a subset of the spec they look like a clean and simple solution. But did the author check Macaroons? Did they consider SSL Client Certificates?


What is the game?


I'd think it's agar.io or slither.io?


What is JWT?


The first Google search result for "jwt" is https://jwt.io/ which explains it pretty well for you. https://en.wikipedia.org/wiki/JSON_Web_Token is pretty good, too.


[dead]


You can automatically apply policies such as expiration and manual rotation to API keys/tokens. You can’t really do that seamlessly for username/password combinations.


might want to check out https://github.com/yahoo/athenz


An auth/security lib from the company that leaked a billion users' PII?


First off, I'm not an expert at security, but I do have a background in it and in development of microservices. So fair warning.

API keys provide reasonably good security, but there is a reason why companies have moved to OAuth. Using https, your payloads are protected by SSL, but the URI and the request and response headers aren't. That means no matter how you intend to pass along your API key, it is in the open and thus can be intercepted.

So, bad, right? Well, maybe not. If whatever you are securing with an API key is not that critical to anyone and you just want to prevent outsiders from calling your API or calling it repeatedly and abusing it, then API keys are fine. I would make it hard to guess, but I wouldn't go crazy with that, either, because everything is "easy to guess" for anyone who is really working hard at gaining access. If you are just working on a game, I would do the basics to secure your services and then move on to the more critical matters.


You appear to be shadowbanned for no immediately obvious reason. I vouched this comment to point out that it contains a factual error, to wit, the claim that the headers and URI of a request made via TLS are sent in the clear. They are not; the socket is encrypted before any protocol messages are exchanged. Perhaps you're thinking of SMTP STARTTLS?

Companies have moved to OAuth, and to JWT as the payload, because it provides a lightweight, shared-little authentication/authorization framework that's well suited to the needs of a distributed microservice architecture and to third-party service integration - not because it overcomes a nonexistent problem with HTTPS not encrypting headers.


A properly configured SMTP STARTTLS would not really send anything substantial in cleartext either, it would issue the STARTTLS command as the first command and the rest would be wrapped in encrypted TLS. You can think of the unencrypted STARTTLS preamble as a set of static ascii bytes that are the same for every connection.


It's the closest common parallel I can find for what the prior commenter is describing, but yeah, it isn't really all that close. I've seen poor implementations leak, but not for many years now.


I don't think the op had something else in mind. IME it's a strangely widespread misunderstanding that https encrypts just parts of the http request. I think it's because of the advice to not put sensitive data in urls. The reason for that is due to logging at the endpoint, but it's misunderstood that it's not encrypted. It's funny because it's so easy to test the hypothesis, and it's hard to even propose a reason it would be that way.


Headers and URI are encrypted as well in an HTTPS request. Only the domain is exposed (when SNI is supported).


Not trying to be pedantic, but the entire hostname is exposed, not just the domain name. I’m assuming that’s what you meant.


The hostname is a domain name. Some domain names may not be hostnames, but all hostnames are domain names. So the entire domain name, which is also a hostname, is exposed.


These definitions are highly context-dependent, though.

Even more pedantic, but sometimes surprising: SNI reveals the full text of the final lookup query that the requester used to obtain an IP address to open a TCP connection to the server.

Neither the text nor the address are necessarily "correct", and the text might be formally a nodename, a hostname, a domain name, a fully-qualified domain name, or a text representation of an IP address (which, again, is not necessarily correct).

In practice, certificate authorities constrain the possibilities of the lookup text (for a successful connection using the CA-signed cert), but that is not a technical limitation. And of course, a self-signed certificate has no such constraints.

With cooperation between the server owner and users, an SNI-sensitive publisher could make their site available at https://fbi.gov/. But it's probably easier just to use a meaningless domain instead. :)


> Neither the text nor the address are necessarily "correct", and the text might be formally a nodename, a hostname, a domain name, a fully-qualified domain name, or a text representation of an IP address (which, again, is not necessarily correct).

Please don't make claims about standards that you haven't read. SNI supports only DNS FQDNs.


You are incorrect.

That's not quite what the spec says, and furthermore it is not true in practice.

Giving you the benefit of the doubt, I reread the spec, and I tested in nginx.

Works fine.


What spec?

RFC6066 (https://tools.ietf.org/html/rfc6066#section-3) says "Currently, the only server names supported are DNS hostnames; however, this does not imply any dependency of TLS on DNS, and other name types may be added in the future (by an RFC that updates this document)." [snip] ""HostName" contains the fully qualified DNS hostname of the server, as understood by the client." [snip] "Literal IPv4 and IPv6 addresses are not permitted in "HostName"."


RFC6066, correct.

The "as understood by the client" is very important, apparently.

Furthermore, the name does not need to be in DNS. There is no single source of true DNS anyway (though of course the ~whole world uses the same root servers).

Try it. I just set up a remote server for "snitest", added the IP address to /etc/hosts only, generated a cert for that name only, and got the correct cert via SNI in a local browser.

The same process (minus /etc/hosts modification) also worked for a bare (textual representation of an) IP address.


> Furthermore, the name does not need to be in DNS. There is no single source of true DNS anyway (though of course the ~whole world uses the same root servers).

That is not "furthermore", that is what "as understood by the client" means.

> Try it. I just set up a remote server for "snitest", added the IP address to /etc/hosts only, generated a cert for that name only, and got the correct cert via SNI in a local browser.

That is a fully qualified DNS hostname, by definition. You told both your server that it was a DNS FQDN it should serve and your client that it was a DNS FQDN that it should request, and so they obviously were matched exactly as required by the standard. Whether DNS is involved in the resolution process is irrelevant, especially so given that the standard nowhere specifies the DNS root to use.

> The same process (minus /etc/hosts modification) also worked for a bare (textual representation of an) IP address.

What was the name type in the certificate? That sounds like a bug in the client, and possibly in the server as well.


Is your argument that a name can be a DNS FQDN even if it is not in DNS?

A non-DNS hostname can be treated as fully-qualified since there's no organizational structure to qualify it in.

> What was the name type in the certificate?

The CN? It's the IP address, as specified at cert generation.

> That sounds like a bug in the client, and possibly in the server as well.

Perhaps. I'm not a standards adjudicator.

Nevertheless, it works in the web we have. Tested in nginx, OpenSSL, curl, Firefox.


> Is your argument that a name can be a DNS FQDN even if it is not in DNS?

In the sense of the SNI spec, yes. It would be pointless to require the name to be "in DNS", for the simple reason that there is a potentially infinite number of DNS roots, and protocol standards naturally don't specify instances, only mechanism, so any TLS client could just implement a DNS server that uses the hosts file as its data source for a private DNS root and uses that for resolution, which would be indistinguishable from an implementation that simply skips the pointless encoding and decoding of DNS messages.

The point is to (a) distinguish host names from IP addresses, (b) require fully qualified names, (c) specify the syntax of the host names. Whether the names are actually in any DNS is irrelevant.

> A non-DNS hostname can be treated as fully-qualified since there's no organizational structure to qualify it in.

Sure, and that is fine. The point of the FQDN requirement is that if you send "foobar" as the SNI hostname, you cannot expect the server to match it to a certificate for "foobar.example.com". If both server and client agree on a namespace(/DNS root) where "foobar" is a fully qualified name, then you can expect everything to work just fine. It's not a requirement on how the name is to be verified, only a guarantee on how it may be transformed by protocol participants.

> The CN? Is the IP address, as specified at cert generation.

The CN is essentially deprecated, but matching an IP address in SNI against the CN should be fine. I am just not aware of any way to encode an IP address in SNI. If you had an alternate name IP address in the cert, matching an SNI host name against that would be a bug. If the browser actually accepted a certificate for an IP address when matching against a hostname, that would be a vulnerability.

> Nevertheless, it works in the web we have. Tested in nginx, OpenSSL, curl, Firefox.

Nah, if stuff that contradicts the standard happens, that means that there are things that the standard guarantees to work, but that are broken with the non-compliant implementation. Whether that buggy behaviour is useful to you in some way doesn't change that it's not "working" in any meaningful sense.

Oh, and if by OpenSSL you mean the command line tools: Yes, with those it does work correctly. If you specify the SNI name 1.2.3.4, that is the DNS hostname 1.2.3.4, which should indeed be correctly matched by the server against any certs for alternate DNS name 1.2.3.4, or possibly against certs for CN 1.2.3.4, but not certs for alternate name IP address 1.2.3.4.


> Whether that buggy behaviour is useful to you in some way doesn't change that it's not "working" in any meaningful sense.

I sense that you mean "working" as according to spec. I mean "working" as in practice. In my line of work, the latter is the only meaningful sense.

FWIW, this is not new behaviour, it has worked for at least a decade in various applications.

And yes, OpenSSL the command line tools. I'm not sure what you're talking about though, I've never seen a cert tied to an IP address. What representation would be encoded? My original assertion does not require any such thing.

I'm really not sure what we are disagreeing about any more. If you go back to my original comment, I don't think you'll find it controversial.


> I sense that you mean "working" as according to spec. I mean "working" as in practice. In my line of work, the latter is the only meaningful sense.

The idea that something that isn't working according to spec is supposedly working in practice is almost always an illusion.

> FWIW, this is not new behaviour, it has worked for at least a decade in various applications.

Well, yeah, buggy software is nothing new, that's true.

> And yes, OpenSSL the command line tools. I'm not sure what you're talking about though, I've never seen a cert tied to an IP address. What representation would be encoded?

You could put it in dotted-quad syntax into the CN field of the DN of the certificate's subject (I'm not sure that was ever explicitly allowed, but that was commonly implemented and that stuff is a mess anyhow, which is part of why alternative names were invented--and, as I mentioned, the CN is deprecated and at least Chrome nowadays ignores it unconditionally). The modern, correct way to encode an IP address as the subject of an X.509 certificate is in a subject alternative name field of type iPAddress, as specified here, 5th paragraph:

https://tools.ietf.org/html/rfc5280#page-36
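For reference, OpenSSL expresses this distinction with `IP:` versus `DNS:` entries in `subjectAltName`; a minimal config-file sketch (the section name `v3_req` is just the usual convention, and the addresses are placeholders):

```ini
# Extension section of an openssl.cnf (sketch).
# "IP:" yields a subjectAltName of type iPAddress (RFC 5280);
# "DNS:" yields one of type dNSName, the only type SNI can refer to.
[ v3_req ]
subjectAltName = IP:192.0.2.1, DNS:www.example.com
```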

Really, it's probably a stupid idea to have certificates for IP addresses, which is probably the reason why SNI does not support IP addresses ... but PKIX does nevertheless specify certificates for IP addresses.

> I'm really not sure what we are disagreeing about any more. If you go back to my original comment, I don't think you'll find it controversial.

Your claim that you could specify an IP address in SNI is still wrong. As for the other options, it depends on the exact definitions of those terms; they are probably partially wrong. You can not have anything but a fully qualified DNS hostname in SNI, by definition. Whatever you put into that field is, by spec, to be interpreted by the server as a DNS FQDN, as is the matching SAN field in candidate certificates, and the client must not cast an IP address specified by the user to be reinterpreted as a host name (or else, the server might respond with the certificate for DNS:*.4 when the user has requested https://1.2.3.4/, instead of the default certificate for IP:1.2.3.4 that would be returned when no SNI is present, thus leading to a connection failure even though the spec guarantees that the connection will work).


> The idea that something that isn't working according to spec is supposedly working in practice is almost always an illusion.

They're obviously different things. The what that is working is not necessarily the what that the spec writers intended or imagined.

I work in network and application security. A lot of what "works" flies in the face of your definition, and I don't find your definition useful outside of meeting rooms.

> Your claim that you could specify an IP address in SNI is still wrong.

I never claimed that. I claimed that you could specify the textual representation of an IP address, of which dotted quad is the most common and the only form I've tested. You have agreed with this, so I won't repeat myself.

At the very outset, my point was that the definitions of these words are very context-sensitive. A nodename, a hostname, a domain name, a fully-qualified domain name, a text representation of an IP address. They have overlapping common meanings, but all are functional, working names for use in SNI, as understood by standard tools of internet technology.

Your argument seems to be that any name that works in SNI is by definition a DNS FQDN, because the SNI spec says that only DNS FQDNs work.

But that is clearly not correct in any other context. "snitest" in /etc/hosts (or NIS etc!) is not a DNS FQDN. "8.8.8.8" is not a DNS FQDN. Both work as names in SNI.


> They're obviously different things. The what that is working is not necessarily the what that the spec writers intended or imagined.

But the what is all that matters. Without a what to ask the question "does it work?" about, there is no question to ask.

For any specification, you can create a derived but incompatible specification, either implicitly or explicitly, and an implementation of that derived specification certainly works. But it works as an implementation of that derived specification, not as an implementation of the original specification, and a component implementing the derived specification won't work in the general case when interoperating with a component implementing the original specification.

Also, for any given implementation it is possible to write a spec that it conforms to, so it's trivially true that any piece of software works in the sense that it conforms to some (potential) specification.

The point of specifications is not to declare what is the only possible way to do things, but to create a convention that allows interoperability. Stuff that doesn't work according to spec fails that goal, so it only works in the sense that anything that does something does something, and possibly something useful.

> I work in network and application security. A lot of what "works" flies in the face of your definition, and I don't find your definition useful outside of meeting rooms.

Nope, especially with regards to security, anything that "works in practice", but doesn't work according to spec, has a pretty decent chance of being a vulnerability.

> I never claimed that. I claimed that you could specify the textual representation of an IP address, of which dotted quad is the most common and the only form I've tested. You have agreed with this, so I won't repeat myself.

No, you can't. You can put a string into the SNI hostname field that is an element of the "dotted quad IPv4 address representation" language. But that is by definition not the textual representation of an IP address, but the textual representation of a hostname consisting of just digits and dots. The fact that you could potentially type-pun the representation into a context where it would be interpreted as an IP address does not change that it is not in fact representing an IP address. What a (formal) language means is defined by the specification of the language, not by the specification of any language that happens to contain a given string as an element.
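The type-tagging point can be illustrated with Python's stdlib `ipaddress` module (the classifier function is hypothetical): syntax alone does not decide whether a string denotes an IP address; the declared type of the field it sits in does.

```python
import ipaddress

def classify_sni_value(value):
    """Hypothetical illustration: whatever appears in the SNI hostname
    field is a DNS hostname by definition of the field, even when the
    same string is also an element of the IP-literal language."""
    try:
        ipaddress.ip_address(value)   # does the string *parse* as an IP literal?
        also_parses_as_ip = True
    except ValueError:
        also_parses_as_ip = False
    # The semantic type comes from the field, not from the syntax:
    return {"type": "dns_hostname", "value": value,
            "also_parses_as_ip": also_parses_as_ip}

print(classify_sni_value("1.2.3.4"))
print(classify_sni_value("foobar.example.com"))
```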

When a database table has a field "first name", you can not store last names in it. It's irrelevant that you can use the field to store strings in it that could be parsed as last names--the fact that they are stored in the first name field makes them first names by definition.

When a database table has a field "phone number", you can not store ZIP codes in it. It's irrelevant that you can use the field to store strings in it that could be parsed as ZIP codes--the fact that they are stored in the phone number field makes them phone numbers by definition.

When a protocol message has a field "DNS hostname", you can not store IP addresses in it. It's irrelevant that you can use the field to store strings in it that could be parsed as IP addresses--the fact that they are stored in the DNS hostname field makes them DNS hostnames by definition.

Being confused about this is the root of multiple classes of vulnerabilities (language injections (SQL injection, XSS, shell injection, header injection), string termination vulnerabilities).

> At the very outset, my point was that the definitions of these words are very context-sensitive. A nodename, a hostname, a domain name, a fully-qualified domain name, a text representation of an IP address. They have overlapping common meanings,

Yes.

> but all are functional, working names for use in SNI, as understood by standard tools of internet technology.

No. Or, if they are, those tools are vulnerable.

> Your argument seems to be that any name that works in SNI is by definition a DNS FQDN, because the SNI spec says that only DNS FQDNs work.

You are having it all backwards. This is not about "what works", but about what it is defined to be. The definition of the protocol says that whatever you find in that field of the protocol message is defined to be a DNS FQDN. It doesn't matter what the sender secretly intended--the definition of SNI says that it is a DNS FQDN. If the sender intended the content of the field to be interpreted differently, they weren't implementing SNI.

> "snitest" in /etc/hosts (or NIS etc!) is not a DNS FQDN.

As I already explained, it is, in that the SNI spec doesn't care about the data source. You could trivially create a DNS root that had a TLD "snitest.", and under that DNS root, "snitest" would be a valid DNS FQDN.

> "8.8.8.8" is not a DNS FQDN.

Yes, it is. Just don't confuse it with the IP address "8.8.8.8". In the global DNS root, the TLD "8." does not exist, so the name will never resolve, but it is still a valid DNS hostname in the SNI hostname field.

> Both work as names in SNI.

Yes, of course they do. But IP addresses don't.


As an aside, is there a name for this purposeful perspective of strict literalism?

I say "purposeful" because -- while you're obviously knowledgeable about the subject matter -- it can't have failed to occur to you that this approach cannot succeed outside of a very structured context.

(Discussion of whether our conversation was within or without that context omitted, though it might have been the only important discussion possible)

Is this a subcategory of the formal language / langsec efforts? Just standard standards-writing practice? Something else?


> As an aside, is there a name for this purposeful perspective of strict literalism?

Correctness?

> I say "purposeful" because -- while you're obviously knowledgeable about the subject matter -- it can't have failed to occur to you that this approach cannot succeed outside of a very structured context.

Erm ... which is why I am applying it to the extraordinarily structured context of formal languages, protocol specifications, and computer software?!

> Is this a subcategory of the formal language / langsec efforts? Just standard standards-writing practice? Something else?

I would say the langsec efforts are an attempt to raise awareness that sloppy thinking about semantics is the root of a major proportion of vulnerabilities, to establish a label for this problem, and to try and establish some sort of best practices for avoiding such problems. Good standards-writing for protocols is, of course, extremely literal, as protocol implementations necessarily will be, so any ambiguity in the standard will result in interoperability problems and possibly vulnerabilities as a result, and in the long run to unnecessary complexity as people try to plaster over the differences in interpretation between implementations to improve interoperability, thus increasing the probability for vulnerabilities even further.


But you're not correct. You're doggedly and dogmatically wrong, in the only context that matters, which is the one in which this conversation was spawned.

Again, if for the purposes of the spec, you want to apply the label "FQDN DNS name" to "8.8.8.8", that's fine and great. You can also call it a "finalized mapping token" (which has the advantage of being literally correct), or a "turtle" (which would be surprising but not misleading).

But applying a label to the data does not change the nature of the data. In the larger context, the data was created as a text representation of an IP address, was never used in a DNS context, and the concept of "fully-qualified" doesn't have a lot of meaning where there is no process by which to further qualify any partial tokens.

It remains a textual representation of an IP address, even if it is used in a different context. Just as "Alice" remains a first name even if it is mislabeled or misused.

Of course, data can fit the validation criteria for multiple types, and it can be misused. "Alice" is a first name, but it is also a valid hostname. It is not a hostname just by virtue of being validly parseable as such. And if I wanted to know the first names of people here at HN, but asked for their hostnames, I would generally not get the answer I wanted.

If some awful code somewhere misused first names as hostnames, the network guy with very limited context might say "I see a query for hostname 'Alice'", but the people with larger context would ask "Why is this firstname being misused as a hostname?".

This HN thread was never a SNI spec internal debate, and no one here benefits from assuming that highly restrictive context.

I have had discussions with some of the langsec folks in the past. I greatly respect their work, and they are wise enough to know that their context is not useful in general discussion.

Your initial statement was condescending and misleading. As the conversation continues, it becomes clear that this was intentional, and you are willing to die on the hill of tiny irrelevant context. Noted.


> But applying a label to the data does not change the nature of the data.

This is not about applying a label, this is about type-tagging. The label is irrelevant, the type tag is not.

> In the larger context, the data was created as a text representation of an IP address, was never used in a DNS context

Except that it was. As per the SNI specification, putting something into the SNI hostname field is "using it in a DNS context", which is why doing so is a bug. It's the exact same bug as putting plain text into HTML. The fact that "<" represents a "less than sign" in plain text is irrelevant to what "<" means when it appears in HTML. The semantics of HTML are not governed by the spec of plain text. Using plain text where HTML is expected is a bug. The fact that a human with a larger context might be able to recognize that the string "3 < a / b = 42" appearing in an HTML document was probably not intended to contain a malformed HTML tag does not change that it in fact does.

> Just as "Alice" remains a first name even if it is mislabeled or misused.

Essentialism, anyone?

> "Alice" is a first name, but it is also a valid hostname. It is not a hostname just by virtue of being validly parseable as such.

Exactly! It is by virtue of the context in which it appears. And the context of the SNI host name field makes whatever appears in it a DNS FQDN.

> And if I wanted to know the first names of people here at HN, but asked for their hostnames, I would generally not get the answer I wanted.

Yep. And equally, when the server implementing SNI asks you for a DNS hostname, but you supply an IP address, you are not answering the question being asked, and you should expect whatever your reply is to be treated as a DNS hostname.

> If some awful code somewhere misused first names as hostnames, the network guy with very limited context might say "I see a query for hostname 'Alice'", but the people with larger context would ask "Why is this firstname being misused as a hostname?".

Sure, they might. That doesn't change the fact that as far as the protocol is concerned, it still is a hostname, which is precisely why it will fail to work.

> This HN thread was never a SNI spec internal debate, and no one here benefits from assuming that highly restrictive context.

You claimed that you could specify IP addresses in SNI messages. You still can't.

> I have had discussions with some of the langsec folks in the past. I greatly respect their work, and they are wise enough to know that their context is not useful in general discussion.

So, this is not a discussion about whether or not you can specify IP addresses in the SNI hostname field?

> Your initial statement was condescending and misleading. As the conversation continues, it becomes clear that this was intentional, and you are willing to die on the hill of tiny irrelevant context. Noted.

You are still wrong and apparently massively confused.

Let's assume we have a server that has a certificate for the host name 1.2.3.4. That is, a certificate with a subject alternative name of type dNSName, value "1.2.3.4". Now, an HTTP client is instructed to request the URI https://1.2.3.4/foobar, POSTing a valuable secret to that URI. This HTTP client puts "1.2.3.4" into the SNI dns hostname field, as you seem to believe is correct behaviour according to the RFC, right? Now, the server will correctly respond to that with said certificate, right?

What happens next? Should the client accept that certificate or not? Why should it? Why not?


> So, this is not a discussion about whether or not you can specify IP addresses in the SNI hostname field?

In fact, no. This is a discussion about whether textual representations of IP addresses can be used as inputs to tools that speak SNI, be used to specify a particular cert on the server side, and be conveniently extracted out of the sniffed network traffic comprising that handshake.

As it happens, the input conversion, the processing for usage in code written for spec implementation, the network stack conversions, and the sniffer capture reconversion back to a text representation for human viewing are all manipulative of the data.

But all of these manipulations are predictable and reversible, and if you want to call one of those data stages a "DNS FQDN", that's great but it isn't inherently correct, outside of the context of the spec which deigns to treat all final mapping tokens as DNS FQDNs, and to label them such -- but does not actually make them fully-qualified, nor the results of DNS queries.

We might have different opinions about the context of this discussion, but I would suggest that if you were to reread the thread from the beginning, there's really not much opportunity for confusion.

In any event, it's clear that this discursion is not advancing anyone's understanding of anything. Good luck in your future endeavours.


Your literalism defeats you.

Of course you can store first names in a last name field. You can even choose to invert the label-semantic relationship between the two fields in your code. If "Bob" is stored in last_name, the data does not semantically become something it is not just because someone mislabeled it. It's a first name, no matter who wants to call it what.

"8.8.8.8" is not a DNS FQDN. Of course it could be, but it is not, in current practice. If you think of it as a DNS FQDN, you cannot assume it to have any relationship to IP address 8.8.8.8, which in all resolver libraries, it does. So it is not a DNS FQDN, and in fact is not even looked up when supplied. It is very precisely only ever used as a textual representation of an IP address, in dotted quad format. The direct representation of an IP address could not even be posted here. So it is as close and most familiar as we can get to an IP address. And I was always careful to qualify my comments that it was a textual representation of an IP address. Because I know the difference!

You also know that it is possible for "snitest" (or, more precisely, "snitest.") to be a DNS FQDN. But it almost never is. In practice, it is going to be a nodename, and it might not even be looked up in DNS. It's a token, a name, representing the final form of the query made by the client, that resulted in a returned address. And that is what I said all along.

The SNI spec does not get to redefine "IP address (textual representation of)" and "nodename" outside of its own context.

You could have been correct if you said, at the beginning, that while it might be a nodename or an IP address (text representation) in all ordinary contexts, the SNI spec defines anything that results in a valid mapping to an IP address as a "DNS FQDN". And isn't that interesting and not particularly useful? It could also call them "turtles", with equal usefulness.

But instead you accused me of not having read the spec, and you proclaimed that SNI only works with turtles.

You're wrong on both counts. And we disagree about the definition of "working". Security exploits work. There is no deep epistemological problem created by the fact that the word is given meaning by its context.

Nodenames work in SNI. And textual representations of IP addresses work in SNI. Which is exactly what I wrote in my initial comment.

Since we can't agree on fundamental definitions, I don't think we're going to understand each other. I stand by my initial assertions.


> Of course you can store first names in a last name field.

Nope.

> You can even choose to invert the label-semantic relationship between the two fields in your code.

In which case you aren't storing first names in the last name field, you simply are using an uncommon label for the first name field. Whether something is the first name field is not defined by its label, but by the declaration of its purpose, either explicitly in a specification or implicitly in your code. Having a descriptive, non-misleading label is simply helpful for maintainability, but not relevant to the discussion at hand.

> If "Bob" is stored in last_name, the data does not semantically become something it is not just because someone mislabeled it.

I am assuming you mean a field that is actually declared to contain last names (as opposed to a field that is declared to contain first names, but labeled "last_name"), as that was the premise in my previous arguments:

Are you saying that your software will not produce letters saying "Dear Mr. Bob" if it finds "Bob" in that last name field?

> "8.8.8.8" is not a DNS FQDN.

Well, yes it is. Saying '"8.8.8.8" is not a DNS FQDN' is like saying '"Martin" is not a last name'. Just because you will likely categorize "Martin" as referring to a first name without further context, does not mean it is not a last name.

> Of course it could be, but it is not, in current practice.

Except it isn't current practice.

> If you think of it as a DNS FQDN, you cannot assume it to have any relationship to IP address 8.8.8.8,

Well, yeah, that's my point!?

> which in all resolver libraries, it does.

No, not all resolver libraries, and none of the resolver libraries relevant to this discussion. Take a pure DNS library, feed in 8.8.8.8 as the DNS domain name, and get back NXDOMAIN (or possibly something else in an alternate root). What Unix host name resolver APIs do is irrelevant as that is not the convention referenced by the RFC, the RFC references DNS hostnames.
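The distinction drawn here between host-name resolution APIs and pure DNS can be seen in the Python stdlib: `getaddrinfo` with the `AI_NUMERICHOST` flag forbids any lookup and accepts only IP literals, so "8.8.8.8" "resolves" in a code path where no DNS query is even possible. A small sketch (`example.invalid` is just a name guaranteed not to be numeric):

```python
import socket

# AI_NUMERICHOST forbids name resolution entirely: only IP literals
# are accepted. "8.8.8.8" succeeds, so no DNS query was involved.
info = socket.getaddrinfo("8.8.8.8", 443, flags=socket.AI_NUMERICHOST)
print(info[0][4][0])  # the literal passes straight through

# A name that would require an actual lookup is rejected under the
# same flag.
try:
    socket.getaddrinfo("example.invalid", 443, flags=socket.AI_NUMERICHOST)
    print("resolved (unexpected)")
except socket.gaierror:
    print("rejected: would need a real lookup")
```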

> So it is not a DNS FQDN, and in fact is not even looked up when supplied.

Just because Unix hostname resolution APIs happen to be unable to resolve the DNS FQDN 8.8.8.8, does not make it not a DNS FQDN. Are you telling me next that _service-name.example.com is not a DNS FQDN because underscores are not allowed in internet hostnames? That format is used for SRV and ACME lookups precisely because it is not an internet hostname, but still a DNS FQDN, so it cannot collide with hostnames, but still can be looked up in the DNS.

> It is very precisely only ever used as a textual representation of an IP address, in dotted quad format.

So ... four-number OIDs don't exist?

> The direct representation of an IP address could not even be posted here.

Which doesn't change that not every representation that takes the form "1.2.3.4" is a representation of an IP address, just as not every representation that takes the form "Martin" is a representation of a first name. The fact that you can not post Martin (the human being) here, does not change the fact that the string "Martin" can be a reference to Ms. Jane Martin.

> So it is as close and most familiar as we can get to an IP address.

And yet, if you find that string in the SNI hostname field, it does not represent an IP address.

> And I was always careful to qualify my comments that it was a textual representation of an IP address. Because I know the difference!

But syntax alone does not define semantics, context matters. The mere fact that some string can be read as a textual representation of an IP address does not actually make it a textual representation of an IP address.

> You also know that it is possible for "snitest" (or, more precisely, "snitest.") to be a DNS FQDN. But it almost never is.

Which is irrelevant to the semantics of the SNI hostname field.

> In practice, it is going to be a nodename, and it might not even be looked up in DNS.

As above: Feed it into a DNS resolver library and find that it can indeed look up the TLD "snitest". That "Martin" is used in practice as a first name is irrelevant when the form field into which someone wrote the string "Martin" is labeled "last name".

> The SNI spec does not get to redefine "IP address (textual representation of)" and "nodename" outside of its own context.

It doesn't. It explicitly references DNS hostnames. It never says anything about "nodenames". And it even explicitly states that literal IP addresses are forbidden (and mind you that "literal" here does not mean "four bytes", but dotted quad or hex-colon notation, which are commonly referred to as "IP literals" in RFCs).

Also, the RFC even explicitly says 'It is RECOMMENDED that clients include an extension of type "server_name" in the client hello whenever they locate a server by a supported name type.'. The spec in no way redefines anything, it simply says "if you happen to have located the server using a DNS hostname, you may put it here, if you used anything else, you cannot use this extension".

> You could have been correct if you said, at the beginning, that while it might be a nodename or an IP address (text representation) in all ordinary contexts, the SNI spec defines anything that results in a valid mapping to an IP address as a "DNS FQDN".

It doesn't, and thus that would have been incorrect, which is why I didn't say that.

> But instead you accused me of not having read the spec, and you proclaimed that SNI only works with turtles.

Well, at least you haven't understood the spec?

> Security exploits work.

Yep. But does a server that crashes when the security exploit works work?

> There is no deep epistemological problem created by the fact that the word is given meaning by its context.

Except that that is not the problem. The problem is that you are equivocating semantically different instances of "work".

"I have a car with a broken transmission that I live in. It works fine for keeping me warm. So, the car works. I want to sell the car. As the car works, people should pay me the price for a working car."

You see the flaw in that reasoning, right? That's the fallacy in your reasoning. Saying "the car works for keeping me warm" is perfectly fine, but it does not imply "the car works", because that would imply "... for doing whatever is commonly understood to be the primary purpose of cars".

No one is denying that you can specify and implement a protocol where you can specify IP addresses for the selection of the TLS certificate to use. If you do so, that works. But that protocol is not SNI. So, if you ask whether that is a working SNI implementation: No, it's not.

> Nodenames work in SNI.

If you mean by that unqualified hostnames: No, they don't. You only can agree between clients and servers that are controlled by you on a method of encoding nodenames as DNS hostnames, and then use SNI with those.

> And textual representations of IP addresses work in SNI.

No, they don't. You can not express IP addresses in SNI.


What works fine that is not syntactically a fully qualified DNS hostname, and what could you even test about that?


> SNI reveals the full text of the final lookup query that the requester used to obtain an IP address to open a TCP connection to the server.

Are you talking about something other than DNS lookups done at the client side?


Yes, that is how I tested it, but there's nothing magic about DNS either. Remember that this depends on the site owner and the users cooperating to put misleading data into the SNI, while still functioning properly. It isn't often useful, but it is sometimes convenient.

Basically, whatever the client does that results in a successful map of name to address (including the textual representation of the address, to the address) will cause the name to be sent in SNI, and will be used to select a matching cert on the server side.

If you do use DNS, your lookup might have a suffix appended automatically, depending on local nameserver config.



