It took me an embarrassingly long time (given how keenly involved I was in OpenID stuff ~17 years ago https://simonwillison.net/search/?tag=openid&year=2007) to understand that OpenID Connect is almost unrelated to the original idea of OpenID where your identity is a URL and you can prove that you own that URL.
OpenID Connect is effectively an evolution of OAuth.
You may already know this, I'm writing it as a note for my future self.
OpenID Connect (OIDC) is mostly concerned with authentication. On the other hand, OAuth (or, to be more specific, OAuth v2.0) is concerned with authorization.
>> OpenID Connect is effectively an evolution of OAuth.
In my opinion, OpenID Connect is actually an evolution of OpenID – in its vision/spirit:
- OIDC, like OpenID, primarily focuses on users' identity and authentication;
- OIDC, unlike OpenID, didn't (re)invent new authentication workflows that were significantly different in their own ways. Instead, it built its authentication workflows on top of the existing OAuth spec (which was already being (ab)used for authentication in some places and, unfortunately, still is) to achieve its main objective (i.e. authentication).
---
Edit: rephrased to better communicate my thoughts (still not perfect; but, as the saying goes, perfect is the enemy of the good, so I'll stop here).
> OIDC, unlike OpenID, didn't (re)invent new authentication workflows, which were significantly different in their own ways. Instead, it built authentication workflows on top of existing OAuth spec
Didn't OpenID predate OAuth? What should OpenID have built upon?
>> Didn't OpenID predate OAuth? What should OpenID have built upon?
Yes, you're right about the "OpenID predates OAuth" part.
However, from my point-of-view, it seems the main source of confusion here is due to the fact that the word OpenID is used in more than one sense:
- First, OpenID as used in the original OpenID authentication protocol, developed around 2005, which communicates the idea of a decentralized online digital identity, where one way a user can assert their identity is via a URL under their control.
- Second, OpenID as used in the compound noun "OpenID Connect" (which, per Wikipedia, is the "third generation of OpenID technology", published in 2014[1]), which implements user identity and authentication via workflows built on top of the OAuth2 spec.
Now, in my earlier comment, i.e. "OIDC, unlike OpenID, ... built on top of existing OAuth spec ... to achieve its main objective ...", I was using OIDC (with "OpenID") in the second sense, in comparison to the original OpenID authentication protocol, where OpenID is used in the first sense.
I hope it helps.
---
As an aside, looking at all the comments about "OpenId" and "OpenID Connect" as nouns, I'm reminded of the following post: Two Hard Things[2]
My problem with it being called OpenID Connect is that, in my head, an OpenID is a noun which means "a URL that you can use as your identity and prove that you own".
That definition doesn't work for OpenID Connect. Is OpenID a noun any more? I don't think it is.
OpenID Connect can totally work that way if used with WebFinger for endpoint discovery, and occasionally this is implemented (though many websites do not).
Hm, so the point of adding this additional hop (which is also a JSON document under the .well-known/ prefix) is that I can always put the domain of my homepage into WebFinger-aware OIDC login boxes, with no need to remember the domain of my OIDC provider?
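For the record, the WebFinger hop is just a JSON lookup. A minimal sketch of the parsing side (the domain names and JRD content are made up; only the `rel` URI is the real OIDC-discovery one):

```python
# Sketch: extracting the OIDC issuer from a WebFinger response (RFC 7033 JRD).
# The example subject/issuer values are invented.
import json

ISSUER_REL = "http://openid.net/specs/connect/1.0/issuer"

def issuer_from_webfinger(doc: dict) -> str:
    """Pick the OIDC issuer link out of a parsed WebFinger JRD document."""
    for link in doc.get("links", []):
        if link.get("rel") == ISSUER_REL:
            return link["href"]
    raise LookupError("no OIDC issuer advertised for this resource")

# The user types their homepage into the login box; the relying party fetches
# https://example.com/.well-known/webfinger?resource=...&rel=... and receives:
jrd = json.loads("""
{
  "subject": "https://example.com/",
  "links": [
    {"rel": "http://openid.net/specs/connect/1.0/issuer",
     "href": "https://idp.example.net"}
  ]
}
""")
print(issuer_from_webfinger(jrd))  # https://idp.example.net
```

The relying party then does normal OIDC discovery against that issuer, so the user only ever needs to remember their own domain.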
I feel like I remember StackOverflow (and related sites) having OpenID login as an option, but I don't see it anymore. I figure they removed it due to low popularity.
> OpenID Connect (OIDC) is mostly concerned with authentication. On the other hand, OAuth (or, to be more specific, OAuth v2.0) is concerned with authorization.
That's a common refrain and it's quite inaccurate in that it's using a rather unorthodox definition of these terms. It's like the classic "The United States is not a Democracy, it is a Republic", where the speaker reinterprets Democracy as "Direct Democracy" and Republic as "Representative Democracy"[1].
Same goes for "authorization" and "authentication" in OAuth and OIDC. In the normal sense, authentication deals with establishing the user's identity, while authorization determines what resources the user can access.
OpenID Connect does indeed deal with authentication: it's a federated authentication protocol. The old OpenID also tried to introduce the concept of a globally unique identity in addition to authenticating that identity, as the GP mentioned. But OpenID Connect still supports federation: an application (the consumer) can accept users logging in through a completely unrelated service (the identity provider). I believe this was originally specified mostly with third-party authentication in mind (using Google or Apple to log in to another company's service, or conversely using your corporate SSO to log in to a SaaS web app), but microservices are very popular nowadays, and even services that don't support external login often use OIDC as the authentication protocol between their authentication microservice and other services.
OAuth, on the other hand, started as a method for constrained access delegation. It allowed web services to issue a constrained access token that is authorized for doing a certain set of operations (defined by the "scope" parameter and often explicitly approved by the user). Getting constrained access through OAuth requires performing authentication, so you can say OAuth is also an authentication standard in a sense. But since OAuth was designed for pure delegation, it does not provide any user identity information along with its access token, and there was no standard way to use it for federated authentication. OpenID Connect essentially takes OAuth and adds identity information on top. To make things more complicated, there have been a lot of OAuth specifications published as RFCs[2] over the last decade, and a lot of them deal with client authentication and explicitly mention OpenID Connect (since arguably most OAuth implementations are also OIDC implementations now).
In short, OpenID Connect is quite accurately described as an authentication standard. But OAuth 2.0 has little to do with authorization. It allows clients to specify the "scope" parameter, but does not determine how scopes are parsed, when user and client are permitted to request or grant a certain scope, or what kind of access control model (RBAC, ABAC, PBAC, etc.) is used. That's OK, since it leaves implementers with a lot of flexibility, but it clearly means OAuth 2.0 is not an authorization standard. It only concerns itself with requesting authorization in unstructured form[3].
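To make the "unstructured" point concrete, here's a rough sketch of building an authorization request. The endpoint, client id, and scope strings are invented; RFC 6749 assigns them no meaning beyond "space-delimited strings":

```python
# Rough sketch of an OAuth 2.0 authorization request (RFC 6749).
# All concrete values below are hypothetical.
import urllib.parse

def build_authorize_url(auth_endpoint, client_id, redirect_uri, scopes, state):
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),  # unstructured beyond "space-separated"
        "state": state,
    }
    return f"{auth_endpoint}?{urllib.parse.urlencode(params)}"

url = build_authorize_url(
    "https://idp.example.net/authorize",  # hypothetical endpoint
    "my-client",
    "https://app.example.com/cb",
    ["openid", "profile", "read:calendar"],
    "xyz123",
)
```

Whether "read:calendar" maps to RBAC roles, ABAC policies, or nothing at all is entirely the provider's business; the spec only carries the string.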
Better examples of proper authorization standards are declarative authorization specification DSLs like XACML, Alfa, and Rego (the language used in Open Policy Agent). I guess you could also offer RADIUS as an example of a protocol that implements both authentication and authorization (although the authorization is mostly related to network resources).
---
[1] To be fair, the meanings of "Democracy" and "Republic" have changed over time, and back in the 18th century, when the US was founded, it was popular to view the Athenian Democracy as the model case of a pure direct democracy and to use the term "Democracy" in this sense. Over time, the meanings changed and we got a weird aphorism that remains quite baffling to us non-Americans.
[3] RFC 9396 is a very recently published optional extension to OAuth that does structured authorization requests, and defines a standard method for resource servers to query the requested and granted authorization data.
Agree. I'd say OpenID Connect looks closer to SAML in terms of authenticating users and bootstrapping a "session", if you will. OAuth2, in my mind, is one potential approach to maintaining a session post initial authentication, used for ongoing authentication on a per-request basis. It also carries information about which client the session is associated with, to allow per-client authorization decisions via the authorization mechanisms you mentioned above.
Basically both are concerned with different parts of authentication, initial vs. ongoing (though two-legged OAuth2 is also an initial authentication step).
The line between authentication and authorization can get quite fine, especially if the authorization policy is as simple as "this set of services can talk to me" if you use mTLS and a fixed set of trusted services in a trust-store.
> In short, Open ID Connect is quite accurately described as an Authentication standard. But OAuth 2.0 has little to do with Authorization. It allows clients to specify the "scope" parameter, but does not determine how scopes are parsed, when user and client are permitted to request or grant a certain scope and what kind of access control model (RBAC, ABAC, PBAC, etc.) is used. That's ok, since it leaves the implementers with a lot of flexibility, but it clearly means OAuth 2.0 is not an authorization standard. It only concerns itself with requesting authorization in unstructured form[3].
This misses the mark - scopes are abstractions for capabilities granted to the authorized bearer (client) of the issued access token. These capabilities are granted by the resource owner, let's say, a human principal, in the case of the authorization_code grant flow, in the form of a prompt for consent. The defined capabilities/scopes are specifically ambiguous as to how they would/should align with finer-grained runtime authorization checks (RBAC, etc), since it's entirely out of the purview of the standard and would infringe on underlying product decisions that may have been established decades prior. Moreover, scopes are overloaded in the OAuth2.0/OIDC ecosystem: some trigger certain authorization server behaviours (refresh token, OIDC, etc), whereas others are concerned with the protected resource.
It's worth noting that the ambiguity around scopes and fine-grained runtime access permissions is an industry unto itself :)
RFC 9396 is interesting, but naive, for a couple of reasons: 1) it assumes that this information should be placed on the front channel; 2) it does not scale in JWT-based token systems without introducing heavier back-channel state.
I personally do not view OIDC as an authentication standard - at least not a very good one - since all it can prove is that the principal was valid within a few milliseconds of the iat on that id_token. The recipient cannot and should not take receipt of this token as true proof of authentication, especially when we consider that the authorization server delegates authentication to a separate system. The true gap that OIDC fills is the omission of principal identification from the original OAuth2.0 specification. Prior to OIDC, many authorization servers would issue principal information as part of their response to a token introspection endpoint.
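A toy illustration of that point, assuming the standard OIDC claim names (iat, exp, auth_time) and an arbitrary freshness window. A real relying party would of course also verify the signature, iss, aud, and nonce:

```python
# Sketch: an id_token is evidence of a *recent* authentication event, not of
# an ongoing session. The tolerance value is an arbitrary choice for the demo.
import time

def check_id_token_freshness(claims, max_age_seconds, now=None):
    """Accept the token only if authentication happened recently enough."""
    now = time.time() if now is None else now
    if now >= claims["exp"]:
        return False                       # the token itself has expired
    auth_time = claims.get("auth_time", claims["iat"])
    return (now - auth_time) <= max_age_seconds

claims = {"iss": "https://idp.example.net", "sub": "user-42",
          "iat": 1_700_000_000, "exp": 1_700_003_600}
# Moments after issuance: fine. Half an hour later: the recipient should not
# treat the same token as proof the user is still present.
check_id_token_freshness(claims, max_age_seconds=300, now=1_700_000_010)  # True
check_id_token_freshness(claims, max_age_seconds=300, now=1_700_002_000)  # False
```

OIDC's max_age/auth_time parameters exist precisely because of this gap, but enforcement is left to the relying party.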
> Same goes for "authorization" and "authentication" in OAuth and OIDC. In the normal sense, authentication deals with establishing the user's identity, while authorization determines what resources the user can access.
"Authorization", in the context of OAuth 2.0, means whether a third-party application is "authorized" to take actions on-behalf-of a resource-owner on some resource server. And, if the answer is yes, what is the "scope" of this "authorization".
From the OAuth 2.0 RFC's abstract[1]:
The OAuth 2.0 authorization framework enables a third-party
application to obtain limited access to an HTTP service, either on
behalf of a resource owner by orchestrating an approval interaction
between the resource owner and the HTTP service, or by allowing the
third-party application to obtain access on its own behalf. ...
It's very clear that OAuth2 is all about third-party applications and their access to a resource owner's resources. As far as users and their access to their own resources are concerned, they are the "resource owner" and they have all the "power" to do whatever they like (with their own resources, of course).
For example, in the early days of Facebook, FarmVille needed user permission in order to post on users' Facebook walls and/or message users' friends if something interesting happened in FarmVille while they were playing. And this is just one funny example to get my point across; there are many use-cases where it's super useful if users can grant a third-party application permission to do some useful work (whatever it happens to be) on their behalf.
> Better examples for proper authorization standards are declarative authorization specification DSLs like XACML ...
I'm very familiar with XACML and similar standards for access-control policies; in fact, I've built an ABAC-based access-control service using an XACML-like spec for one of our customer-facing business applications (in the recent past).
Yes, XACML and similar specs are good for some user access / "authorization" use-cases (depending on business needs and threat model). Yet I'm not sure anyone would recommend them for third-party application authorization. Of course, it's not impossible and it can be done; however, I doubt anyone would recommend doing it when simpler solutions are available, unless there is a strong business case from a business-risk and security (threat-modelling) point of view.
Also interesting is that OAuth2 is a bit too flexible in how you can put things together, and OIDC provides a lot of good practice about how to do so.
So even systems where OIDC compliance is a non-goal are often partially OIDC compliant; I mean, there really is no reason to reinvent the wheel if parts of the OIDC standard already provide all you need.
2.1 mainly updates 2.0 with various later RFCs and usage recommendations, many of which are not drafts. And some documents used, e.g. https://datatracker.ietf.org/doc/html/draft-ietf-oauth-secur..., are technically drafts but practically "living" documents constantly kept up to date.
So while technically 2.1 is a draft, practically not following it means not following today's best practices. You don't have to meet it fully, but if you care about security you really should treat it as a strict/strong recommendation. At least long-term, not doing so could be seen as negligence for higher-security applications.
OIDC is still far more flexible than I would have liked. It still allows the implicit flow, and it even created a new abomination that didn't exist in OAuth: the Hybrid Flow. If you just want to follow best practices, OAuth 2.1[1] or the OAuth Best Current Practices[2] are far better options.
Yes, indeed. Both OAuth 2.1 & the BCP tighten things up a lot, although neither is technically final yet (the security BCP should be published as an RFC "any day now").
For people looking for an easy-to-follow interoperability/security profile for OAuth2 (assuming they're most interested in the authorization code flow, though it's not exclusive to that), FAPI2 is a good place to look; the most recent official version is here:
On the flip side, it is much more complex to implement than OAuth 2.1, since it mandates a lot of extra standards, some of them very new and with very little in the way of library support: DPoP, PAR, private_key_jwt, Dynamic Client Registration, Authorization Server Metadata, etc.
Except for PAR, these extra requirements are harder to implement than their alternatives and I'm not sold that they increase security. For instance, DPoP with mandatory "jti" is not operationally different than having a stateful refresh token. You've got to store the nonce somewhere, after all. Having a stateful refresh token is simpler, and you remove the unnecessary reliance on asymmetric cryptography, and as a bonus save some headaches down the road if quantum computers which can break EC cryptography become a thing.
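To illustrate the state argument: rejecting replayed DPoP proofs means remembering every jti for the proof's validity window, which is server-side state much like a refresh-token table anyway. A toy in-memory version (a real deployment would use a shared store with TTL eviction):

```python
# Sketch of a jti replay cache for DPoP proofs. The window length is an
# arbitrary illustrative choice.
import time

class JtiReplayCache:
    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.seen = {}                     # jti -> first-seen timestamp

    def accept(self, jti, now=None):
        now = time.time() if now is None else now
        # evict entries older than the validity window
        self.seen = {j: t for j, t in self.seen.items()
                     if now - t < self.window}
        if jti in self.seen:
            return False                   # replayed proof: reject
        self.seen[jti] = now
        return True

cache = JtiReplayCache()
cache.accept("proof-1")   # True: first use
cache.accept("proof-1")   # False: replay detected
```

Whether you store jtis or refresh tokens, the operational burden (a shared, expiring, write-heavy store) looks much the same, which is the point being made above.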
In addition, the new requirements increase the reliance on JWT, which was always the weakest link for OIDC, by ditching client secrets (unless you're using mTLS, which nobody is going to use). Client secrets have their issues, but JWT is extremely hard to secure, and we've got so many CVEs for JWT libraries over the years that I've started treating it like I do SAML: Necessary evil, but I'd minimize contact with it as much as I can.
There are also some quirky requirements, like:
1. Mandating "OpenID Connect Core 1.0 incorporating errata set 1 [OIDC]" - "errata set 2" is the current version, but even if you update that, what happens if a new version of OIDC comes out? Are you forced to use the older version for compliance?
2. The TLS 1.2 ciphers are weird. DHE is pretty bad whichever way you're looking at it, but I get it that some browsers do not support it, but why would you block ECDSA and ChaChaPoly1305? This would likely result in less secure ciphersuites being selected when the machine is capable of more.
In short, the standard seems to be much better than FAPI 1.0, but I wouldn't say it's in a more complete state than OAuth 2.1.
DPoP isn't mandated; mTLS sender-constrained access tokens are selected by a lot of people instead of DPoP. (And yes, I agree, mTLS has challenges in some cases.)
Stateful refresh tokens have other practical issues; we've seen several cases in OpenBanking ecosystems where stateful refresh tokens resulted in loss of access for large numbers of users when things went wrong.
The quirks you mention are sorted in the next revision. The cipher requirements come from the IETF TLS BCP [1] (which is clearer in the new version). If you think the IETF TLS WG got it wrong, please do tell them.
As other people said elsewhere, this isn't about completeness: OAuth 2.1 is a framework, while FAPI is something concrete you can for the most part just follow, and then use the FAPI conformance tests to confirm whether you implemented it correctly. If you design an authorization code flow following all the recommendations in OAuth 2.1, you'll end up implementing FAPI. Most people not in this space who implement OAuth will struggle to avoid the traps once they stop following the recommendations, as "implementing OAuth securely" isn't usually their primary mission.
How common is it to use mTLS in the user-to-service use case (e.g. browsers with mTLS configured)? I mean, for (potentially external) service-to-service authentication it's way easier than for user(browser, app)-to-service.
Preferably the intersection of OIDC, OAuth 2.1, and the Best Current Practices.
As in, use the OAuth 2.1 recommendations, including the OAuth Best Current Practices, to structure your clients inside the OIDC framework. This will mostly "just work" (if you are in control of the auth server), as it only removes from OIDC the possible setups you decide not to use.
Though I'm not sure if requiring (instead of just allowing) PKCE is strictly OIDC compliant, implementations should support it anyway ("should" as in it's recommended for them to do so, not as in they probably do so).
"Though I'm not sure if requiring (instead of just allowing) PKCE is strictly OIDC compliant"
It's technically not compliant, but people definitely do so, and there are definite security advantages to requiring it.
Technically the 'nonce' in OpenID Connect provides the same protections, and hence the OAuth Security BCP says (in a lot more words) that you may omit PKCE when using OIDC. However, in practice, based on a period in the trenches that I've mostly repressed now, the way the mechanisms were designed means clients are far more likely to use PKCE correctly than to use nonce correctly.
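For reference, the PKCE S256 mechanics (RFC 7636) are small enough to sketch in a few lines. This is a simplified illustration, not a full client:

```python
# Sketch of PKCE with the S256 method (RFC 7636): the client derives a
# verifier/challenge pair, sends the challenge in the authorization request,
# and later proves possession by presenting the verifier.
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Return (code_verifier, code_challenge) for the S256 method."""
    verifier = base64.urlsafe_b64encode(
        secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def verify_pkce(verifier, challenge):
    """Server side at token exchange: recompute and compare the challenge."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge

v, c = make_pkce_pair()
verify_pkce(v, c)  # True
```

Part of why clients tend to get this right is that both sides are pure derivation and comparison; there's no per-request value the client has to remember to validate later, unlike nonce.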
Both systems have grown too much and are too complicated. At some point someone will replace it with something easier.
I used countless ID providers and while they offer a very valuable service, the flows are too complicated and many implementations have a lot of security flaws because the users don't understand the implications.
OpenID Connect is an extension to OAuth2 (RFC 6749) that adds an authentication layer (to identify a user) on top of the OAuth2 authorization framework (that grants permissions)
Earlier versions of OAuth (1.0 and 1.0a) and OpenID (1 and 2) are unrelated, incompatible protocols that share similar names but are largely irrelevant in 2024.
There are LOTS of them. Anything that allows you to link your Google/Facebook/etc. account to another system, so that system can perform actions on your Google/Facebook/etc. account on your behalf.
Examples: Slack (e.g., notify you of events on your calendar, create a GMeets meeting), services like cal.com, whatsapp (store backups on your Google Drive).
It's rare in my experience. We don't support OIDC, so technically it's standalone oauth. In reality there's of course a user identity in the mix used to authorize the resulting access tokens.
Even server to server calls, ie daemons, service principals, what have you, still rely on a client identity.
I think the closest to true agentless access I've seen widely used are SAS for Azure Storage and of course deploy keys in GitHub, which we're building off ramps for. Agentless authz just is not a good idea
I mean it is usually paired with an id token, an identifier like an email address is provided, or the access token has a sub claim that is tied back to the user.
So it is not pure authorization, but both authentication and authorization.
Pure authorization would be like a car key, with no identity mixed in.
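A small sketch of what "tied back to the user" looks like in practice: peeking at the sub claim inside a JWT access token. This only base64url-decodes the payload, with NO signature verification; a real service must verify the signature before trusting any claim, and the token below is fabricated for the example:

```python
# Sketch: reading the unverified payload of a JWT to see the "sub" claim.
import base64
import json

def unverified_claims(jwt):
    """Decode the JWT payload segment WITHOUT verifying the signature."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a fake token: {"alg":"RS256"} header, made-up claims, bogus signature.
body = base64.urlsafe_b64encode(
    json.dumps({"sub": "user-42", "scope": "calendar.read"}).encode()
).rstrip(b"=").decode()
token = f"eyJhbGciOiJSUzI1NiJ9.{body}.fake-signature"

unverified_claims(token)["sub"]  # 'user-42'
```

The presence of that sub claim alongside the scope is exactly the "both authentication and authorization" mix described above; a car-key-style token would carry only the scope.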
Yeah, it's a profile on top of OAuth, which leverages aspects (the authorization code grant, tokens) but adds some other functionality (another token with authentication information and some defined claims). I'm not aware of any other profiles with anywhere near the uptake of OIDC.
There are a few folks out there doing pure OAuth, but much of the time it is paired with OIDC. It's pretty darn common to want someone to be authenticated the same time they are authorized, especially in a first party context.
No, OIDC is not an evolution of OAuth. One does authentication, the other authorization. Two very different, but often intertwined, concepts, each of which can also be used without the other.
> David Harris, author of the email client Pegasus Mail, has criticised OAuth 2.0 as "an absolute dog's breakfast"
https://en.m.wikipedia.org/wiki/OAuth
I keep on trying and failing to implement / understand OAuth 2 and honestly feel I need to go right back to square one to grok the “modern” auth/auth stack
It’s funny, I’m the opposite. I love OAuth for what it does, that is, federate permission to do stuff across applications. It makes a lot of cool integration use cases possible across software vendor ecosystems, and has single-handedly made interoperability between wildly different applications possible in the first place.
I’d say it definitely helps to implement an authorisation server from scratch once, and you’ll realise it actually isn’t a complex protocol at all! Most of the confusion stems from the many different flows there were at the beginning, but most of that has been smoothed out by now.
Eran Hammer (the author of OAuth 1.0 and original editor of the OAuth 2.0 spec) resigned during the early draft specification process and wrote a more detailed criticism[1].
I don't think I agree with every point he makes, but I think he had the right gist. OAuth 2.0 became too enterprise-oriented and prioritized enterprise-friendliness over security. Too many implementation options were made available (like the implicit grant and the password grant).
I wasn't really following OAuth back in those days, but I have heard much of the history from those who were there at the time, and some of the early specs in this area failed for being too secure - and hence too hard to implement - and so never got adopted.
Was OAuth2 wrong to land exactly where it did on security back in 2012 or before? It seems really hard to say - it clearly didn't have great security, but it was easy to implement and where would we have ended up if it had better security but much poorer adoption?
Does the OAuth working group recognise those failures, and has it worked hard to fix them over the years since? Yes, very much so.
Has OAuth2 been adopted in use cases that do require high levels of security? Yes, absolutely: OpenBanking and OpenHealth ecosystems around the world are built on top of OAuth2. In particular the FAPI profile of OAuth2 gives a pretty much out-of-the-box recipe for how to easily comply with the OAuth security best current practices document, https://openid.net/specs/fapi-2_0-security-profile.html (Disclaimer: I'm one of the authors of FAPI2, or at least will be when the next revision is published.)
Is it still a struggle to try and get across to people in industries that need higher security (like banking and health) that they need to stop using client secrets, ban bearer access tokens, to mandate PKCE, and so on? Yes. Yes it is. I have stories.
Back in 2012, TLS was not yet enabled everywhere. OAuth 1.0 was based on client signatures (just like JAR, DPoP, etc., but far simpler to implement) and it was a good fit for its time. One of Eran Hammer's top gripes with the direction OAuth 2.0 was going in was removing cryptography and relying on TLS. I think this turned out to be a good decision in hindsight, since TLS did become the norm very quickly, and the state of cryptography at the IETF during that period (2010) was rather abysmal. If OAuth 2.0 had mandated signatures, we'd have ended up with yet another standard pushing RSA with PKCS#1 v1.5 padding (let's not pretend most systems use anything else with JWT).
But hindsight is 20/20, I guess. I think the point that has stood the test of time best is that OAuth 2.0 was more of a "Framework" than a real protocol. There are too many options and you can implement anything you want. Options like the implicit flow or the password grant shouldn't have been in the standard in the first place, and the language regarding refresh tokens and access tokens should have been clearer.
Fast forward to 2024: I think we've started going back to cryptography again, but I don't think it's all good. The cryptographic standards that modern OAuth specs rely on are too complex, and that leads to a lot of opportunities for attacks. I've yet to see a single cryptographer or security researcher who is satisfied with the JOSE/JWT set of standards. While you can use them securely, you can't expect any random developer (including most library writers) to be able to do so.
It would fix a lot of the provider-specific aspects of OAuth2 if the spec were stricter about some claim (attribute) names in the JWT ID token. Some providers supply groups, some don't. Some call it roles or direct_groups. Some include preferred_username, some don't. Some include full name, some don't, and don't get me started on name vs. first_name.
If you implement OIDC you almost certainly must provide a configurable mapping system from source claim names to your internal representation of a user object.
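Something like this mapping table is what that usually ends up looking like. Provider names and claim variants here are invented:

```python
# Sketch: per-provider claim mapping into one internal user shape.
# "provider-a"/"provider-b" and their claim names are illustrative only.
CLAIM_MAPS = {
    "provider-a": {"username": "preferred_username", "groups": "groups"},
    "provider-b": {"username": "upn", "groups": "roles"},
}

def normalize_user(provider, claims):
    """Translate an IdP's claims into the internal user representation."""
    mapping = CLAIM_MAPS[provider]
    return {internal: claims.get(source)
            for internal, source in mapping.items()}

normalize_user("provider-b", {"upn": "alice@example.com", "roles": ["admin"]})
# {'username': 'alice@example.com', 'groups': ['admin']}
```

Making the table configuration rather than code means onboarding a new IdP is a data change, which is about the best you can do while providers disagree on claim names.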
No aspect of this is good for anyone. First, standards you have to pay to obtain are a really, really bad thing. Second, I wish more effort would go into designing standards and implementations that aren't such an endless time sink when you need them.
I agree about ISO, but I don't think there's a meaningful "toll gate" in this case: the standards are already free and public, this seems to just assign them identities in the ISO's standardization namespace.
(I'm at a loss to explain what benefit comes from being assigned an ISO standard versus putting a HTML document on the Internet.)
> (I'm at a loss to explain what benefit comes from being assigned an ISO standard versus putting a HTML document on the Internet.)
From the article:
"[ISO certification] should foster even broader adoption of OpenID Connect by enabling deployments in jurisdictions around the world that have legal requirements to use specifications from standards bodies recognized by international treaties, of which ISO is one."
The point was that countries clearly recognize standards that aren't bound to an ISO (or other international standards) process, given that every country in the world uses TCP, HTTP, and HTML.
(Unless we're now considering the IETF/W3C an international standards body? I can't find a good list of these anywhere.)
That's fair. And this type of standardization is far enough outside my wheelhouse that I don't know how to judge Mike's comments. He's pretty deep in that space, so I take it at face value. I don't think he'd have pushed this effort without there being value.
Looked on the OpenID mailing list site[0] and didn't find any discussion, so I can't offer any other insight. I suppose you could contact Mike[1] and ask why it is such a big deal?
> Unless we're now considering the IETF/W3C an international standards body?
Most of what I know about standards bodies I learned from Heather Flanagan, who is/was active in a lot of these and did a great presentation at Identiverse in 2022[2] about this very topic.
>The World Wide Web Consortium (W3C) is the main international standards organization for the World Wide Web.
>The Internet Engineering Task Force (IETF) is a standards organization for the Internet and is responsible for the technical standards that make up the Internet protocol suite (TCP/IP).
I would also add the IEEE to this list. I think it's pretty clear these groups are international standards organizations, and I think it's pretty odd that OpenID Connect wasn't circulated as an RFC and they went the ISO route instead.
Many standards are ISO standards as well as a standard from some other body. I have some involvement with the floating point standard, and that is mostly an IEEE standard, but the chair of the committee does ISO standardization as well for each revision.
Countries recognize standards broadly along two avenues:
* Internationally recognized standards organizations, such as ISO (literally International Organization for Standardization). Republic of Backwoods and Kingdom of Flyover can't really do much against the majority of the whole world agreeing on something.
* Bigger Gun and/or First Past The Post Adoption, mostly exercised by the US in recent decades. Examples include practically all IT standards, aviation standards, and so on.
If you manage to combine them, you're basically unstoppable at conquering the world.
Any sort of government or similarly "official" organization loves to refer to ISO standard XXXX instead of writing out a summary of the standard when they document things.
Sometimes you see the same thing with organizations referring to web RFCs. It's likely because of a general culture of "don't try to invent new things if you already have a reference for it", although it doesn't really tend to make those documents readable.
> (I'm at a loss to explain what benefit comes from being assigned an ISO standard versus putting a HTML document on the Internet.)
Single source of truth. The internet has been plagued by numerous incompatible implementations of the same thing. There are numerous tests[0] showing incompatibilities between implementations of even a format as simple as JSON.
How many times have you heard "yeah, nice feature, but virtually nothing implements it"? A standard becomes whatever the majority of highly adopted implementations do, rather than the formal specification. This is what you get for putting an HTML document on the internet. ISO standardization somewhat reduces this effect.
> but I don't think there's a meaningful "toll gate" in this case: the standards are already free and public
Major problem with ISO standards is that they cross-reference each other. It's rare NOT to find definition "X as defined in ISO 12345". Complex product may need to reference hundreds of ISO standards.
Somewhat tautologically I agree with you, as in reality things are probably going to be implemented by referencing subtly incompatible tutorials on the internet while claiming ISO compatibility.
So to get a single source of truth we make, presumably, the same truth have more sources.
I think I know what you mean (sources as in standards organizations, not individual standards), but I also think that people arrive at this odd position because they aren't actually thinking about the practicality and the absurdity of making the world more complex and confusing.
> I don't think there's a meaningful "toll gate" in this case: the standards are already free and public
See Adobe and PDF: PDF 1.7 was available gratis from Adobe and also (“technically identical to”) an ISO standard. At the time, people expressed concerns about ISO’s paywalls and Adobe reassured them there was an agreement to ensure that wouldn’t happen. Indeed it did not... until PDF 2.0 came along, developed at the ISO, and completely paywalled.
I seem to remember (but don’t quote me on that) that AVIF and JPEG XL standards were at one point downloadable free of charge. In any case, they aren’t today.
Historically the IETF has been reluctant to get involved with Identity (and hence authentication) for various reasons. There are a few standards bodies in this area and they all have their strengths and weaknesses (the presentation by Heather Flanagan someone linked to elsewhere in the thread gives a good introduction).
Even some RFCs are basically available as ISO standards and vice versa; e.g. for time/date formats you almost never need to buy ISO 8601 and can just read RFC 3339 (which is technically a 'profile' of ISO 8601).
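For the common case, that profile is all you need in practice. A small sketch in Python: `datetime.fromisoformat()` handles an RFC 3339 timestamp with a numeric offset directly, which covers most real interchange (the `Z` suffix additionally needs Python 3.11+).

```python
from datetime import datetime, timedelta, timezone

# RFC 3339 is a strict profile of ISO 8601: full date, 'T' separator,
# full time, and a mandatory UTC offset. The RFC's own example:
dt = datetime.fromisoformat("1996-12-19T16:39:57-08:00")
print(dt.utcoffset() == timedelta(hours=-8))   # True

# Normalize to UTC for storage/comparison:
print(dt.astimezone(timezone.utc).isoformat())  # 1996-12-20T00:39:57+00:00
```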
Standards are nice, but the large standards organizations like ISO annoyingly charge a bit to view them. I suppose this is because some businesses/industries require "real" standards by those orgs rather than the IETF or other dirty open-source hippie collectives.
"Standard" and "costs money" feel at odds with each other. If you want something to become standard, as in the standard/most common way of doing something, it has to be abundantly accessible so that it can be widely implemented.
It's a double edged sword. Actually creating a good standard that people want to use through an open process that aims to be unbiased takes a non-trivial amount of time and hence costs a not insubstantial amount of money. Different standards organisations have chosen different approaches to solving that problem, and although I completely agree and freely available standards are my preferred approach, it is also very clear that ISO standards are well respected and widely used despite the need to pay to view them in some cases.
Some kind of funding model where large corporations can pay to have a standard written would be ideal. Even then it seems a bit odd. Web APIs don't seem to have this problem. Just have the big corps donate some engineers instead. I don't know, I guess nothing is perfect.
Most standards are low priced enough that they are sold at a loss. If you would prefer to donate to ISO (or the IEEE for another example) instead, that would help allay the cost of writing a standard.
My understanding it is quite often government/country contexts where (because ISO is recognised in various international treaties) it is easier to get approval to use an ISO standard than it is to use an OpenID Foundation standard. So getting OpenID Connect published with an ISO number just makes adoption easier for some projects.
OpenID Connect does of course remain free view/use, but now people in the above situation have an easier option available to them.
ISO is non-free garbage which is not helping the software ecosystem. Take ISO 8601: it is overly complicated, frequently implemented incorrectly because maintainers worked from a free draft, and it does not actually solve anything properly. (For example, you cannot represent wall-clock time, which is a problem for dates in the future, since time zone rules change.)
I also worked with mp4 in the past only to realize that the ISO was not enough as Apple had some changes in their stack.
> For example you cannot represent wall clock which is a problem for dates in the future as time zone changes
While I understand this particular frustration, in my book it is a feature. The critique usually devolves into hypothetical scenarios, e.g. changes to DST. "I want to be able to specify 14:00 in four years in Absurdistan local time, whatever that is in relation to UTC, but cannot!" is a common critique of ISO 8601. However, if you go a little bit further with hypotheticals, Absurdistan might add some overseas territories, might join some alliance and change timezone/DST, etc.
When you think about the problem statement, the definition of "local time" itself may change, so it is impossible to specify a future "local time" without *exhaustively* defining all possible changes.
So you either pin a future instant as a number of atomic ticks (TAI) and resolve it to local time at the time of use, or store the intended static wall-clock time and resolve it to an instant at the time of use.
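The second option above can be sketched in a few lines: store the user's intent as a naive wall-clock time plus a zone *name*, and resolve to UTC only at the moment of use, with whatever rules the tz database has *then*. (An illustrative design, not the only one; `zoneinfo` needs the `tzdata` package on platforms without a system tz database.)

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# What we persist: the intended wall-clock time plus an IANA zone name.
stored = ("2030-07-01 14:00", "Europe/Berlin")

def resolve_to_utc(naive_str: str, zone_name: str) -> datetime:
    """Resolve a stored wall-clock time using the tz rules in force *now*."""
    naive = datetime.strptime(naive_str, "%Y-%m-%d %H:%M")
    return naive.replace(tzinfo=ZoneInfo(zone_name)).astimezone(timezone.utc)

# With today's tz database, Berlin in July is UTC+2 (CEST):
print(resolve_to_utc(*stored).isoformat())  # 2030-07-01T12:00:00+00:00
```

If Germany abolished DST before 2030, re-running the same code against an updated tz database would yield a different UTC instant, which is exactly the desired behavior for "14:00 local time".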
Are there any alternatives to ISO 8601? My only beef with it is that there seems to be more than one way to represent a few things. Didn't know about the wall-clock issue.
I wonder if the new JS temporal API handles that.. they went pretty deep.
RFC 3339 should be preferred to ISO 8601, but it does not solve the wall-clock issue. In a calendar application I worked on, we used a city name and lat/lng plus a "naive" timestamp (yyyy-mm-dd hh:mm, or a moon-based date, or other specialties like the Japanese year, depending on the locale), and we "resolve" to UTC + offset when the date is close.
See even RFC 3339 seems unnecessarily lax. You can use _ or lowercase t or space as the date and time separator? Why? Just pick one instead of complicating all the parsers.
The intersection of the two specs seems pretty good though.
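The intersection really is small and unambiguous: one separator, one precision, one way to write UTC. A hedged sketch of an emitter that only produces that one shape, sidestepping the "many spellings of the same instant" problem (the function name is just for illustration):

```python
from datetime import datetime, timezone

def to_interop(dt: datetime) -> str:
    """Emit the ISO 8601 / RFC 3339 intersection profile:
    uppercase 'T', whole-second precision, literal 'Z' for UTC."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(to_interop(datetime(2024, 3, 1, 9, 30, tzinfo=timezone.utc)))
# 2024-03-01T09:30:00Z
```

Accepting liberally but emitting only this profile is the usual robustness compromise.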
With C++, the latest draft of the standard is made available for free [0]. My understanding is that the final draft and the official standard are more or less same w.r.t. their material content. I imagine the draft standard for OIDC is also available somewhere.
Almost every standard is like this. If you just want to implement, you take the last draft that's public. The process between that draft and a standard is an extensive review and editing process to ensure that wording is exactly precise, patent claims against the standard are void or invalid, and that there are no other problems. That stuff takes time and money.
Not everything MUST be black and white. It is okay to admit things are grey. Those engineers creating standards are not actively harming human progress, and it's insane to say that they are.
Identity provisioning is an abomination that shouldn't have been invented. I used to be a fan back in mid-'00s, self-hosting an OpenID server, without realizing how the whole concept is so fundamentally flawed.
Identity is an innate and inalienable property of an individual, not something that anyone else (another person, company/website, government, or whoever else) can "provide". They can merely attest to it by providing a credential, e.g. by issuing a passport.
But doesn't this make the assumption that the identity being provisioned is exactly you and only you? I've always seen these identities as my pseudonym on some identity provider and used them in that manner.
I suppose I've used some identities in enough places that it would be hard to deny to certain entities that the identity was mine, but even in that case it's a small subset of entities which have seen the identity that could prove that it's me.
For me personally, it's primarily philosophical - I don't want to be defined by someone else, only confirmed that it's, indeed, me.
However, there are practical consequences. There are plenty of stories how people got their Google/Facebook/Apple/... accounts blocked; or domain name lapsed, for self-hosting folks - and thus lost "their" "identity" (quotes for a lack of better words).
One can back up their credentials and attestations - losing a password (or a keypair) is preventable, since it's all first-party. One cannot back up a third-party service.
This is fair, but I appreciate having an interoperable standard nonetheless. I'm currently designing an auth system based on oidc where the first party service is itself an IDP, but you could also use a third party IDP like Google. So if you're worried about losing access to your Google account, you can use the first party IDP and set up an account, and if you'd rather not set up a new account, verify email, set up 2fa etc, you can just reuse your Google ID. Since they're both standard OIDC flows, it's not hard to support both (and many more)
Of course, I understand the appreciation for an open standard, especially one less messy than SAML. :-) And I can see valid use cases for OIDC when it's all first-party, all in the same system. Maybe if it would use different language, I would have less or no issues with it. I need to think about it... Thanks for giving me some food for thought!
Self-hosting is not a solution, though, unless we're talking about a system like yours where - if I understood you correctly - there's no third party. I used to love "classical" OpenID back in the mid-'00s (I hadn't really thought about it back then, and it was convenient) and self-hosted an OpenID provider. I stopped doing so when OpenID went out of fashion; and later I learned that one cannot own a domain name, merely lease it from an authority (despite whatever anyone says, that's just how it is in practice), so tying an identity to that is a bad idea. I-names/i-numbers are also dead. And that's how I started to think about it and eventually came to the understanding that identity provisioning is a fundamentally broken idea.
OIDC as a machine-to-machine protocol is fine, save for the associated language. But how it works in, what I suspect, the majority of real-world implementations is just conceptually wrong (and practically harmful).
There's a balance to strike between convenience and robustness. Even a legal identity can be stolen, and countries have broad authority to regulate national TLDs and could make owning one just as robust as owning property. The core of an OIDC token is "iss" (issuer, e.g. https://accounts.google.com) and "sub" (subject, i.e. a unique immutable ID, e.g. 1234567890). Is that really different from a passport, which has at its core a nation ("iss") and number ("sub")? Sure, there's other stuff like names and pictures, but OIDC tokens can have those too.
Having backups is key. The only difference is legal ID's can be backed up without registering the backups with each service ahead of time. Perhaps this could be engineered into OIDC too: a backup field with hashes of other OIDC tokens or something.
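To make the iss/sub pair above concrete: an ID token is three base64url segments, `header.payload.signature`, and the pair lives in the payload. This sketch only *inspects* the payload; real code must verify the signature first (e.g. against the issuer's published JWKS). The example token here is hypothetical and unsigned, built purely for illustration.

```python
import base64
import json

def peek_claims(token: str) -> dict:
    """Decode the payload segment of a JWT WITHOUT verifying it.
    For inspection/debugging only -- never trust unverified claims."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a hypothetical token to inspect:
claims = {"iss": "https://accounts.google.com", "sub": "1234567890"}
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"header.{segment}.signature"

print(peek_claims(token))   # {'iss': 'https://accounts.google.com', 'sub': '1234567890'}
```

The (iss, sub) pair is the stable identity key; everything else in the payload is decoration.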
There are at least a few others left; https://gitlab.com is one I regularly use.
Sadly the amount of money you need to spend on security & support for such a service does make offering such a service (particularly for free) not viable for smaller entities, there are some big economies of scale to make these things viable that work particularly well if you can also get big companies to pay for commercial offerings.
It's using Python to do the work, but it should be straightforward to implement it in anything you want. Most of the complex stuff is related to decoding and checking the JWT tokens.
I'm using exactly this hand written client on a production project to authenticate with keycloak for like a year and everything's working perfectly!
PS: I know that there are way too many ads to my site. Unfortunately I haven't found the time to properly configure google ads and haven't found anything better :( Use an ad-blocker to read it.
PS. Be cautious with using subjective words like "simple". It can be really off-putting as a reader if you think something is difficult and the author claims it's simple.
I'm still not convinced that OIDC is easy. Keycloak hides enormous complexity and that's not because the developers were bored. One example is the huge number of settings for various timeouts: SSO timeouts, client timeouts, various token timeouts.
The whole monetization and organization around ISO standards feels super shady.
One lesser-known hack is to search the friendly Estonian site [1] for a cheaper version of the standard - they often create their own versions of the standards, which pretty much contain the exact same content as the original. Unfortunately, in this case, it seems they are only offering the actual standard at a similar price [2]. Sad dog face.
It could be worthwhile to monitor the website to see if they release their own version for a better price in the future. Usually, their prices are ~10% of the original price (one more data point that Estonia does cool stuff).
We deal with the rather shady standardization organizations quite a lot as we work in medical device compliance [3]. I've heard all the usual arguments: "But standardization costs money!", "These organizations are doing good work!", etc., etc. No. I completely disagree. If something's a standard, that in my opinion makes it similar to a law - people should be able to follow it, and that requires people to freely access it. The EU Advocate General seems to agree [4]. And there are lots of standardizations which don't rely on shadily offering PDFs for money: ECMAScript and ANSI C come to mind, but the list goes on.
Context: Mike Jones is one of the 3 members of the OIDC working group. He celebrates the publication of the spec as a Publicly Available Specification (PAS) and has worked to include the errata so that it is a complete document.
ISO standards are not publicly accessible. OIDC was always publicly accessible; now I dunno. ISO OIDC is and will be meaningless. ISO is a racket to keep people with no useful skills employed, and the organization should be fined for the taxpayer money it took and should be sanctioned by every nation in the world.
> Publicly Available Specifications have a maximum life of six years, after which they can be transformed into an International Standard or withdrawn.
So is the plan to transform this into a standard after that period? Was a PAS application chosen because (it sounds like) it goes through the standards body quicker? So this gives an intermediate ISO seal of validation until this becomes a full International Standard?
In my experience KeyCloak can be a very mixed bag.
And if you are especially unlucky might be so painful to use that you could say it doesn't work.
But for a lot of use cases it does work well.
In some contexts it might save you not just developer months, but years (e.g. certain use cases with certification/policy-enforcement aspects).
But it can also be a story of running from one rough edge into another, where your project lead/manager starts doubling or tripling any time estimate for stories involving Keycloak.
E.g. the REST API of Keycloak (for programmatic management of it) is very usable but full of inconsistencies and terribly documented (I mean, there is an OpenAPI spec, but it very rarely gives you satisfying answers about the meaning of a listed parameter beyond a non-descriptive three-word description). (It's also improving with each version.)
Similarly, multi-tenancy can be a pain, depending on what exactly you need. UMA can be great or a catastrophe, again depending on your specific use cases. SSO user management can be just fine or very painful. There is a UI for customizing the auth flow, but it has a ton of subtle internal/implementation-detail constraints/mechanics that are not well documented and that you have to know to use it effectively. So unless you can copy-paste someone else's solution or only need trivial changes, this can be a huge trap (looking like an easy change but being everything but that)...
The built-in mail templates work, but can get your mail delivery (silently) dropped (not sure why; maybe some scammers used Keycloak before).
The default user-facing UI works, but you will have to customize it, even if just for consistent branding, and it uses its own Java-specific rendering system. (And consistent branding here isn't just a fancy-looks goal: one of the first things taught in scam-avoidance courses for non-technical people is that if a login page looks very different, it's probably a scam and you shouldn't log in.)
I think Keycloak is somewhat of a no-brainer for large teams, but can be a dangerous trap for very small teams.
I cannot agree more on the "very mixed bag". In general, Keycloak is a very solid authorization server. There is a very active community around Keycloak, and if one needs an authorization server, they should consider picking it. We use it in an internal multi-tenant environment as a central customer authentication platform.
BUT there's also a lot of pain in using Keycloak, especially when you build on top of it.
One of the great things about Keycloak is that it can be seen as an extensible platform. Nearly everything is customizable. However, it takes a lot of time and effort to learn the programming model, and especially the documentation is a PITA. It was even worse 2 years ago, but it's still a lot of fiddling around, and usually I find myself reading the Keycloak source (which is of very mixed quality, btw.) because no documentation is available.
Multi-tenancy is indeed also a hard topic. We heavily utilize multiple realms for that (I think we have hit 100 already), but newer developments in Keycloak now offer an additional model based around organizations [1].
One thing that bothers me personally is that the storage model of keycloak does not offer zero-downtime migrations and the work on refactoring it has been paused for now.
Since we have a solution based on Keycloak, we deploy Keycloak regularly with every feature. However, the embedded Infinispan cache makes this hard at scale, as during redeployments Keycloak becomes unresponsive due to cache-leader synchronization and eventually drops some caches, leading to user sessions being terminated. Fortunately, Keycloak finally has support for an external Infinispan cache [2] and moved to protocol buffers for serialization [3], which should make upgrades smoother and paves the road to zero-downtime deployments.
After all this, let me emphasize that for most of these issues the Keycloak team is aware and is working hard on moving forward. I started using Keycloak with version 11, and a lot has changed for the better.
> I think Keycloak is somewhat of a no brainer for large teams, but can be a dangerous traps for very small teams.
I would not agree on that. You don't necessarily need a large team for operating Keycloak, and smaller teams are probably not that big of an issue - at least not for operating Keycloak. What I feel is more of a problem is that many (at least of our) clients do not understand OAuth 2.0 and OIDC well, which puts a larger burden on support work. Also, there must be at least someone very knowledgeable about Keycloak to give guidance to the rest of the team, but in general I would not say it's dangerous. We even have a team operating their own Keycloak and using our Keycloak to perform OIDC federation (don't ask me why, though; I never understood it).
> Also there must be at least someone very knowledgable of Keycloak to give guidance to the rest of the team
this is why I said large team: you need people with knowledge, but on small teams you likely can neither hire them (if the team already exists) nor spare the resources to give someone a lot of time to get into it. And having just one specialist would be very risky, so you actually need 1.5-2 people. Things become worse if you also need plugins and are not a Java workshop at all (as even if you don't write them yourself, you should review 3rd-party plugins).
Though I guess that doesn't apply if you only do very straightforward things - no multi-tenancy, no fancy anything - then a small team works well, too.
I mainly said it can be dangerous because small new startups are often on a very, very tight schedule, so the delays you can run into if you don't have a Keycloak expert can be quite hurtful for them.
We run Apereo CAS pretty successfully. Originally to use the CAS protocol, but now that CAS (the protocol) has been deprecated, we're slowly migrating to OIDC. One sort of weird note about Apereo CAS: OpenID Connect can return data in two formats, nested and flat. CAS is the only server I've ever worked with that defaults to nested. Almost no clients support this, but the server can be reconfigured to use flat.
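The nested-vs-flat mismatch is simple to bridge on the client side if you can't reconfigure the server. A hedged sketch (the payload shapes here are illustrative, not the exact CAS response format): nested responses tuck the user's claims under a sub-object, while most clients expect them at the top level.

```python
def flatten_userinfo(userinfo: dict, nest_key: str = "attributes") -> dict:
    """Lift claims nested under `nest_key` to the top level, keeping
    any claims that were already flat. Purely illustrative shapes."""
    flat = {k: v for k, v in userinfo.items() if k != nest_key}
    flat.update(userinfo.get(nest_key, {}))
    return flat

nested = {"sub": "alice", "attributes": {"email": "alice@example.org", "name": "Alice"}}
print(flatten_userinfo(nested))
# {'sub': 'alice', 'email': 'alice@example.org', 'name': 'Alice'}
```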
KeyCloak is also very good, but I'd run it as a container due to the quick release/update cycle. If I had to do our infrastructure over, I'd probably go for KeyCloak, just because it's the most used.
Yeah, we only have the one issuer, so it's not a concern. For pretty much every KeyCloak project I've done, we have also favored doing separate deployments for separate issuers, so I'd say it's not much of an issue, in most cases.
Regarding the optional standards: no. We've not run into any clients that would require, or in many cases even support, anything but the most basic OpenID Connect. I'm sure there's a point to supporting them, but I've never seen them needed for your average use case.
Another closed source option, which can be self-hosted or used as a SaaS: FusionAuth (my employer). It also has a full featured plan which is free if you self host and is available via docker, etc.
Since OIDC is a layer on top of OAuth 2, it inherits its complexity. OAuth 2.1 (currently a draft) will help bring some sanity. GNAP - https://oauth.net/gnap/ - will, one day, tie everything together.
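The layering is visible right at the wire level: an OIDC authorization request is just an OAuth 2 authorization request with `openid` in the scope (plus the ID token that comes back alongside the access token). A sketch, with a placeholder issuer and client_id:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and client; only the parameter names are real.
params = {
    "response_type": "code",                       # plain OAuth 2 code flow
    "client_id": "my-client",
    "redirect_uri": "https://app.example/callback",
    "scope": "openid profile email",               # 'openid' is what makes it OIDC
    "state": "af0ifjsldkj",                        # CSRF protection, per OAuth 2
}
url = "https://issuer.example/authorize?" + urlencode(params)
print(url)
```

Drop `openid` from the scope and the same request is ordinary OAuth 2 authorization, which is exactly why the two are so often conflated.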
When I looked at it a few years ago[0] it seemed like a modernization of OAuth (which still uses form posts(?!?)). But I'm worried about uptake, myself. Haven't had a single client request it or bring it up.
The group's other major work, the GNAP RS specification, has passed IESG final review and is in the queue with the RFC editor. This means it's finished except for some final text edits and will be a final RFC soon, too:
The group closed because it didn't have any more work to do.
As for adoption, we've seen it implemented in a handful of spaces around the internet, most of them in places where there are core problems with the OAuth model or GNAP offers cleaner support for some key feature like ephemeral clients or key management.
This is water under the bridge, but it's a bit ironic to me that I had to create an account to comment here. If OpenID 2.0 had succeeded, I would be commenting with my identifier https://self-issued.info/, which remains a valid, verifiable OpenID 2.0 identifier. You can still comment on my blog with OpenID 2.0 identifiers today. Have at it!
OpenID Connect is effectively an evolution of OAuth.