But when you can use them, cookies are demonstrably better. XSS is the main argument against localStorage. Even this article[0], which pillories cookies, starts off with:
> ...if your website is vulnerable to XSS attacks, where a third party can run arbitrary scripts, your users’ tokens can be easily stolen [when stored in localStorage].
The reasons to avoid cookies:
* APIs might require an Authorization header in the browser fetch call (a sketch of both call styles follows this list).
* APIs might live on a different domain, rendering cookies useless.
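For what that first point looks like in practice, here's a rough sketch of the two call styles (the URL and storage key are made up):

```typescript
// Token kept in localStorage and sent in an Authorization header:
// any script running on the page can read it.
async function callWithBearer() {
  return fetch("https://api.example.com/me", {
    headers: { Authorization: `Bearer ${localStorage.getItem("access_token")}` },
  });
}

// Cookie-based: the browser attaches the (HttpOnly) session cookie itself,
// so no script on the page ever handles the credential.
async function callWithCookie() {
  return fetch("https://api.example.com/me", { credentials: "include" });
}
```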
CSRF is a danger, that's true, but it can be worked around. My understanding is that XSS has a wider scope and that many modern frameworks come with CSRF protection built in[1], whereas XSS is a risk any time you (or anyone in the future) include any JS code on your website.
> * APIs might live on a different domain, rendering cookies useless.
That's when you implement a BFF which manages your tokens and shares a session cookie with your frontend while proxying all requests to your APIs. And as said, you "just" have to set up a way for your BFF to share CSRF tokens with your frontend.
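A very rough sketch of that shape, assuming a Node/TypeScript BFF (the upstream URL, cookie name and session store are placeholders, and login/CSRF handling is omitted):

```typescript
import http from "node:http";

// Illustration only: a real BFF would use a persistent session store and
// would set the HttpOnly "sid" cookie at login time.
const sessions = new Map<string, { accessToken: string }>();
const API_ORIGIN = "https://api.example.com"; // assumed upstream API

const server = http.createServer(async (req, res) => {
  // The browser only ever holds the opaque session cookie; the token stays here.
  const cookie = req.headers.cookie ?? "";
  const sid = /(?:^|;\s*)sid=([^;]+)/.exec(cookie)?.[1];
  const session = sid ? sessions.get(sid) : undefined;

  if (!session) {
    res.writeHead(401).end("not logged in");
    return;
  }

  // Proxy to the API, swapping the session cookie for a bearer token.
  // (Request bodies are not forwarded, purely to keep the sketch short.)
  const upstream = await fetch(`${API_ORIGIN}${req.url}`, {
    method: req.method,
    headers: { Authorization: `Bearer ${session.accessToken}` },
  });
  res.writeHead(upstream.status);
  res.end(await upstream.text());
});

server.listen(3000);
```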
Yup, big fan of the BFF. Philippe de Ryck did a presentation on the fundamental insecurity of token storage on the client that he allowed us to share: https://www.youtube.com/watch?v=2nVYLruX76M
If you can't use cookies (which as mentioned above, have limits) and you can't use a solution like DPoP (which binds tokens to clients but is not widely deployed), then use the BFF. This obviously has other non-security related impacts and is still vulnerable to session riding, but the tokens can't be stolen.
CSRF is not as big of an issue as it used to be, and when it is an issue it can be solved more easily and comprehensively than XSS:
1. The default value for the SameSite attribute is now "Lax" in most browsers. This means that unless you explicitly set your authentication cookies to SameSite=None (and why would you?), you are generally not vulnerable to cookie-based CSRF (other forms of CSRF are still possible, but not relevant to the issue of storing tokens in local storage or cookies).
2. Most modern SSR and hybrid frameworks have built-in CSRF protection for forms and you have to explicitly disable that protection in order to be vulnerable to CSRF.
3. APIs which support cookie authentication for SPAs can be deployed on another domain and use CORS headers to prevent CSRF, even with SameSite=None cookies (a rough sketch of points 1 and 3 follows this list).
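Here's the sketch mentioned above, covering points 1 and 3 with plain Node (the origins, cookie value and paths are placeholders):

```typescript
import http from "node:http";

// Only frontends on this allowlist get credentialed cross-origin access.
const ALLOWED_ORIGINS = new Set(["https://app.example.com"]); // assumed frontend origin

const server = http.createServer((req, res) => {
  const origin = req.headers.origin;
  if (origin && ALLOWED_ORIGINS.has(origin)) {
    res.setHeader("Access-Control-Allow-Origin", origin);
    res.setHeader("Access-Control-Allow-Credentials", "true");
  }

  if (req.method === "POST" && req.url === "/login") {
    // HttpOnly keeps the cookie out of reach of page scripts (XSS can't read it);
    // SameSite=Lax means the browser won't attach it to cross-site POSTs.
    res.setHeader(
      "Set-Cookie",
      "session=opaque-session-id; HttpOnly; Secure; SameSite=Lax; Path=/"
    );
    res.end("logged in");
    return;
  }

  res.end("ok");
});

server.listen(3000);
```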
On the other hand, there are no mechanisms which offer comprehensive protection from XSS. It's enough for a single JavaScript dependency that you use to have a bug and it's game over.
For this reason, OAuth 2.0 for Browser-Based Applications (draft)[1] strongly recommends using an HttpOnly cookie to store the access token:
"This architecture (using a BFF with HttpOnly cookies) is strongly recommended for business applications, sensitive applications, and applications that handle personal data."
With regards to storing access tokens and refresh tokens on local storage without any protection it says:
"To summarize, the architecture of a browser-based OAuth client application is straightforward, but results in a significant increase in the attack surface of the application. The attacker is not only able to hijack the client, but also to extract a full-featured set of tokens from the browser-based application.This architecture is not recommended for business applications, sensitive applications, and applications that handle personal data."
And this is what it has to say about storing the refresh token in a cookie, while keeping the access token accessible to JavaScript:
"When considering a token-mediating backend architecture (= storing only access token in local storage), it is strongly recommended to evaluate if adopting a full BFF (storing all tokens in a cookie) as discussed in Section 6.1 is a viable alternative. Only when the use cases or system requirements would prevent the use of a proxying BFF should the token-mediating backend be considered over a full BFF."
In short, the official OAuth WG stance is very clear:
1. HttpOnly cookies ARE better in terms of security.
2. Storing Refresh Tokens in local storage is only recommended for low-security use cases (no personal data, no enterprise compliance requirements).
3. Storing short-lived Access Tokens in local storage should only be considered if there are technical complexities that prevent you from using only cookies.
OAuth-based auth providers are nice, but they can have a weakness. When you have just one app, OAuth can be overkill: the protocol is complex, and users suffer jarring redirects¹.
This is not surprising, because OAuth / OIDC is fundamentally designed for (at least) three parties that don't fully trust each other: the user, the account provider and an app². But in a single app there are only two parties: the user and the app itself. Auth and app can fully trust each other, the protocol can be simpler, and redirects can be avoided.
It's fair to say that with OAuth the authorization server can choose whether to display a consent screen. For example, when consent has already been granted, the screen can be skipped. Likewise, Google Workspace and other enterprise services that use OAuth can configure in advance which apps are trusted and thus skip permission grants.
Not to say the concern about redirects isn't legitimate, but there are other ways of handling this. Even redirects aren't necessary if OAuth is implemented in a browser-less or embedded-browser fashion, e.g. SFAuthenticationSession for one non-standard example. I haven't looked this up in a while, but I believe the OAuth protocol was being extended more and more to other contexts beyond the browser - e.g. code flow or new app-based flows and even QR auth flows for TVs or sharing prompts.
(Note I am not commenting on OpenAUTH, just OAuth in general. It's complex, yes, but not as bad as it might seem at first glance. It's just not implemented in a standard way across every provider. Something like PassKeys might one day replace it.)
For browserless, I was referring to a 2019 article that I could have sworn was newer than that, on the need for OAuth 2.1 that also covers how they added OAuth for Native Apps (Code Flow) and basically a QR code version for TVs: https://aaronparecki.com/2019/12/12/21/its-time-for-oauth-2-...
As for SFAuthenticationSession, again my info might be outdated, but the basic idea is that there are often native APIs that can load OAuth requests in a way that doesn’t require you to relogin. Honestly most of those use cases have been deprecated by PassKeys at an operating system level. There’s (almost) no need for a special browser with cookies to your favourite authenticated services if you have PassKeys to make logging in more painless.
I agree that passkeys would solve all that, but they have their own set of problems (mainly being bound to a device) and they are still very far from being universally adopted.
I’m looking forward to OAuth 2.1 - at the moment it is still in draft stage, so it will take a couple more years until it’s done and providers start implementing it.
My prediction is that passwords will be around for a long time, at least the next 10 years.
PassKeys are definitely the future; they aren't just device-specific, they can also be synced. https://www.corbado.com/blog/nist-passkeys talks about this, though I'll admit I haven't read anything on the subject yet. But I can say that most implementations of PassKeys seem to cloud sync, including 1Password, Apple, Google, Edge, etc.
I should also add that PassKeys that are tied to devices are like FIDO2 security keys: you should be able to add more than one to your account so that you can log in with a backup if your primary FIDO2 token is unavailable.
Likewise, SSO should ideally be implemented such that you can link more than one social network - and a standard email address or backup method - in addition to the primary method you might use to log in with. It has always bugged me that Auth0 makes it much harder than it should be to link multiple methods of login to an account by default.
The biggest issue I've seen organisations facing with PassKeys is that neither iOS nor Android requires biometrics to unlock one - this seems like a massive drawback.
Most apps wanting extra authentication implement biometrics which fall back to an app-specific knowledge-based credential like a PIN or password. As far as I can tell, PassKeys on those devices fall back to the device PIN, which in the case of family PCs/iPads/tablets is known to the whole household.
I've seen several organisations give up on them for this reason.
The article by Ory's Aeneas Rekkas perfectly describes the problems with OAuth / OIDC. The only thing it misses is a suggestion for an alternative protocol for first-party auth. It does suggest that it's preferable to use simpler systems like Ory Kratos, but OAuth / OIDC is a set of protocols, not an implementation. Is there an effort to specify a simple auth protocol for when third-party auth is not needed?
It can vary by implementation need. Can you send a time-limited secret as a login link to someone's email as a replacement for entering or managing passwords? Can you use PassKeys? Or a simple username and password? (Password storage and management is left as an exercise to the reader.)
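For the email-link option, a minimal sketch of a time-limited, HMAC-signed token (the secret, lifetime and URL are placeholders; a real implementation would also make tokens single-use):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = process.env.LOGIN_LINK_SECRET ?? "change-me"; // placeholder secret
const TTL_MS = 15 * 60 * 1000; // link stays valid for 15 minutes

function createLoginToken(email: string): string {
  const expires = Date.now() + TTL_MS;
  const payload = `${email}|${expires}`;
  const sig = createHmac("sha256", SECRET).update(payload).digest("hex");
  return Buffer.from(`${payload}|${sig}`).toString("base64url");
}

// Returns the email if the token is valid and unexpired, otherwise null.
function verifyLoginToken(token: string): string | null {
  const [email, expires, sig] = Buffer.from(token, "base64url").toString().split("|");
  if (!email || !expires || !sig) return null;
  if (Date.now() > Number(expires)) return null; // link has expired
  const expected = createHmac("sha256", SECRET).update(`${email}|${expires}`).digest("hex");
  if (sig.length !== expected.length) return null;
  return timingSafeEqual(Buffer.from(sig), Buffer.from(expected)) ? email : null;
}

// The link you would email: `https://example.com/login?token=${createLoginToken(email)}`
```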
Part of the question is - why present a login? Do you need an identity? Do you need to authorize an action? How long should it last?
Generally, today, PassKeys are the "simple" authentication mechanism, if you don't need a source of third-party identity or can validate an email address yourself. (Though once you implement email validation, it is arguable that email validation is itself a perfectly simple and valid form of authentication; it just takes a bit more effort on the part of the user to log in, particularly if they can't easily access email on the device they are trying to log in on, though even then you can offer a short code they could enter instead.)
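For the passkey route, the browser-side call is roughly this (a sketch: in a real flow the challenge comes from your server and the returned assertion is verified there):

```typescript
// Browser-side passkey login sketch (WebAuthn). rpId and challenge handling
// are placeholders; signature verification happens on the server.
async function loginWithPasskey() {
  const assertion = await navigator.credentials.get({
    publicKey: {
      // In a real flow the challenge is generated and remembered server-side.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rpId: "example.com",
      userVerification: "preferred",
    },
  });
  // POST `assertion` to the server, which checks it against the public key
  // stored when the passkey was registered.
  return assertion;
}
```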
Frankly, the conclusion about "how to log in" that I draw today is that you will inevitably end up having to support multiple forms of login: in apps, in browsers, and by email. You will end up needing more than one approach as a convenience to the end user, depending on the device they are trying to sign in on and their context (how necessary it is that they sign in manually versus using a magic link, a secret, a QR code, or just a link in their email).
I should also note that I haven't discussed much about security standards here in detail. Probably because I'm trying to highlight that login is primarily a UX concern, and security is intertwined but can also be considered an implementation detail. The most secure system is probably hard to access, so UX can sometimes be a tradeoff between security and ease-of-access to a system. It's up to your implementation how secure you might need to be.
In some cases, you can use a web-based VPN or an authenticating proxy in front of your app and just trust the header that comes along. Or you could put your app behind Tailscale or another VPN that requires authentication and never show a login at all. It's all up to what requirements the app has and the context of the user/device accessing it.
It's probably going to be vendor-specific or you will implement your own auth. At ZITADEL we decided to offer all the standards like OIDC and SAML, and to offer a session API for more flexible auth scenarios. You will also be able to mix them.
My personal gripe with OAuth is that the simple case can be implemented with like 2 redirects and 2 curls, but docs are often incomprehensibly complicated by dozens of layers of abstractions, services, extensions, enterprise account management and such.
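To be fair, that simple case really is small. A sketch of the whole authorization code flow (the endpoints, client id/secret and redirect URI are placeholders):

```typescript
// 1. Redirect the user to the provider's authorize endpoint.
const authorizeUrl =
  "https://provider.example.com/authorize" +
  "?response_type=code&client_id=CLIENT_ID" +
  "&redirect_uri=" + encodeURIComponent("https://app.example.com/callback") +
  "&scope=openid";
// (in the browser: window.location.href = authorizeUrl)

// 2. The provider redirects back to /callback?code=...; exchange the code for tokens.
async function exchangeCode(code: string) {
  const res = await fetch("https://provider.example.com/token", {
    method: "POST",
    headers: { "content-type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      redirect_uri: "https://app.example.com/callback",
      client_id: "CLIENT_ID",
      client_secret: "CLIENT_SECRET",
    }),
  });
  return res.json(); // { access_token, id_token?, refresh_token?, ... }
}
```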
As someone outside of the field I don't follow your argument. The author stated that the problem is not just the analog nature of a quantum computer, but the impossibly large number of analog parameters to process, and also to correct errors in.
Interesting observation about the maximum lifetime of a bubble:
> I believe that, appearances to the contrary, the quantum computing fervor is nearing its end. That's because a few decades is the maximum lifetime of any big bubble in technology or science. After a certain period, too many unfulfilled promises have been made, and anyone who has been following the topic starts to get annoyed by further announcements of impending breakthroughs. What's more, by that time all the tenured faculty positions in the field are already occupied. The proponents have grown older and less zealous, while the younger generation seeks something completely new and more likely to succeed.
I find your reaction to be a positive sign - a sign that IEEE is willing to push the envelope of knowledge (as of 2018). One can't push it without taking the risk of publishing something wrong, possibly very wrong.
I don't have any opinion on this topic, it's not my field of expertise. However, it's the first semi-technical article about quantum computing that I can follow along and understand.
> This leaves ambiguous how to refer to decades like 1800-1809. For these you should ~~specify the wildcard digits as “the 180*s” manually~~ write out the range. Please do not write “the 181st decade”.
I think it wouldn't be wrong to say "the 1800s decade" or "the primus 2000s" or "the alpha 1900s".
I don't find it persuasive. First of all, people are not going to start writing and saying wildcards. Second, there is an established convention to say "2000s" and mean 2000-2009.