Introducing TAuth: Why OAuth 2.0 is bad for banking APIs and how we're fixing it (teller.io)
311 points by AffableSpatula on May 5, 2016 | 177 comments



1. To my mind, the fundamental problem OAuth solves is "letting a user decide" to share data with an app, without making the user responsible for jumping back and forth between the app and her API provider (her bank, in this case). OAuth holds the user's hand through a series of redirects, and the user doesn't have to copy/paste tokens, or remember where she is in the flow, or know what comes next. Does TAuth have a similar capability? The blog post mentions "User Tokens" in passing, but doesn't define or describe them.

2. OAuth 2.0 is published as an RFC from IETF. It may be a bear to read (and yes, it's a framework rather than a protocol!), but the spec is open, easy to find, and carefully edited (https://tools.ietf.org/html/rfc6749). Is TAuth meant as a specification, or a one-off API design? If it's a specification, has there been an attempt to write it down as such?


On 2: from the OP, "TAuth is available in production today for our existing beta users and we've already begun the work to make it an open standard we hope the industry adopts."


I disagree with your first point. The fundamental problem OAuth solves is secure authentication. OAuth 2.0 provides this at a bare minimum, as it can be broken in any way SSL/TLS can be broken.

The argument the author is making is that this level of security is not sufficient for a bank. I think I agree with this statement.

For general use, OAuth 2 provides a sufficient level of security since the platforms that use it are usually only as secure as TLS, too.


Nope, authorization. Authentication is left as an exercise for the implementer.

Some people use the ability to be authorized to access an account on e.g. Facebook as a stand-in for authentication, but that's a different issue.


Is that not a fair thing to do?

Can we not assume someone with access to a Facebook account is authentically the owner of that Facebook account for all of our intents and purposes?


As it is right now, yes. But imagine a scenario where Facebook might implement a child account where a parent has access to monitor the usage.

Now there are two people with authorisation to access this Facebook account, so the process no longer uniquely authenticates a single individual.

Of course this is a contrived example and I'm sure there are better examples. But this is why OAuth is authorisation and not authentication and why something like OpenID Connect exists on top of OAuth2.


Authentication is to prove who you are - authorization is to just have permission to do something, as a subset of authenticated rights.

You would generally be much less rigorous in the Facebook example when giving someone access to your shared photo albums than to your account settings. Having an OAuth token does not make you signed into Facebook at all, but just says that you have valid rights to do something.


One big problem with OAuth on mobile apps is this scenario. I've seen this in the wild for non security-critical apps. As far as I can tell, it's not a bug so much as it is a problem with the OAuth protocol and webview permissions:

1) MyLittleApp wants OAuth access to BankOfMars

2) MyLittleApp bundles BankOfMars SDK into MyLittleApp

3) MyLittleApp requests oauth access via SDK

4) SDK opens WebView for user to log into BankOfMars

5) MyLittleApp has full control over the DOM presented to the user since the WebView is technically its own.

6) MyLittleApp extracts the user's password from the DOM of the WebView

7) MyLittleApp disappears and... profit?


The flow we (Mondo) are considering for this:

1. MyLittleApp opens web view to log into Mondo

2. User enters something to identify themselves into the web view (eg. email address, phone number)

3. We dispatch a notification to the user's registered device (ie. the Mondo app where the user is logged in – this may be the same device or a different device)

4. User opens the Mondo app and accepts/rejects the authorisation request

5. User returns to MyLittleApp, OAuth flow completes

In this flow, the user is not exposing their login credentials to the app… at worst, the app could extract their email/phone number. It also introduces another factor into the auth flow: the user's registered device.


I have two factor auth configured on my most sensitive logins (Microsoft account, Google account, Lastpass), and it works almost exactly the same. Except I use a standard Authenticator app. No need for proprietary protocols or apps.
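For reference, this is roughly how a standard authenticator app computes those codes (RFC 6238 TOTP). A minimal Python sketch, assuming a base32 shared secret (the one below is just an example):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        # RFC 6238: HMAC-SHA1 over the current 30-second time step, then dynamic truncation
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # same secret as the provider => same 6-digit code

Both sides hold the same secret, so this is still a shared-secret scheme; it just never sends the secret itself over the wire.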


This is unnecessary - if you are assuming that people will have Mondo app already, then the SDK will simply trigger an intent to open the app.

The reason webviews are present in the SDK is because of the possibility that your app is not installed.

You cannot depend on SMS pin without your app, because that can be spoofed. I don't see any option except for manual pin copy paste from the app to a well known website that can be opened in the browser.


The user may be on a desktop, or a different device. We want to support such flows too.

I agree in the "native app on same device" situation, we can bypass all this by bouncing straight to our app though.


You misunderstand. I'm not advocating for an app only flow. I'm suggesting that your own flow is restricted to work well only if the Mondo app is preinstalled ... In which case your flow is useless anyway.

I think you should talk about the case when a customer does not have the Mondo app installed and another app asks for authorization to access it. How will the auth flow work?


SMS spoofing is a problem if the flow depends on the user sending an SMS, and generic impersonation (by the client app) is a problem if users have to do something like click a link in the SMS and enter a password. But if a phone number is already associated with the account, and the only SMS is one from the service to the user containing a randomly generated login code, it should be safe, no? At least, that's the flow everyone and their dog seems to use these days.

Of course, that doesn't work without a phone.
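For illustration, the server side of that pattern is tiny. A minimal sketch (function names are hypothetical):

    import secrets

    def issue_login_code():
        # 6-digit, single-use code sent by SMS to the registered number
        return f"{secrets.randbelow(10**6):06d}"

    def verify_login_code(stored, submitted):
        # constant-time comparison; a real system also checks expiry and attempt count
        return secrets.compare_digest(stored, submitted)

The only secret in transit is a short-lived random code, so spoofing the sender number gains the attacker nothing.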


The problem is hidden in what you wrote.

> and the only SMS is one from the service to the user

There is zero possibility of a customer actually recognizing a phone number... or even a shortcode. Assume that whatever SMS arrives on their phone is going to be trusted by users. However, the one thing that people will know how to do is Google their bank's name and go to the corresponding website.


The customer doesn't need to know the number, that's the point. The only sensitive info is what is being sent, so there is absolutely nothing to gain by spoofing a number and sending junk to the user.

It's like me telling you I'm going to email you the passcode to a gate. If Bob overhears this and knows your email and my email, he still isn't in a position to do anything even if he can spoof emails. Best case is he sends you the wrong code and you don't get access until you get the real code from me.


The problem we are attempting to solve is that a third party app is demanding access to your bank account, but you only want to enter your password on the bank page. So the trust that needs to be established is that a page (being shown by the app through a webview) is a genuine bank page and you can go ahead and put in your password there.

Bob is impersonating you in the first place and asking for the password. How do I know whether it is you or him?


Bob sends me a code, I type it in to his phishing site. So what? It doesn't actually result in him gaining anything useful because I still haven't revealed any banking login credentials.


PIN Code Auth is a way to deal with this as well. Similar to how a Roku works on Netflix or something else.


What's to stop the app (or anyone in the context of a rooted device) replacing the legit Mondo auth page with a shim? Nothing.


What does replacing the auth page gain the attacker? At worst the user enters their email address into a phishing page. The important thing is that there isn't a password :)


>What does replacing the auth page gain the attacker?

If the spoof app has a "Connect with Twitter" button (and you don't have the Twitter app installed), and then a webview is opened, the spoof app can replace Twitter's login page with their own, and capture the username and password.


In the proposed implementation above, the _only_ piece of information that a user enters inside the web view is a username. The user must then use the native Mondo app on a previously-authenticated device to complete the OAuth flow. The Mondo app could also require biometric (ie. Touch ID) authentication.

While a malicious application can inject JavaScript to intercept the username, this alone is useless to an attacker.


Well, the malicious application can inject a password-field, and an unsuspecting user might not realize that (s)he is giving an app/attacker the password, and not the correct third party.

User education only goes so far; this type of attack can also make a web view that traditionally asks for a TOTP one-time password code susceptible to leaking a user's password, even if the normal login flow doesn't ask for that password.

[ed: note that it's pretty trivial to eg: set up hidden cameras in voting booths, if you want to spy on a few people, or perhaps have people film themselves in a voting booth - the point is rather that if most people make an effort to follow the common rules wrt. voting booths, the system is reasonably secure. And it's not trivial to make similar claims about a (presumably) centralized on-line system.]


On Mondo, there simply are no passwords at all. Instead, when the user wishes to log into our first-party apps, we send a login link to their registered email.

We'll almost certainly add additional required factors to this process (eg. biometrics), as we see the user logging into the Mondo app on a new device as one of the most critical from a security perspective.


I hope that's not biometrics in a potentially attacker-controlled web-view (if such a thing is possible) - biometrics are difficult to revoke...


As I mentioned earlier, this is predicated on the user having installed a Mondo app. That is not workable.

You must have a flow where this works without your app being installed.


Why? In order to have a Mondo account, the user must have our app installed.


because people will install your app and then uninstall it. However they may still retain the OTHER developer's app that includes your SDK. This is just how customers behave.

If your flow is blocking on Mondo app being installed - that's fine. This means that the surface area of attack is restricted around your app. That's totally OK.

However - that is a very different positioning than OAuth. I would say OAuth will degrade gracefully to your protocol if the endpoint is restricted to another app that must mandatorily be installed on the host device.


Yup. That's every app's problem though. I can also create a webpage and design a fake "bank login" inside of it to make you enter your credentials there. There is nothing you can do about that other than educating the user.


Yes, but a browser's design goal is to allow educated users to distinguish fake and legitimate sites (the URL bar should never be forgeable). With mobile apps, the app can display anything and there is no way for even a well-educated user to recognize forgeries.


The aim is therefore to remove the reliance on it being genuine. If you don't have passwords then there are no passwords to steal.


I think that's Android/iOS responsibility.


The OS can't know the intent of the view for certain, so no.


Ok, so in that case the malicious app is able to steal your password... but if login requires two factor auth, the password is useless without also having access to SMS (or whichever 2fa solution) on the associated device.


This describes the flow for a "trusted client", which is not the correct flow to use for securing this scenario. Instead, the "untrusted client" flow looks like this:

    1) MyLittleApp wants OAuth access to BankOfMars
    2) MyLittleApp bundles BankOfMars SDK into MyLittleApp
    3) MyLittleApp registers itself with BankOfMars, 
       which then has sole discretion over whether 
       to allow it to access data hosted by BankOfMars.
    5) If approved by BankOfMars, MyLittleApp can now 
       request oauth access to a user's data 
       with the BankOfMars SDK.
    6) SDK opens WebView for user to approve MyLittleApp's
       request to access user's data hosted by BankOfMars.
       User may reject the application's request.
    7) If the user approves the application's request, 
       the user is then prompted for authentication. 
       This can be in the form of a username/password, 
       but may also include two-factor authentication 
       or whatever BankOfMars deems necessary for security.
    8) Should BankOfMars or the user choose to do so, 
       either can revoke the right of MyLittleApp 
       to access BankOfMars data.
Now, with this said, this is actually one of Eran Hammer's criticisms of OAuth2: It's hard to get all of these pieces just right! Good security should be easier.


Chatmasta's (OP) flow is not incorrect, he just didn't describe it all. It is perfectly valid. The issue comes when the SDK opens the auth server's authorize endpoint in a WebView. In all cases, if the user is not authenticated with the auth server, he will need to log in. Technically this should be done via a browser redirect on the same page, not a WebView. So instead of the SDK opening a WebView, it should redirect the full browser window to the auth server's authorize endpoint, which will prompt the user to authenticate if he doesn't already have a session.

This is a problem with mobile apps unfortunately, since this type of browser interaction is going to be all over the place. For web apps it works just fine.


On #7, don't let the potentially untrusted app host the view in which the password is entered.


OAuth is, like much of authentication and authorization that has been well marketed, very technically flawed. It makes a big show of not trusting the recipient application with credentials, but anyone who's actually interested in stealing creds still can.

Plus it's way more complicated and has way more failure scenarios than simple password auth.

The only saving grace that I saw was that a service no longer has to store users' passwords to other systems, for persistent interaction with their data. I think this is really why people bother using it.


It makes a big show of not trusting the recipient application with credentials, but anyone who's actually interested in stealing creds still can.

How so, if you're in a browser and you check the URL? It's only flawed here because the app controls the browser itself, but that wasn't the original use case of OAuth.


Username/Password is still the biggest security hole. With or without OAuth.

One way to circumvent that would be to enforce password change after any oauth authorization, but that's not very user friendly.


Is #6 and #7 something you've seen happen, or conjecture about what might happen if some nefarious actor manages to develop a native app that requires banking integration, gets people to download it, then gets people to plug in their banking info?


I've seen DOM manipulation in the wild for other purposes (e.g. clickjacking for sending invites to friends). Beyond that, mostly speculation, yes. But there's nothing preventing the DOM hijacking from grabbing the password right out of the <input> field. (Unless there's some HTML5 permissions on those fields that I'm unaware of, certainly possible).


The other thing here IMO is if I'm a bank, and I have a 3rd party app making OAuth requests... I'll want to verify code before mass release. Or at least limit the amount of bearer tokens issued without approval (Nest does this for example).

Someone should internally want to review code, especially for a bank.


As a bank, if you provide an SDK for 3rd party developers to use, you are not in a position to review the app before release. Only Apple/Google gets to see that code.

The proper solution would be either 1) the ability to register 3rd party libraries with apple and require some kind of integrity check before approval (but even then, the 3rd party app could override library methods at runtime), or 2) code signing the binary blob library separately for every 3rd party developer (but then the problem is enforcement of where developers get the library from -- how do you verify SDK integrity from the bank server side?)

The fundamental problem is that, as soon as you give 3rd party developers the ability to natively integrate with your service via an SDK in their own app, you are playing a cat and mouse game.


It depends if that SDK has an API key barrier or not. I mean if I'm providing 3rd party access, I want to see how each individual app is performing and what it is doing.

But yes, aside from that potential difference, I understand what you're saying.


The presence of an API key is not sufficient to verify the client loaded your SDK with the same checksum as the one you released. The client can modify the code of the SDK to perform any arbitrary logic, including bypassing integrity checks.


2FA helps here.

Also, it would help greatly to be able to generate multiple auth users on your bank account (e.g. a read-only identity for giving access to MyLittleApp). I have seen this occasionally on banking portals, but it's very rare.


Wouldn't this be a reason to promote the use of (trusted) web browsers and web applications instead of native apps or third-party APIs for high security scenarios? While TLS verification is not flawless, it is something that users are being constantly reminded of by their banks and governments (check the domain name, check the green lock next to it), and modern web browsers go to great lengths to improve the user experience for keeping an eye on the validity of a website.

When I pay for something at a webshop via my bank account using a common standard created for that purpose (IDEAL in the Netherlands, other countries have similar systems) I get forwarded to my bank's authentication service to authorize that payment. I can clearly see that the TLS certificate belongs to my bank, and my browser is content that it is valid.


This is why the OAuth Identity Providers that take security most seriously do not allow WebView login (or at least provide an SDK where you don't need it).


How exactly do you prevent a user agent WebView in an app from spoofing iOS Safari?


It's very hard to do but you can at least make your SDK do a fast app switch to Safari rather than use a WebView. Of course evil apps can find a way around it like you said but at least you can make your SDK and documentation point the right way.

I am not experienced with iOS but I also suspect there are more advanced WebView detection tricks. It also doesn't help that Apple really doesn't like fast app switching.


you email the authn link to the user rather than use openurl


SafariViewController on iOS and CustomTabsClient on Android solve this problem.


How exactly does one do this in iOS? There is a reason that Apple only allows Safari to power WebViews. How exactly would you extract the user's password from the DOM of the WebView on a non-jailbroken device using ANY APPLE APIs if the page was legitimately loaded from https://yourdomain.com ??


If the web view is inside your app, you can inject JS easily.

http://www.priyaontech.com/2014/12/native-%E2%80%93-js-bridg...


or just use custom protocol and sniff it all.


yeah, we need some kind of email version of oauth - where the id provider emails a link for the user to click on to authn and approve the auth code to go to the client app. it's a bit weaker UX, but your described scenario is a massive gaping hole.

mmm, thinking about it, might be compatible with the current spec.


In my opinion, having worked extensively with OAuth2 (mostly in the form of OIDC) and other modern AuthN/Z protocols, the author of this post does not truly understand OAuth 2, nor have they looked in any appropriate depth into supplements like OIDC or alternatives.

For one, bearer token [1] is only one type of "Access Token" described by the OAuth2 spec [2]. In fact, the OAuth2 spec is very vague on quite a few implementation details (such as how to obtain user info, how to validate an Access Token), which the author seems to just assume are part of the spec, as he does with bearer tokens. Other parts, like the client/user distinction, and the recommendation for separate validation of clients, the author ignores completely, generating his own (ironically mostly OAuth2-compliant [3]) spec.

> Shared secrets mean no non-repudiation.

Again, not true. Diffie-Hellman provides a great way to come to a shared secret that you can be cryptographically sure (the adversary's advantage is negligible) is shared between you and a single verifiable keyholder.
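For example, a minimal sketch of deriving such a shared secret with the cryptography package's X25519 primitives (purely illustrative, not any particular bank's scheme):

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Each party generates its own key pair and exchanges only the public halves.
    client_priv = X25519PrivateKey.generate()
    server_priv = X25519PrivateKey.generate()

    client_shared = client_priv.exchange(server_priv.public_key())
    server_shared = server_priv.exchange(client_priv.public_key())
    assert client_shared == server_shared  # both sides derive the same 32-byte secret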

> Most importantly using JWT tokens make it basically impossible for you to experiment with an API using cURL.

sigh. If only there was a way to write one orthogonal program that can speak HTTP, and in a single cli command send that program's output to another program that can understand the output. Maybe we could call it a pipe. And use this symbol: |. If only.

> OAuth 2.0 is simply a security car crash from a bank's perspective. They have no way to prove that an API transaction is bona fide, exposing them to unlimited liability.

TL;DR: This article, led by comments like this ("unlimited", really?), strikes me as pure marketing (aimed at a naive audience) for a "spec" that probably would not exist had proper due diligence into alternatives, or perhaps some public discussion, occurred. At the very least, inconsistencies (a few of which I've mentioned above) could have been avoided.

[1] https://tools.ietf.org/html/rfc6750 [2] https://tools.ietf.org/html/rfc6749 [3] https://tools.ietf.org/html/rfc6749#section-2.3.2


As a developer currently working with OpenID Connect (OIDC) and JSON Web Token (JWT), using curl is indeed not a problem at all:

    curl -i ...                                  # Perform authentication to obtain JWT
    export JWT="eY..."                           # Place JWT in a shell variable
    curl -i -H "Authorization: Bearer $JWT" ...  # Call your API
That's all there is to it.


Exactly, perhaps even simpler:

   # Obtain a JWT, then pipe it straight into the API call
   curl ... | xargs -I TOKEN curl -H "Authorization: Bearer TOKEN" ...


Author here:

> For one, bearer token [1] is only one type of "Access Token" described by the OAuth2 spec [2]. In fact, the OAuth2 spec is very vague on quite a few implementation details (such as how to obtain user info, how to validate an Access Token), which the author seems to just assume are part of the spec, as he does with bearer token. Other parts, like the client/user distinction, and the recommendation for separate validation of clients, the author ignores completely, generating his own (ironically mostly OAuth2-compliant [3]) spec.

Last time I checked other access token types were still drafts and bearer tokens were the only stable kind.

> Again, not true. Diffie-Hellman provides a great way to come to a shared secret that you can be cryptographically sure (the adversary's advantage is negligible) is shared between you and a single verifiable keyholder.

I as a bank cannot attribute liability for an erroneous transaction to a developer if we both share the secret with which a signature is computed. If I as a bank am compromised and want to cover my arse by moving the blame to a poor external developer, I can do that with a shared secret by forging signatures after the fact. This is precisely why I don't want shared secrets.
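To make the distinction concrete, here is a generic sketch (not the TAuth mechanism itself): either holder of an HMAC shared secret can produce a valid tag, whereas an asymmetric signature can only come from the private-key holder, which is what lets you pin a request on one party.

    import hashlib, hmac
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    message = b"POST /transactions amount=100 payee=acme"  # illustrative request

    # Shared secret: developer AND bank can both compute this tag,
    # so it cannot prove after the fact which of them produced it.
    tag = hmac.new(b"shared-secret", message, hashlib.sha256).hexdigest()

    # Asymmetric signature: only the developer's private key can sign,
    # anyone holding the public key can verify.
    dev_key = Ed25519PrivateKey.generate()
    signature = dev_key.sign(message)
    dev_key.public_key().verify(signature, message)  # raises InvalidSignature on forgery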

Even if your point is valid re DH, why push that up to the application level and reinvent the wheel when you can get the same benefits, less intrusively by using a battle tested protocol circa 20 years old?

> sigh. If only there was a way to write one orthogonal program that can speak HTTP, and in a single cli command send that program's output to another program that can understand the output. Maybe we could call it a pipe. And use this symbol: |. If only.

This is shit developer experience. Why bother with a Rube Goldberg sequence of piped commands when you can just curl?

Finally, despite everything you say, no OAuth 2.0 based protocol can guarantee privacy. People like privacy when it comes to their finances, I find.

Sorry for any typos, I'm on the move. Thanks for your comments :)


> This is shit developer experience. Why bother with a Rube Goldberg sequence of piped commands when you can just curl?

If you are a developer tasked with working with web security related techniques, I would expect being able to use Bash, curl, and anything else needed to string together a couple of HTTP requests on the command line before your first cup of coffee of the day to be the minimum requirement of your skill set.


Additionally, the kind of developer who uses cURL is not, in my experience, the kind to shy away from using bash pipes or writing a quick bash alias or function to streamline that process if they have to do it more than a few times.


My DH comment was a bit aside the point I probably should have made. (Also, apologies for my sarcasm re: bash pipes -- that was unnecessary and probably unproductive).

> Public key cryptography can be used with JWT tokens but they don't solve the problem of how the client will generate key pairs, demonstrate proof of possession of the private key, and enrol the public key with the API.

JWT is not in any way attempting to solve the problem of client identity and authentication. Rather, it addresses the question of federated user identity and how to validate that the identity assertion came from a trusted source (which is where the PKI and assertion signing come in).

Furthermore it is signed with, among other assertions, the audience assertion so that you can cryptographically verify that a token was given with the authorization of a user by a trusted service (your JWT provider, via whatever authN methods it allows) and to a given client. This should give a substantial enough audit trail to enable reasonable proof that an end-user authorized a client (which itself had to authenticate to the provider) to perform an API transaction if it can be proven that the signature was validated and that the token issuer was clear about exactly what the user was giving the client authorization to do.
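As a concrete sketch of that verification, using the PyJWT library (the client id, issuer URL, and function name are illustrative):

    import jwt  # PyJWT

    def validate_id_token(token, issuer_public_key):
        # Verifies the provider's signature and that the token was minted for
        # *this* client (the "aud" claim), rejecting tokens issued to other clients.
        return jwt.decode(
            token,
            issuer_public_key,
            algorithms=["RS256"],
            audience="my-registered-client-id",
            issuer="https://auth.example-bank.com",
        )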

OAuth2, OIDC, and all modern standards I'm aware of also require client validation of some form. From the OAuth2 spec:

    Confidential clients are typically issued (or establish) a set of
    client credentials used for authenticating with the authorization
    server (e.g., password, public/private key pair).
This implementation is unspecified in OAuth2 but could (and in your case probably should) certainly include digitally signing each API request (much like twitter and amazon require) with a private key and validating the signature against the client's registered public keys as well as the constraints (especially audience, scope, and expiration) given via the token.

If your goal is to provide a non-repudiable audit trail of user identity and authorization and client identity and agency (authorized by the user to perform X) then OIDC, JWT, and client AuthN via request signing with registered keys should be more than sufficient to avoid liability in the case of rogue clients or shady users. As always, the audit trail is the most important piece, along with sound crypto and standard practices that have been audited by appropriate experts, so that your audit evidence cannot be reasonably called into question.


I completely agree with everything you said and know a bit about OIDC too, but since I'm far from mobile/3rd party apps development, there is one thing I don't understand: client credentials (confidential client) in case of mobile app installed from the marketplace. How is it done? You'd need dynamic client registration, right? I know there's a spec for that and I think I understand the mechanics. That would let you identify the client but not sure you can ever identify app developer with it (if needed for audit purposes). Or am I missing something, maybe?


I think for that you'd probably need even more robust client authentication, like allowing clients to give you a CSR for their own CA, which can sign CSRs generated by each app installation and the chain can be traced from individual app, through the developer's CA, back to a trusted internal root certificate.

That lets developers maintain key confidentiality (devs keep their CA private keys) and maintain control over the app installations' access to signed certs (as well as cert lifetime).

Even if it's not a CA, OIDC has some brief words on signing JWT with a registered keypair, which gives a similar, though less robust, ability to keep the private key secret.

No matter what, any of these scenarios still involves figuring out a way to trust the installed app is authorized by the resource owner and the client developer to obtain a signed cert/token (thus shifting real financial liability onto them in OP's scenario). Which probably means requiring the end user to register for your service also, validating the user again rather than the app.

The fundamental fact is that the human mind remains the only truly secret place, which is why passwords aren't going anywhere, and why DRM solutions have to rely on making it illegal to attempt to obtain the decryption key embedded in a device, or making attempted recovery involve physical destruction of the key.


Yeah, I was thinking about the same. But then I saw the OP mention somewhere in the comments below that he's only thinking about the server to server scenario (2LO/client credentials), so it's the comments above discussing fake login UIs that confused me into thinking it was about the 3-legged flow.


I had the same questions, and it's very hard to find the answer - took me a very long time to piece this together but this is how Google does it:

1) You create a "normal" client in Google Developer console (i.e. a web client)

2) You create a native/Android client in the same project. This client is shared across all phones.

3) You add a scope of audience:server:client_id:$NORMAL_CLIENT_ID to auth requests from the mobile.

4) You get back a token minted for the web client, from the native client!

This is how it works:

https://developers.google.com/identity/protocols/CrossClient...

The reason it is safe is because you can only do the cross client stuff from a mobile client, which disallows any redirect urls except for localhost and a couple of other special URIS (see https://developers.google.com/identity/protocols/OAuth2Insta...)

It's ok that the secret is not really secret because it's not possible to use it to make a phishing site, since the redirect URL is localhost.

I guess that doesn't answer your "how does it identify the app developer" question, but it does tell you how these things are deployed at least, and the important fact that there's just one client (not one for every device).


I understand that. The problem is that I can "steal" another dev's app client_id and use it in my app. So it seems impossible to use such a client_id for auditing/evidence. With a web client I cannot do that since I don't own the domain, so I can be proven to be a party in some transaction.


They should allow for push notifications. That'd be more secure

At the end of the day though, everyone has to sign their apps with certs that are pretty well validated. So, it really cuts down on funny business like you mention.


The official recommendations for native apps are here: https://tools.ietf.org/html/draft-ietf-oauth-native-apps-01

They suggest using PKCE (challenge-response) https://tools.ietf.org/html/rfc7636 to authenticate clients that can't be trusted with a client secret.
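For reference, deriving the PKCE pair (S256 method) is only a few lines; a minimal Python sketch:

    import base64, hashlib, secrets

    # RFC 7636: the app keeps code_verifier private, sends only code_challenge with
    # the authorization request, and later proves possession by sending code_verifier
    # with the token request.
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    code_challenge = base64.urlsafe_b64encode(
        hashlib.sha256(code_verifier.encode("ascii")).digest()
    ).rstrip(b"=").decode()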


the problem with this is you're rewriting client ssl, which isn't a very good idea considering how battle tested and hardened it is. how do you know you've dealt with all the edge cases that tls spec writers have been wringing their hands over for like forever. why do a one off impl of client auth inside the oauth protocol? why not use extremely well baked pre-existing infrastructure like webcrypto? it's not like you're magically going to be compatible with everything just because you follow the overly vague guidelines of oauth2. all you get kudos for is reading the spec - but how does that help your users?

security is a very hard problem, especially asymmetric crypto security. rolling your own is generally not advised.

If there is a way to define and use client tls according to the current spec, that would be best. if not, I agree that it's probably a good idea to create a new spec

I agree though that the curlness of the spec is orthogonal to the discussion.


I don't think you read the piece?

- I am not reinventing TLS

- This has nothing to do with OAuth

- It uses WebCrypto for key gen and CSR signing


I was replying to stuart, not to you. He was trying to re-invent tls.


> Finally, despite everything you say no OAuth 2.0 based protocol can guarantee privacy. People like that privacy when it comes to their finances I find.

What makes you say that OAuth 2 cannot guarantee privacy? I think you must have a very different definition of privacy than I'm used to if you can make this claim.


the premise is that mitm attacks are easy.


>This is shit developer experience. Why bother with a Rube Goldberg sequence of piped commands when you can just curl?

For which developers?


The ones I spoke with when doing user research.


IMO, if they consider that too complex to handle, then application security is too complex for them to be working on too. I wouldn't want a developer who doesn't even know how to handle pipes, to go anywhere NEAR apps needing secure implementations.


If we can get an app for banking, I wonder if we can ever get an app for voting. A vote would seem to be less valuable than access to an enormous sum of money, yet one is available instantly and the other requires standing in line.


Nope. If you control the votes, you control the money, the laws that make it, the military (including intel services, alliances & secret police), plus the rest. You don't even need to buy a presidency then. Once you control votes, you can privilege escalate to do almost anything. There's probably nothing else we should be as truly concerned about securing properly. It's already been rigged, and it's EXTREMELY probable it was exploited for George W. Bush's election too. Electoral security is an absolute joke and a travesty.

https://en.wikipedia.org/wiki/United_States_presidential_ele...

https://en.wikipedia.org/wiki/Premier_Election_Solutions#Con...


The difficulty is not secure voting, the difficulty is a secure, secret vote. So that the person that cast the vote can't prove which way they voted ("Hey, see, I voted for you, now pay me the 10.000 USD you promised me"), and so that others can't prove which way any given person voted (or indeed, prove that the vote wasn't blank) ("Put him on a black-list, he voted for $wrong candidate!").

As far as I know, secure on-line voting is still an open research question (and that's just the theoretical bit, never mind building a real, concrete system).


Some features that I think a system like this should have:

1. The client (or the device holding the authentication token, or the app, etc) should be able to maintain (on its own storage!) an audit log of all transactions it has authorized, that log should be cryptographically verifiable to be append-only (think blockchain but without all the Bitcoin connotations), and the server should store audit log hashes and verify that they were only ever appended to. And the server should send a non-repudiable confirmation of this back to the client.

Why? If someone compromises the bank or the bank-issued credentials (it seems quite likely that, in at least one implementation, the bank will know the client private keys), the client should be able to give strong evidence that they did not initiate a given transaction by showing (a) their audit log that does not contain that transaction and (b) the server's signature on that audit log.

2. Direct support for non-repudiable signatures on the transactions themselves. Unless I'm misunderstanding what the client certs are doing in this protocol, TAuth seems to give non-repudiation on the session setup but not on the transactions themselves. Did I read it wrong?

3. An obvious place where an HSM fits in.

How does TAuth stack up here?

Also, there's a very strange statement on the website:

> to unimpeachably attribute a request to a given developer. In cryptography this is known as non-repudiation.

Is that actually correct as written or did you mean "to a given user"?


> an audit log of all transactions it has authorized, that log should be cryptographically verifiable to be append-only (think blockchain but without all the Bitcoin connotations)

A blockchain related technology is overkill, you just need forward integrity: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.111...
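For illustration, a minimal hash-chained, append-only log sketch; note this is a simplification, not the forward-integrity construction in the linked paper (which also evolves the authentication key so old entries can't be re-MACed after a compromise):

    import hashlib, json

    def append_entry(log, entry):
        # Each record commits to the hash of the previous one, so rewriting or
        # deleting history changes every subsequent hash.
        prev_hash = log[-1]["hash"] if log else "0" * 64
        record = {"entry": entry, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            (prev_hash + json.dumps(entry, sort_keys=True)).encode()
        ).hexdigest()
        log.append(record)
        return record["hash"]  # head hash the server can countersign

    log = []
    append_entry(log, {"tx": "pay", "amount": 5})
    head = append_entry(log, {"tx": "pay", "amount": 12})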


"The URL does not match any resource in our repository."


1. Is an awesome idea. Will it work for the tail-end of the log where most maliciousness will occur? The scenario I see is the hacker grabbing the log, and appending transactions to its copy of the log. When you go to contest those transactions, the bank will have a longer log than what your client has. Don't get me wrong, there are a lot of use-cases for this: you received a confirmation that a transaction was cancelled but the bank said it didn't happen. In this case you have proof through the signed log that in fact it did happen.


I would expect the local device audit log to store the server's signed acknowledgement too. With 2-party asymmetric signing this should be pretty much airtight.


A rollback attack is always possible: the bad guy backs up his/her device, does a transaction, and restores the device. (A replay protected memory block + secure enclave can make this hard, but never impossible, to do.) This means that you can't make an ironclad assertion that the very last transaction the bank sees was fraudulent, because you can't be trusted to make such an assertion.

But you're still protected against transactions alleged to have occurred before your last real transaction or, equivalently, you're guaranteed to (in theory) notice the fraud the next time you try to do a genuine transaction.


> the bank will know the client private keys

No one knows the private key other than the creator, in this case the developer.

> Direct support for non-repudiable signatures on the transactions themselves. Unless I'm misunderstanding what the client certs are doing in this protocol, TAuth seems to give non-repudiation on the session setup but not on the transactions themselves.

There is nothing stopping the API provider enforcing layer 7 signatures too, it's an application concern. The same private key can be used to compute those signatures, or, since X.509 certs can embed arbitrary public keys, you can choose another one for transaction signatures.

> Is that actually correct as written or did you mean "to a given user"?

The first version of TAuth is for Server to server apps. In this case it means developer. I will clarify that in the post. Thanks.


>> Direct support for non-repudiable signatures on the transactions themselves. Unless I'm misunderstanding what the client certs are doing in this protocol, TAuth seems to give non-repudiation on the session setup but not on the transactions themselves.

> There is nothing stopping the API provider enforcing layer 7 signatures too, it's an application concern. The same private key can be used to compute those signatures, or, since X.509 certs can embed arbitrary public keys, you can choose another one for transaction signatures.

So what, exactly, is non-repudiable? If I go to a highly enlightened court wielding a signature, what can I prove to that court? That my app really did connect to your server at the time I allege it did? This seems weak to me.


As much as I love Stevie, teller.io and this demo: Why not both?

OAuth 2 is not "bad" in general, you just need to consider the implications of using it. If you have an API that allows clients to move customers' money or take out loans, you should take additional steps to defend against MITM attacks. For example using client side certificates :)

That said, TAuth looks really good and tidy. Of course the developer may still lose the private key, so in the end you'll always need to additionally monitor API requests for suspicious behaviour.


Hey Jonas! TAuth is simpler than OAuth 2.0 and doesn't suffer the same security issues. So… why use OAuth?


The devil you know I suppose ;)

IIRC we didn't go too far down the client cert route because we're behind CloudFlare and we like it that way. Something to revisit in the future.


The three-legged flow from OAuth is widely needed. (I would agree with sticking to earlier versions that allow more specific tokens though)


The main complaint about OAuth 2.0 seems to be that bearer tokens are a bad idea. Well, you can implement OAuth 2.0 to use any kind of token you want, with any property you want. People do bearer tokens because it is easy, not because it is required.

The secondary complaint seems to be that OAuth 2.0 is a mess. That one I heartily agree with! A few years ago I wound up having to figure out OAuth 2.0 and wrote http://search.cpan.org/~tilly/LWP-Authen-OAuth2-0.07/lib/LWP... as the explanation that I wish I had to start. In the process I figured out why most of the complexity exists, and whose interests the specification serves.

The key point is this: OAuth 2 makes it easy for large service providers to write many APIs that users can securely authorize third party consumers to use on their behalf. Everything good (and bad!) about the specification comes from this fact.

In other words, it serves the need of service providers like Google and Facebook. API consumers use it because we want to access those APIs. And not because it is a good protocol for us. (It most emphatically is a mess for us!)


> One of the biggest problems with OAuth 2.0 is that it delegates all security concerns to TLS but only the client authenticates the server (via it's SSL certificate), the server does not authenticate the client. This means the server has no way of knowing who is actually sending the request.

That's just plain not true. In the OAuth2 authorization_code grant, a "confidential" client is REQUIRED to send a client_id and client_secret to authenticate itself to the server.

https://tools.ietf.org/html/rfc6749#section-4.1.3

> If the client type is confidential or the client was issued client credentials (or assigned other authentication requirements), the client MUST authenticate with the authorization server as described in Section 3.2.1.

Now, this doesn't work for "public" clients like a pure-javascript webapp, but that's a separate question.

Count me as pretty dubious of letting some unknown group try to re-implement bank authentication without fully understanding the specification they're trying to fix.
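For concreteness, the authorization_code token exchange for a confidential client looks roughly like this (endpoint and credentials are illustrative placeholders):

    import requests

    # RFC 6749 section 4.1.3: the confidential client authenticates to the token
    # endpoint, here with HTTP Basic auth (client_id / client_secret).
    resp = requests.post(
        "https://auth.example-bank.com/oauth/token",
        auth=("my-client-id", "my-client-secret"),
        data={
            "grant_type": "authorization_code",
            "code": "AUTH_CODE_FROM_REDIRECT",
            "redirect_uri": "https://myapp.example.com/callback",
        },
    )
    tokens = resp.json()  # access_token, refresh_token, expires_in, ...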


> That's just plain not true. In the OAuth2 authorization_code grant, a "confidential" client is REQUIRED to send a client_id and client_secret to authenticate itself to the server.

All secrets go over the wire, which is protected with TLS. Ultimately the security is delegated to TLS. You're simply wrong here.

> Count me as pretty dubious of letting some unknown group try to re-implement bank authentication without fully understanding the specification they're trying to fix.

Your misunderstanding is also indicative that OAuth 2.0 is too complicated.


You're correct that OAuth2 ultimately delegates all security to TLS--if that concerns you, you're better off using OAuth1a that has its own signing/verification protocol.

The statement in the OP that:

> the server does not authenticate the client. This means the server has no way of knowing who is actually sending the request.

is incorrect as written. [In the case of a confidential client] The server does authenticate the client, and it does know who is making the request.

If you're going to claim that TLS-protected authentication somehow counts as "does not authenticate the client" then I guess you'll agree that Gmail "does not authenticate" my IMAP client when it makes a TLS-secured connection and sends my 'app password' over the wire.


Their description of the MITM attack is entirely dependent upon how the authorization server validates redirects in the implicit and authorization code grant flows. This is tied to how client registration is performed. So, if you want to ensure that the authorization code or access token is only delivered to a redirect URI that is trusted, that should be part of the policy enforced in your infrastructure... More specifically, you can require domain verification and validation as part of the client registration process, and I would expect that at a minimum when dealing with delegated access to financials.

Another alternative to this would be to perform an OOB flow, wherein the redirect URI is actually hosted on the authorization sever itself and the client can scrape the access token from the Location header.
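One way to do the strict validation described above, as a minimal sketch (the registered value is illustrative):

    REGISTERED_REDIRECT_URIS = {
        "https://myapp.example.com/callback",
    }

    def redirect_allowed(requested_uri):
        # Exact string match, not prefix or pattern matching, so an attacker can't
        # append paths or swap subdomains to have the code/token delivered to them.
        return requested_uri in REGISTERED_REDIRECT_URIS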


These are two separate things: MITM and open redirect. A MITM attack is not on the auth code, but on the bearer token.


By design, OAuth2 doesn't allow for open redirects: this is just part of how clients are registered. What I'm getting at is that not strongly validating the registered redirects on a sensitive client can lead to leakage of the access token in the implicit flow and the authorization code grant flow. Once you perform that intercept, the token may be presented by a malicious third party until it expires or is revoked.


This is unnecessary. Many banks can and will enforce 2-factor authentication with their oauth flow, which sufficiently validates the client and would prevent a MITM attack.

Your whole premise rests on the threat that a client browser would not properly validate a server certificate... come on... really?


You do know about phishing, right? There are many ways to get a user (not client) to accept an invalid cert, and some cases where a client will accept an invalid cert.

They want cryptographic proof of client identity. That means somehow the client has to prove they are the real user and not an attacker who intercepted the connection somehow (which, again, is completely possible). Client certs are a way to verify with each message that the user themselves, using their private key, validate what's going on, and that the message they validated came from the real server and not a fake intermediary.

This is different from 2fa because 2fa is authentication of identity that only happens once and does not provide cryptographic proof of identity. TOTP will give you something closer, but it's still a "dumb token" that can be intercepted.

tl;dr

2fa:

  Client request 1: "Gimme $5."
  Bank reply 1:     "Who are you?"
  <man-in-the-middle starts listening>
  Client request 2: "StrawberryNewtonManicDresser"
  Bank reply 2:     "Okay, you can now use session ID 1234 to request more money."
  
  MITM request 1:   "Gimme $100000."
  Bank reply 1:     "Who are you?"
  MITM request 2:   "Session id 1234."
  Bank reply 2:     "Okay, here's your money."
client certs:

  Client request 1: "Gimme $5."
  Bank reply 1:     "Who are you?"
  <man-in-the-middle starts listening>
  Client request 2: <'Gimme my money.' ^ PRIVATE_KEY>
  Bank reply 2:     <verifies CR2 against stored client cert>
  Bank reply 2:     "Okay, you can now use session ID 1234, starting at iteration 2, to request more money."
  
  MITM request 1:   "Gimme $100000."
  Bank reply 1:     "Who are you?"
  MITM request 2:   "Session id 1234, iteration 2."
  Bank reply 2:     <checks MITMR2 against stored client cert, is not valid because iteration 2 wasn't signed with the client private key>
  Bank reply 2:     "You're a faker, get lost."
....at least, I think that's how it works, iirc. The messages are re-signed so a stolen session token doesn't allow replay by an intermediary (the same sort of protection modern TLS has, but for the server's protection, not the client's)

It should be noted that carders, who normally get their bank credentials from malware on a user's device, can already inject commands into active valid sessions started by the user, so verifying the user's identity is completely pointless in this case.


That seems right to me. It's the same approach as SSH key-based auth in a nutshell


2-factor authentication does not protect against that.

The victim does not know they are being MITMd and enters the 2FA code.


The 2FA device from my bank (https://nl.wikipedia.org/wiki/Rabo_Scanner) shows what permission is asked: account login, signing a transaction for amount x, etc.. You might MITM it, but it would be hard to profit, because the only thing feasible seems to deflect some transaction to another account, and it would only work once and raise suspicion quickly thereafter.

The bank could encode the permission (amount, beneficiary, read access, etc with an expiration date) given into an OAuth bearer token, and the app can use the token to do exactly the things that the user consented to.


I'm amazed (in a good way) that European banks have such advanced security and that the general public goes along with it.

If they tried this at a Canadian bank, every non-technical person would immediately switch to a competitor and they'd lose more money than they'd save via fraud prevention.


I visited the homepage (https://www.teller.io/) and got a warning about the SSL cert being invalid. Kind of ironic. :)


https://teller.io seems fine though. Still sloppy.


> I visited the homepage (https://www.teller.io/) and got a warning about the SSL cert being invalid. Kind of ironic. :)

The correct URL is https://teller.io and then you won't get an SSL cert warning. Not everyone uses "www". Nowhere on teller.io do you see a link to www. You put garbage in and got garbage out.


Many, if not most, people input the www subdomain by rote. Unless Teller does not care about that category of people, it should probably fix the issue.


Of course they should. Redirect traffic from www.teller.io to teller.io


Or, correctly, redirect teller.io to www.teller.io. Previous discussion https://news.ycombinator.com/item?id=11004396


This is unlikely to work - developers in general can't cope with managing SSL certificates. They won't know what to do with them or handle them securely.

You need full integrity verification, with a secure store and whitebox crypto keys to make such a scheme secure.


I gathered the target group are developers. Devs should be capable of dealing with this if they want higher security.


Even devs can't cope. Most apps leak credentials severely. You need integrity verification, obfuscation and whitebox crypto to do this sort of thing securely.

All of that is available in the banking world and is often deployed by people like Irdeto (who I work for) and Arxan etc.


Is that why irdeto.com does not use SSL on their site? Because you're not willing to manage SSL certificates?


Wow it doesn't even redirect 443 it just hangs...


This illustrates a question I've been wondering about for a while - while each developer on a project should have a good idea of security best practice, is it worth it for each to be an expert in security? I've always felt that there should be a member (or team, depending on project scale) for each project who is a "security expert" and can guide decisions for security best practice. So the developers can be aware that they need to tie in an API key at some point, and the security expert can guide the best way to implement that.


> developers in general can't cope with managing SSL certificates

I'd say the same but they've done just fine publishing anything to the App Store, which uses certs everywhere. And it was even worse the first few years.


> I'd say the same but they've done just fine publishing anything to the App Store, which uses certs everywhere.

"Just fine" is a relative term here. It's still a shit show managing them—AFAIK XCode is the only realistic option, which makes me want to remove my eyes with forks.


If you can cope with OAuth you can definitely manage TAuth. The cert and private key are just opaque things you pass to any HTTP client.
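For example, with Python's requests (file names and endpoint are illustrative):

    import requests

    # The TLS layer performs the mutual authentication; the application code just
    # points at the cert and key files.
    resp = requests.get(
        "https://api.example-bank.com/accounts",
        cert=("client.crt", "client.key"),
    )
    print(resp.status_code)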


I agree - but as you say OAuth also suffers from MITM weaknesses. I'm just not convinced 'plain' client certs solve that as it's very hard to distribute those securely and manage them. I guess it depends where you see these being used; if used server to server it's not too bad, but if pushed out to mobile devices (as I suspect they will be) they are very likely to leak unless strong app protection is applied.

If you're banking on strong app protection working you really need to be notified of its state on the server, which this won't do; you need to use a securely signed message from the verification/protection libraries on the client.

That can be done by storing this key in a cryptographic whitebox and then tying its use to integrity verification.


This is the first version of TAuth where only server apps are in scope. Work is already underway on the solution for Mobile… Teller will need it soon for upcoming products.


> developers in general can't cope with managing SSL certificates

https://news.ycombinator.com/item?id=11637700


Problem one exists because, apparently, MITM is a problem with TLS because it's possible for bogus certificates to get through? Well... I guess. But then that's a TLS problem. And your entire banking website is served through TLS. So, if it really is an issue, then solving it just for auth is like putting an Abus padlock on a screen door.

Problem two bemoans the bearer token in Oauth 2. Yes, it's not as secure as OAuth 1, but it's also far simpler. But you don't have to use bearer tokens; you are free to use MAC tokens instead. Why reinvent the wheel?


I think my biggest bug here is that, as far as I understand this flow, a certificate that is generated and signed by a third party (Teller in this case) would be expected to be bundled with the application. Isn't it possible to extract the private certificate from the app bundle after the fact? Or am I missing something here...


The premise for adding client certificates is a MITM made possible because careless app developers will disable server certificate validation.

So, how exactly does adding a client certificate solve that problem? If server certificate validation is disabled on the client, the MITM can still accept the client certificate and substitute their own.

The difference is that in this case the attacker will gain access to the API but the client will not, unless they are being actively MITM. If the client tries to access the API outside the MITM their client cert will be rejected as invalid.


Actually no. The certificate must be signed by Teller (or it's rejected) and associated with the same application as the auth token.


Right, so what stops an attacker from getting a client certificate signed by Teller?

I guess I missed something about how the client certificate is being provisioned. I see the video showing a client certificate being downloaded onto a desktop, but that's obviously not the intended UX for actual end-users...?


So I realize now at this stage you are focused on server-to-server only, in which case there's no issue with trying to deploy individual client certs to end-user devices.

Pulling a certificate via the browser is not great assuming we want a highly controlled chain of custody over the private key bytes and that these certs will expire and need to be regularly rotated. But it's not much work to build some command line tool to send a CSR off for signing, that seems reasonable for server-to-server authentication.
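A minimal sketch of what such a tool could do with the cryptography package (the subject name is illustrative, and this isn't Teller's actual tooling): generate the key pair locally, build a CSR, and only ever send the CSR off for signing.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    # The private key is generated and kept locally; only the CSR leaves the machine.
    key = ec.generate_private_key(ec.SECP256R1())
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "my-app")]))
        .sign(key, hashes.SHA256())
    )
    print(csr.public_bytes(serialization.Encoding.PEM).decode())  # submit for signing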

I wonder if you'll run into issues with various languages' HTTPS libraries not properly supporting client certificates.

It's nice to think this could all just work with the lower layer taking care of everything, but I also wonder with the shitshow that is TLS if you can even be sure the client cert validation code can really be trusted as much as an application-layer check.


Client certs are still a bit of a pain. There is already an IETF spec in the works, called Token Binding, on how to bind tokens to key pairs that clients maintain, and create on demand.

https://github.com/TokenBinding/Internet-Drafts

http://www.browserauth.net/home

It's already implemented in Chrome.


I thought client certificates were being phased out; didn't Chrome just remove the <keygen> HTML tag?


A PKCS#10 request is built using PKI.js; all crypto is done by the native WebCrypto APIs.


Recently I've heard rumblings that HTTP/2 somehow doesn't support client certificates. Can anyone point me to more information on this issue?

https://news.ycombinator.com/item?id=11556762


You're going to see a lot more of this, especially in the IoT world. PKI is becoming a requirement.


Kind of. Starting from Chrome 49, the <keygen> feature needs to be whitelisted per website. Client certificates are no longer imported automatically, only downloaded (user action is needed to load them into the keystore).


> The EU is forcing all European banks to expose account APIs

I'm so jealous!


Actually, OAuth 1.0 is less secure than OAuth 2.0 because it engages in security theater. It doesn't even require HTTPS, and as a result any man in the middle can eavesdrop on the requests. And if the token is leaked, it's game over.


OAuth 1.0 supports both PLAINTEXT and HMAC-based signature schemes. I assume the article means the HMAC-based signatures (the PLAINTEXT option seems to be less well known). With an HMAC-based signature, the token will not be leaked.

But you're correct that eavesdropping is possible.
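
For illustration, a simplified sketch of the general idea behind HMAC request signing (this is not the full OAuth 1.0 signature base-string algorithm, and the secret and endpoint are hypothetical): the shared secret never travels with the request, only a MAC over the request details does.

    import hmac, hashlib, time

    CLIENT_SECRET = b"shared-secret-known-to-both-sides"  # hypothetical

    def sign_request(method: str, url: str, nonce: str, timestamp: int) -> str:
        # Bind the signature to the request details and a nonce/timestamp.
        base = f"{method.upper()}&{url}&{nonce}&{timestamp}".encode()
        return hmac.new(CLIENT_SECRET, base, hashlib.sha256).hexdigest()

    sig = sign_request("GET", "https://api.example-bank.test/accounts",
                       "abc123", int(time.time()))
    # The server recomputes the same MAC. An eavesdropper sees the signature
    # but cannot forge new requests without the secret (though, as noted,
    # the rest of the request is still visible without TLS).
    print(sig)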


Right, the HMAC based approach is what is recommended over the bearer token approach. But you still leak everything else in the request.

Actually, the same logic applies to cookies. You COULD replace cookies (which are bearer tokens) with signing every request to the server, but then you're just avoiding the REAL solution: HTTPS!

Actually the biggest security theater I have seen on the web is httponly cookies to "mitigate XSS". As if the main thing an attacker will do once they inject JS is to send your cookies somewhere. They can just execute anything in the context of your session while they still have it! So by being security theater, httponly cookies are worse than useless. The right way is to prevent XSS by escaping everything properly.


So, it seems like the main concern here is that a client will not validate the SSL certificate, so the SSL layer is now manually added into JavaScript code using the WebCrypto API to prevent this? I see not validating SSL certificates being a potential problem with something like a REST API, but is it common to disable SSL verification at the browser level, where you would need to use JavaScript to do this?


One of the things about OAuth is that the user needs to check the website URL where he is giving his credentials. Amusingly, many mobile apps seem to forget this important bit. They redirect me to a web UI inside the app itself and expect me to enter my password inside the app. I guess they thought this was a better user experience than handing over control to the browser :/


Two things: 1. Why not just add client-side certificates to an OAuth-based API? 2. Client certificates do not prevent an attacker from pretending to be the server.

Let's say your API server followed the standard OAuth 2.0 protocol but also required client-side certificates. Would that be as secure as TAuth?

If so, then the OAuth 2.0 option has the advantage of being well-supported by existing libraries and well-understood from a security perspective. It's less likely that a previously-unknown issue with OAuth 2.0 will crop up and force everyone to scramble for a fix.

And while client certificates prevent an attacker from forging client requests (i.e. tricking the API server), an attacker can still trick the client. An attacker capable of MITM'ing server-cert-only HTTPS can also trick TAuth clients into sending their banking API requests to the attacker's servers. It can respond to those requests with whatever it wants.

To summon the activation energy to adopt (or switch to) a new, less-popular protocol, I'd expect more security benefits.


> Most importantly using JWT tokens make it basically impossible for you to experiment with an API using cURL. A major impediment to developer experience.

Why can't a developer do exactly what you did in your second video, which is to save the JWT to a variable, and then use it in the request?

Heck, you could create a quick wrapper "jwt_curl"/"jwt_http" or something that automatically pulled in that variable…
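
Something like the suggested helper could be sketched in a few lines of Python rather than a shell wrapper (the env var name, endpoint, and Bearer header scheme are assumptions, not anything documented by Teller):

    import os
    import sys
    import requests

    def jwt_get(url: str) -> requests.Response:
        # Read the previously saved JWT from the environment and attach it.
        token = os.environ["TELLER_JWT"]  # hypothetical env var
        return requests.get(url, headers={"Authorization": f"Bearer {token}"})

    if __name__ == "__main__":
        resp = jwt_get(sys.argv[1])
        print(resp.status_code)
        print(resp.text)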

There are two big things about this scheme that leave me confused: how do you know what the correct certificate for the client is? Do you just send it over HTTPS? But then, one of your opening premises is that we don't get TLS verification correct and are open to MitM, so this seems to contradict that. Or are we hoping that "that one request won't be MitM'd", like in HSTS? (which seems fine)


How does this compare with SimpleFIN? https://bridge.simplefin.org/info/developer

SimpleFIN seems simple and still secure. But maybe I'm missing something?


It's still a bit unclear to me how a client generates their certificate and somehow links it to their bank account. The demo shows a web UI generating it, but would a mobile user have to visit the website to fetch a certificate?


By logging into a 3rd-party site using Google+, for instance, you remain logged-in to Google when you go to any other web site.

And the authenticator clearly does not require this global behavior: if you immediately log out from a Google page, you remain “logged in” at the 3rd-party site that you started from. So why doesn’t it log you out globally? Probably to convenience Google, at the expense of security when you auto-identify yourself to who knows how many other web sites before you realize what happened.

Logging into one page with one set of permissions should mean “LOG INTO THIS PAGE”, not “REVEAL MY SECRETS TO THE INTERNET”.


Let me see if I understand this correctly:

1) Problem: app authors disable TLS (server) cert validation.

2) Solution: give each app author the responsibility of managing and distributing a client side certificate.

Sounds like now you have two problems? In particular, you now have to make sure that every lost/compromised certificate is added to your growing CRL? And you need app developers that demonstrably do not even have the vaguest idea how public key cryptography can be used for authentication to take responsibility for doing this? And there's still no guarantee that they won't disable certificate verification?

Did I miss anything?


At Qbix we developed a much more secure way than OAuth to instantly personalize a person's session, and even connect the user to all their friends, while maintaining the user's privacy and preventing tracking across domains by everyone except those they choose to tell "I am X on Y network". It also restores the social graph automatically on every network and tells you when your friends joined one of your networks.


>The EU is forcing all European banks to expose account APIs with PSD II by end of 2017.

Any reference for this? The text of PSD II is here — http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:320... — but it's too long and it isn't clear to me whether it is actually ratified.


What bothers me about OAuth is the way you're on one website and are then asked with a pop-up to enter your Gmail or Facebook etc. password as a normal part of the flow. Users aren't savvy enough to check the URL or understand what's going on here so getting them used to this flow is asking for phishing by the look of it. Something that forced two factor authentication would be good.


It's pretty strange to see a new authentication protocol (they describe it as an authorization protocol, but they do authentication as well) just as W3C's WebID-TLS is being finalised. Oh, did I mention it uses client X.509 certificates as well? And how does the author imagine that banks would rely on his new protocol to ensure non-repudiation?


> The most realistic threat is the client developer not properly verifying the server certificate, i.e. was it ultimately signed by a trusted certificate authority?

From an attacker's point of view, this sounds like a very tiny ray of hope. It sounds like a cool feature/vulnerability that will probably be going away soon because it is so easy to fix.


The problem with the "was it signed by a trusted authority?" concept is that you generally can't automate the 3rd party since they're not under your control. Also, they typically charge every time you request a new certificate (even if client-only).

The solution to that is to run your own CA but then it won't be 3rd party anymore. It's sort of the catch-22 with SSL/TLS: Either you use a 3rd party or you get to automate things. There doesn't appear to be any middle ground.

Why is there no middle ground? Because if the 3rd party CA is doing their job they're investigating every single request for a new certificate. That means you can't just get a new client-side certificate on demand, instantaneously whenever the need arises.
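
For illustration, the automation you get from running your own CA is roughly this (a minimal sketch using Python's cryptography package, purely hypothetical and not Teller's implementation; ca_cert and ca_key are assumed to be loaded elsewhere):

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    def sign_client_csr(csr: x509.CertificateSigningRequest,
                        ca_cert: x509.Certificate,
                        ca_key) -> x509.Certificate:
        # Issue a short-lived client certificate for the submitted CSR,
        # on demand, with no third party in the loop.
        now = datetime.datetime.utcnow()
        return (
            x509.CertificateBuilder()
            .subject_name(csr.subject)
            .issuer_name(ca_cert.subject)
            .public_key(csr.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=30))  # rotated regularly
            .sign(ca_key, hashes.SHA256())
        )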


"client developer not properly verifying the server certificate" makes it sound possible, but I think I understand the problem now maybe.

The 3rd party CA may have issued a cert to a malicious party that issued another cert to their man in the middle.

You can't be sure unless you are your own CA, but then you aren't a 3rd party anymore.


> Either you use a 3rd party or you get to automate things. There doesn't appear to be any middle ground.

Have you seen Let's Encrypt?

https://letsencrypt.org/


I didn't see anything about renegotiation. If clients present their certificates during the first handshake, it will lead to security concerns: attackers could observe clients' certificates (extract metadata, de-anonymize clients, etc.). And if renegotiation is used, it will drastically reduce the "Bonus DDoS mitigation".


tl;dr: it forces the client to have a certificate so that the server can verify it.

This is kind of a pet peeve. Anyone who ignores or wants to disable server certificate verification has to understand the risk.


How is this better than Hawk and Oz by OAuth's creator, Eran Hammer? TAuth seems to solve fewer problems, as it cannot be used by public clients.


It's kinda crazy that it has taken so long for someone to actually take an initiative and attempt to make the authentication more secure.

I wonder if this is a custom-built solution or if Teller.io is using something like HashiCorp's Vault to do the whole SSL cert dance.

Either way, this looks promising.


> It's kinda crazy that it has taken so long for someone to actually take an initiative and attempt to make the authentication more secure.

Not when you consider we've all been subjected to decades of "don't write your own security!!!"


Author here. This is not a new invention. This is standard TLS, and some newer things like WebCrypto brought together.


Has this been tested against a broad user base? It seems rather involved.


Relying on SMS for bank security has always seemed crazy to me. It's not secure. Didn't the Telegram creator just get hacked by a Russian mobile provider that sent an SMS to itself?


A lot of banks do use proper hardware tokens (TOTP and similar) for all transactions though.

I am under the impression that we are now in a phase where security needs to be stepped up, but in the meantime tokens sent via SMS are considered 'good enough'. There are lots of initiatives for the next step, each providing proper two-factor authentication, but a lot of services are waiting it out because the hardware tokens or smartcards you need for each user cost money, and if you adopt one of the current solutions such as TOTP tokens, users would need such a device for each service they use (again, for banks this is already accepted, at least in the Netherlands).

Ideally, a standard such as FIDO U2F gains ground, so users can safely and conveniently reuse a single hardware token for any service supporting that standard. Who knows, perhaps having your 'internet key' on you can become as commonly accepted as having your house keys on you.

Also, relying on SMS means all these services have a single unique number to identify you with across services. I dislike the privacy implications this entails, and prefer to keep (some of) my online identities neatly quarantined from the rest. FIDO U2F addresses this problem; even if you use the same hardware key for every service you use, they cannot be linked.


> Ideally, a standard such as FIDO U2F gains ground, so users can safely and conveniently reuse a single hardware token for any service supporting that standard. Who knows, perhaps having your 'internet key' on you can become as commonly accepted as having your house keys on you.

Unfortunately, most FIDO U2F services allow SMS as a fallback authentication method, including Google and GitHub. At least GitHub has some strong warnings about it.


That is to be expected at this time though, and for a service like GitHub letting the user choose their level of authentication strength is fine; you are mostly responsible for what data you store there yourself. In the meantime it will help the adoption of this standard to at least have the option available.

If a service is actually guarding private data by definition (like a bank or an insurance agency), then phasing out SMS in favour of FIDO U2F or another true hardware factor is a much more likely scenario.


And they're sending the SMS auth token before the password is validated, which opens them up to either spamming a phone's text messages or denial of service if they (or the carrier) impose rate limiting.

TOTP should always be used before SMS auth, and SMS auth should always be used in addition to an offline secret (separate from a password). It's just too easy to abuse the unencrypted, open-network nature of SMS.
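
For reference, the TOTP mechanism being recommended over SMS is simple enough to sketch. This uses the third-party pyotp package purely as an illustration; nothing here is from the article.

    import pyotp

    # Enrollment: a per-user secret is generated once and shared with the
    # user's authenticator app (usually via a QR code).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # Login: the user submits the 6-digit code currently shown by their app,
    # and the server verifies it against the shared secret. No SMS involved.
    code = totp.now()
    print(totp.verify(code))  # True within the current time window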


Must be British... "authorisation" ?


It uses a UK phone number and supports an EU banking initiative, so I would guess so. Why is that relevant?


Authorisation is spelled authorization, at least in the US.


It's spelled authorisation everywhere else.



Can someone just blacklist every post that is nothing but a link to this comic.


Can someone just blacklist every post that is nothing but a new standard trying to add to the list of crappy pre-existing standards?

Also, you forgot the question mark on the end of your sentence there. Unless you meant a sarcasm mark or an interrobang and the comment parser stripped it?



