Pioneers of web cryptography on the future of authentication (ieee.org)
72 points by jeremiahlee on June 8, 2020 | 51 comments



When I founded Gliph, which ultimately became focused on privacy, secure messaging and bitcoin, I was actually after identity. I even tried pitching it as an "identity management platform."

Identity felt like a hot idea back then, around 2011: this concept of owning your own. There was another company, Personal.com, that raised a lot of money working on it.

I was able to meet with Larry Drebes, who was involved early with OpenID and at the time was growing Janrain, which was helping big companies handle user federation.

That year, Chris Poole of 4Chan controversially said Facebook and Google had it wrong: "Google and Facebook would have you believe that you are a mirror, that there's one reflection that you have; one idea of self," Poole added. "What you see in that mirror is what everybody else sees. But in fact, we're more like diamonds: you can look at people from any angle and see something totally different." [0]

Anyhow, a decade has passed and it feels like very little has happened with identity, the ability to control identity away from centralized structures.

My advice to any of them would be that "identity" is not a product. Authentication and federation to specific systems, useful features (e.g. sharing photos of grandkids): these can be made into products.

[0] https://mashable.com/2011/10/18/chris-poole-4chan-web-2/


What are your thoughts on https://3box.io ? It's built on Ethereum, but sounds like it might be similar to the vision for Gliph.


From my personal viewpoint, I think this is a cool idea and I like the ethos that it seems to come from.

However, anything that does not delight a user is a mistake to build for a startup.

"Reducing the risk" and "building trust" with users means nothing to users when there is no useful product. So instead of integrating this and trying to use this choice to market your product, you should be building a better product and marketing that.

Because this won't sell any product.

Only a fraction of a fraction of people care where data is stored.

So if you have VC capital, you should not be spending your time integrating stuff like this; you should be building something that people want to use.

Apple is able to play the long game on security and privacy because they have the cash and market position to do so. It is also the right thing to do, but again they are in a special position to invest in these areas.

See also "choose boring technology" https://news.ycombinator.com/item?id=23444594

Also, this is perhaps not directly related but more context for my parent comment, Larry Drebes' email in 2013 about shutting down Janrain's openID service: https://news.ycombinator.com/item?id=6328182


You are demonstrably right, circa 2011. The example I like is Windows CardSpace[1], which most people reinventing this have never even heard of, which is typical Silicon Valley. CardSpace had its own problems, but they would have been surmountable had the market been there.

> That year, Chris Poole of 4Chan controversially said Facebook and Google had it wrong: "Google and Facebook would have you believe that you are a mirror, that there's one reflection that you have; one idea of self,"

I don't know about that. I mean, in that year, Jericho Forum was past the formative stage and was already pushing 'personas'. Google's BeyondCorp was based on Jericho Forum principles, so while it doesn't take personas into account, it does have a first-class concept of the trust tier. In my mind, these are related concepts.

But that's internal enterprise-facing.

Externally, consumer-facing as well, Chris Poole was wrong; Google actually did embrace the faceted-diamond idea of identity. G+ was launched six months before Poole made that statement. G+'s primary feature was Circles, which embodied a similar persona/diamond/faceted idea of identity. Not from the authentication POV, of course, but definitely from the identity POV. We all know how well G+ did. Not all of it, but a large part of the failure was that people are only good at being a mirror. Circles was a disaster.

> Anyhow, a decade has passed and it feels like very little has happened with identity, the ability to control identity away from centralized structures.

Are you referring to authentication, or identity? Managing multiple identity facets is too burdensome and provides too little utility to most people, and there are exposure risks. The folks that are able and want to manage multiple personas can easily manage it via multiple accounts, can they not? I'd be surprised if people on 4chan use the same username that they do on Facebook, for example. They want to keep those completely 100% separate (from observable activity), and are able to do so just fine, thank you very much, within existing centralized structures. I am not sure how a decentralized structure would present an advantage, much less an advantage that matters.

A critically important part of any market is timing. General Magic failed, but it wasn't because their product sucked (which it did); it's because the market wasn't ready. But today, we see their legacy everywhere.

So the question is, is the market ready now in 2020? I personally don't think so, but I don't think looking back at 2011 bolsters the argument. The world is very different today. For one thing, you can raise money on a blockchain-based product, regardless of its usefulness. ;) Let's not forget, the point of VC is not to have a successful product, it's to have a successful exit. Your product advice may not be relevant.


Worked on this precise root of trust problem some years ago. The basic issue is that fragmentation in the device marketplace means you can't have an endogenous hardware root of trust on each device. Apple can do this because they own the whole device stack. Providing this outside that closed ecosystem will have hard limits for the foreseeable future. That said, the only tech I am aware of that could conceivably change the balance of this device fragmentation equation is FHE that is sophisticated enough to perform nested standard encryption operations for authentication.

Otherwise, your options are a separate yubikey-like token that you buy and "personalize," with new keys and linking identity attributes to it, or some weak variation of obfuscated code to store and process your root of trust key.

Business-wise, the threat model has a catastrophic failure mode, where a compromise likely exposes all users simultaneously, which becomes both worse and more likely as you scale. Then there are the use cases for strong identity proofing and authentication, which all reduce to some powerful party wanting someone else to adopt sufficiently strong identity to use as recourse against them. Strong identity is for institutions to manage populations interchangeably, and nobody signs up for that; it's regulated down onto them.

The conceptual test I use is, if it interferes with your ability to consume drugs or pornography, it's not a viable consumer product.

Strong identity proofing without commensurately strong anonymity fails that heuristic. Self-sovereign identity is only viable if it is either anonymous and disposable, or enforced downward by a de facto monopoly.

When I looked at the related startup company's site, I thought, "clearly the VC investment triad decision was based on 'team+market', with 'product' to be figured out later." I'm definitely not smarter than anyone involved there, but the product hurdles in front of them are expensive, political, and complex.


> Otherwise, your options are a separate yubikey-like token that you buy and "personalize"

These "burner phone"-keys are perhaps preferable to many people.

> you can't have an endogenous hardware root of trust on each device

You mean practically transferable? Sadly, a lot of security applications currently tie their functionality to specific phone hardware. I think the whole approach of declaring your device compromised just doesn't fit practical applications, but it has become a pretty common axiom, although I understand the pessimism if we look at mobile operating systems. Why would I trust the computing hardware but not the software, at a time when we change both frequently? That doesn't really add up for me. And beyond that, I don't like being treated as a terrorist by my own machine.

I fully agree that an identity should be at least disposable and certainly as anonymous as it can be. These are the core requirements I would set for a solution.


I may be very dumb, but are you suggesting that fully homomorphic encryption (FHE) could perform client authentication in the cloud?

Is it possible that I FHE-wrap my own private key, send it to you, and you can sign with my key without ever seeing the plaintext? That sounds crazy. Am I missing something?


The reason I suggested FHE might change this is that a few of the solutions people have tried so far fall into the so-called "white-box cryptography" bucket of key-based obfuscators on your mobile/endpoint device. Depending on the authentication scheme or security protocol, you need a secure place to store your private key, a shared or derived secret, key encryption keys, or a certificate, etc. We have to assume that the OS of that client device is going to get rooted at some point, and then the security of those keys is gone - and with it, the identity assurance they provided. (Again, the secure element and diversified initialization keys on Apple devices are different; think of it as a little client-side HSM.)

Instead of using WBC, in this hypothetical case FHE becomes the scheme for performing a verification step without yielding information about the key to an attacker with root on the device. It's another handwavey black box, but it's such a change in how we do security protocols that it's worth mentioning as plausibly having consequences for authN. It's a hard problem that gets addressed in areas like authenticated encryption, direct anonymous attestation, and other more use case specific protocols. Someone working in the field today would have more insight into it.


Hmm, I was asking because a few days back the IBM FHE release prompted me to wonder what the use cases for FHE were. This seems, tantalisingly, to be the most interesting one so far.


If FHE has a practical internal XOR operation, the idea of iterating a key and a diversification component/counter, e.g. an FHE implementation of HOTP or OCRA, becomes really interesting. Even if it's slow, you could probably stack the slow operations into the initialization/personalization phases.

Security and authN protocols are really just about using crypto to diffuse and distribute risk, so it's conceivable there are use cases where FHE facilitates shifting risk for key security onto the end user, like payment cards, etc.
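For reference, here is a minimal sketch of plain (non-FHE) HOTP per RFC 4226, using only Python's standard library; a homomorphic variant would have to evaluate this HMAC-and-truncate computation over an encrypted key, which is exactly the hard part:

    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # RFC 4226: HMAC-SHA1 over an 8-byte big-endian counter, then dynamic truncation.
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # RFC 4226 Appendix D test vector: counter 0 -> "755224"
    print(hotp(b"12345678901234567890", 0))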


So, just checking in case I am being dumb: HOTP (like TOTP) works where both parties have a shared secret key (common in corporate remote logins).

The value here being that I can make my own secret key and, via FHE, send the server a "cloaked" key from which they will get the right answer without knowing my secret.

So apart from the initial key sharing, it's down to me to keep the key safe (as you say, pushing risk onto the consumer, like debit cards).

Feels like that's where FIDO goes next.


They give little regard to the actual history of "secure enclave" devices in encryption. Smart cards with private keys generated on-device have been around for decades. This is either the present or the near-past of authentication, not the future.


>>> What is authentication going to look like in 10 years?

>>> Jermoluk: Full sovereign identity. You should be in charge of your own identity.

Yes. I mean this seems so obvious but it is refreshing to see someone, anyone, say it.

Even I have a plan for a Gravatar-like service for client certificates, so it's not like this is hard stuff.


As internet users we are provided with a folder or a file full of "certificates" by some item of software, e.g., an operating system, a web browser, etc. In some cases, we have the choice of removing certain certificates if we do not trust them or adding certificates that we choose to trust. People online like to make assertions and argue about a so-called "chain of trust". What if that "chain of trust" began with the user, not a third party? In other words, the user created her own certificate (the root), then selected and signed the "trusted" ones provided to her that she deemed trustworthy. The software would not be able to bypass the user's trust. Just because an "approved" third party signed a certificate does not in and of itself constitute trustworthiness in the eyes of the user.
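As a rough illustration of the user-as-root idea, here is a minimal sketch, assuming Python's cryptography package (names and validity period are arbitrary), of a user minting her own self-signed root, the anchor from which she could cross-sign only the third-party certificates she actually deems trustworthy:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # The user's own key pair: the root of *her* chain of trust.
    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "My Personal Root")])
    now = datetime.datetime.utcnow()

    root_cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )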


I wish you luck explaining this to the average user.


Basically, that's called the web of trust, and it has been implemented in GPG for decades. It seems that even geeks don't really use it.


The Web of Trust approach can only scale by treating trust as transitive.

If your trust is confined to a few dozen co-conspirators it all works fine.

But when you try to scale up, it breaks badly, because human trust is not actually transitive.

PGP tries to fix that by asking you two things, in the process further worsening the UX. PGP asks Alice whether she believes this key is Bob's, and then also whether she trusts Bob to decide who other people are for her.

Very quickly Alice will discover that unless she agrees to allow multiple such transitive trust steps the "Web of trust" doesn't do much for her. Hopefully she gives up at this point.

But if she keeps using it, then eventually the other shoe drops. Alice sends a message to "Frank" which she intends to keep secret. But unfortunately it doesn't actually go to Frank, the "Frank" identity was vouched for by Edgar. Alice trusted Bob, who trusted Carol, and Carol trusted Dana, and unfortunately Dana is a poor judge of character and trusted Edgar who isn't very reliable.


> Human trust is not actually transitive.

Is this like saying human trust cannot be 100% delegated?

If so, that seems to be the model of the third party CA system. As a user, I really have little say in whom, e.g., the web browser should trust. I have (unintentionally) delegated trust to third parties. They decide who I should trust. As a user, I am not supposed to care about or understand this process. This really does not sound like "authentication" to me, because I have not authenticated anything. Everything is being handled by third parties.


A conventional PKI ("the third party CA system") separates out this authority. You trust Trent to discern who Bob, Carol, Dana, Edgar and Frank are.

In choosing to engage in conversation with Dana, or even Edgar, you are not obliged to accept their word as to the identity of Frank, that's always Trent's job. And so Edgar's unreliability and Dana's poor character judgement aren't a problem.


What if I already know "Carol"? Should a web browser block me from "conversing with Carol" because I did not get "Trent's" approval first? Seems like I should be the one who decides whether I approve of Carol or not. I am the one taking the risk of "having the conversation".


Maybe they should next interview Daniel J. Bernstein, Jason A. Donenfeld, and Ross Anderson on the recent developments and the future of authenticated encryption.


From this:

    Spectrum: Where do we go next? What is authentication going to look like in 10 years?
    Jermoluk: Full sovereign identity. You should be in charge of your own identity. It is yours after all.
I have a new startup idea "My Identity" (or something like that), which would work like this (a rough sketch of the Approve step follows the list):

- This needs a mobile app, which stores identity information.

- A website shows its QR code for registration.

- The user scans the QR code with the mobile app, checks which information is requested, and taps Approve or Deny.

- When pressing Approve, the identity is sent to the website, which uses it to register the user.

- The user should be able to fully control logins from the app, e.g. show a list of logged-in websites, force logout.
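To make the Approve step concrete, here is a minimal sketch of one possible shape of the exchange, assuming the QR code carries a server-issued challenge, the app holds an Ed25519 key whose public half was previously registered with the site, and Python's cryptography package is available; all names are hypothetical:

    from cryptography.hazmat.primitives.asymmetric import ed25519

    # App side: key generated at install time; its public half was registered with the site.
    app_key = ed25519.Ed25519PrivateKey.generate()
    registered_public_key = app_key.public_key()  # stand-in for the key the site stored

    challenge = b"nonce-from-the-qr-code"  # server-issued, single use
    approved_fields = b"name,email"        # attributes the user agreed to share

    signature = app_key.sign(challenge + b"|" + approved_fields)

    # Site side: verify the approval rather than trusting raw submitted identity data.
    registered_public_key.verify(signature, challenge + b"|" + approved_fields)  # raises if invalid
    print("login approved")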


> the identity is sent to the website

So it's more or less glorified autofill? It should at most give the site a token. Imagine how signing in with Apple, Facebook, or Google works right now but instead of having your identity with those 3, you have it in that app (or somewhere) and it's all your own. Which means you have to replicate on the user side some of the functionality that they offer and package it in a "fit for a user" format. I mean in the end you're holding on to your whole online identity so extra caution is needed.


An app like that exists and it's called Civic Secure Identity. WikiHow allows you to sign up and log in with it.


Not sure an app is the best place to store local data.


Handshake.org is another project that's giving users control of their identity, specifically web identity. It's an experimental new DNS protocol that uses private keys to determine ownership of domains. This lets users truly own their domains instead of renting names from TLD owners as the existing system requires.

When users have completely sovereign identity, they'll also need some way to share it. I could see Handshake being the solution here (Keybase was another viable alternative, but they sold to Zoom...).



I'm bummed that authentication wasn't discussed until the last 2 paragraphs. And then only topically.

> We figured out how to make a personal certificate authority on your own computer,

This isn't entirely true. I can make a self-signed cert, but there's no root of trust except my computer: someone else can make their own self-signed cert claiming to be me.

I recently bought an SMIME cert from Sectigo for $20. There needs to be a personal CA and commensurate tooling, but it is still not widely available for non-military use.

Is there a way for me to get a personal cert that can be authenticated with a CA? Because I've been looking for some time. MozillaZine has a page on SMIME certs, but 80% of the names are crossed off.[1]

> muse of cryptography

Definitely Melpomene. ;)

[1] http://kb.mozillazine.org/Thunderbird_:_FAQs:_Get_an_SMIME_c...


My country (Kazakhstan) issues certificates to its citizens. They are signed by a government CA. So if you want to authenticate a Kazakhstan citizen, you can ask him to sign something and check his certificate. I've heard that many other countries are issuing similar certificates, so it's possible to build a universal service which would work for many countries.


Sounds like a great hammer to hold over any and all citizens. That government can invalidate your cert at any time, and you won't be able to conduct any action which requires it. I bet they aim for it to be required for everything sooner or later; it's just that today it's probably not yet needed to buy milk.


It's not any different from any other government ID.


In the same way a self-signed certificate doesn't need to be signed by an official root of trust to be useful for authentication, a self-signed Personal CA doesn't need to be signed or cross-signed by another CA to be trusted by a server.


I'm a little confused, can you help me understand something?

Assume you created a self-signed personal certificate and you use that to sign your emails.

What if I make a self-signed cert claiming to be you, and create an email address nmelo@gmail.com?

How would someone know which one to trust if there wasn't a third party to verify you're the real nmelo? Websites do this with trusted CA roots in their browsers.

Going back further, businesses do it with services like Dun & Bradstreet.

But personal?

What am I missing?


Absolutely. The important part is that certificates don't necessarily need to encode any personal information to be immediately useful as a factor of authentication. The fact that a person controls the private key associated with the certificate should be enough to allow any given server to trust the certificate, if they have enough confidence that the private key is being securely stored by the user.

Now extend that to a personal certificate authority. As long as the server is able to trust that the root certificate and any intermediate certs in that CA are controlled by the user, it should be able to trust certificates signed by that CA to authenticate that person.
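On the server side, this can be as simple as pinning the user's personal root as the only trusted issuer for client certificates. A minimal sketch with Python's ssl module (file names are hypothetical):

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server_cert.pem", "server_key.pem")

    # Trust only this user's self-signed personal root; no public CA involved.
    ctx.load_verify_locations(cafile="user_personal_root.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED  # client must present a cert chaining to that root

In practice a multi-user server would need to map each pinned root to a specific account, but the trust decision itself involves no third party.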


Thanks for explaining, would you mind answering a follow-up?

> the fact that a person controls the private key associated with the certificate should be enough

Going back to my example of you and I both claiming to be the same person with our certificates, us both having a private key doesn't solve this problem. Who authenticates which of us is the real person? Or is that not the point of certificates?

> if they have enough confidence that the private key is being securely stored by the user

Or... is it that a self-signed cert just proves who owns the private key, and I'm putting too much into what a cert is supposed to be?

> and any intermediate certs in that CA are controlled by the user,

ah, ok, so I can act as my own CA because I have the private key for the root of trust.


... How many things could you think of in technology that 40 plus years later are still being used in essentially exactly the same form as it was created? It’s not some niche thing that nobody has touched. It’s the heart of the entire way the Internet works ...

That about says it all.


The “one identity key management system to rule them all” idea seems like a solution in search of a problem.

Why does every app have to use the same key management system?

What’s the problem with identity, exactly?

I already own my identity on every app I use. I know the password (or my password manager does).

Take HN for example. There’s no significant difference between HN asking for a password on a form, and asking a browser API for a certificate. Except the latter is more complex and prone to error.


There is also the question of multiple accounts on the same service for compartmentalization purposes. Right now you can do that in most places, although it might require multiple phone numbers and might violate TOS. Obviously, this could either provide privacy benefits or potential for abuse depending on service.

It's not clear whether the "one identity" system is meant to prevent or empower that. Looking at BeyondCorp, I'd guess it's meant to prevent it.


Sounds like something related to passwords? https://beyondidentity.com/blog/sorry-about-all-passwords


[Meta] god damn it why is scrolljacking still a thing?

( And if they fail at this, why would I trust them with anything else? )

Shame!


meh. this is an ad (after all, it's spectrum) for a new high profile passwordless company. by focusing on the technology (certs) instead of the user-visible benefit (passwordless), they are doing ... something.

if it were someone ordinary, i would write it off as another poorly conceived and poorly run wannabe startup. but given the people involved, instead i find it puzzling.


Not sure if I'm missing something, but have they interviewed two people who were extremely influential in cryptography and a third guy who... once worked at Bell Labs and is the CEO of a startup company?


I don't think you're missing anything. This is effectively a form of name-dropping, except by proxy. "Look, I was interviewed with two titans. Therefore, I am one of them."

Or put another way, a puff piece.


You're right to identify it as odd.

This looks basically like a paid-for article you'd see in the NY Times Bits section. What's odd about it is that it's on IEEE's site. I don't look at that often, but I suppose I'd hope it wouldn't be sort of shilling startups.


TJ Jermoluk used to be CEO of @Home Networks, President and COO at Silicon Graphics and General Partner at KPCB.


So why is he relevant to "the Future of Authentication"?

I could see Martin Hellman being relevant, maybe. But there's no real substance in this piece, certainly not from Jermoluk. What I got out of it is that PKI is the answer. As I think DNS-based PKI is the answer, I think he's not too far off, but he's probably selling something I don't need or want.


What do you mean by DNS-based PKI? That sounds interesting, but I can't quite visualize what that is.


There are two options (a DANE lookup sketch follows the list):

  - registries and registrars run name-constrained CAs

  - DNSSEC/DANE (RFC 6698 https://tools.ietf.org/html/rfc6698)
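For the DANE option, the certificate association is published in DNS as TLSA records; a minimal lookup sketch, assuming the dnspython package is installed:

    import dns.resolver  # pip install dnspython

    # TLSA record for HTTPS on a DANE-enabled host: usage, selector, matching type, cert data.
    # (example.com is a placeholder; substitute a domain that actually publishes TLSA records.)
    for rdata in dns.resolver.resolve("_443._tcp.example.com", "TLSA"):
        print(rdata)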


I’m guessing using CERT resource records; that, however, doesn't really solve how you establish the chain of trust.


Because that’s what his new startup Beyond Identity does.


Sounds like a puff piece.



