That's a really interesting idea, but I worry about what happens when a domain name expires and is re-registered (potentially even maliciously) by someone else.
I think you'd probably need some buy-in from the domain registries and ICANN to make it really solid. Ideally, domains would have something similar to public certificate transparency logs where domain expirations would be recorded. I even think it would be reasonable to log registrant changes (legal registrant, not contact info). In both cases, it wouldn't need to include any identifiable info, just a simple expired/ownership-changed trigger so others would know they need to revalidate related identities.
I don't know if registries would play ball with something like that, but it would be useful and should probably exist anyway. I would even argue that once a domain rolls through grace, redemption, etc. and gets dropped / re-registered, that should invalidate it as an account recovery method everywhere it's in use.
There's a bit of complexity when it comes to the actual validation because of stuff like that. I think you'd need buy-in from at least one large company that could do the actual verification and attest to interested parties via something like OAuth. Think along the lines of "verify your domain by logging in with GitHub", where an organization owner who has validated their domain on GitHub could grant OAuth permission for a relying party to read the verified domain name.
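To make that concrete, here's a rough sketch of the relying-party side, assuming a hypothetical provider that exposes a "verified domains" endpoint after the usual OAuth flow (the URL, scope, and response shape are invented for illustration; this is not a real GitHub API):

```python
import requests  # third-party HTTP client

# Hypothetical endpoint: after the normal OAuth authorization-code flow,
# the relying party holds an access token that is allowed to read the
# organization's verified domains. None of these names are a real API.
ATTESTATION_URL = "https://provider.example/api/verified-domains"

def fetch_verified_domains(access_token: str) -> list[str]:
    """Ask the provider which domains this organization has validated."""
    resp = requests.get(
        ATTESTATION_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Expected (hypothetical) payload: {"verified_domains": ["example.com"]}
    return resp.json().get("verified_domains", [])

def domain_is_attested(access_token: str, domain: str) -> bool:
    return domain.lower() in {d.lower() for d in fetch_verified_domains(access_token)}
```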
You've already talked about Sigstore (which is an excellent technology for this space), so we can consider developers holding keys that are stored in an append-only log. Then it doesn't matter if the domain expires and someone re-registers it, since they don't have the developer's private keys.
Of course there are going to be complexities involving key-rollover and migrating to a different domain, but a sufficiently intelligent Sigstore client could handle the various messages and cryptographic proofs needed to secure that. The hard part is how to issue a new key if you lose the old one, since that probably requires social vouching and a reputation system.
> Then it doesn't matter if the domain expires and someone re-registers it, since they don't have the developer's private keys.
A principal reason to use sigstore is to get out of the business of handling private keys entirely. It turns a key management problem into an identity problem, the latter being much easier to solve at scale.
> Then it doesn't matter if the domain expires and someone re-registers it, since they don't have the developer's private keys.
That's a good point in terms of invalidation, but a new domain registrant could still claim the namespace and start using it.
I think one possible solution to that would be to assume namespaces can have their ownership changed and build something that works with that assumption.
Think along the lines of having 'pypi.org/example.com' be a redirect to an immutable organization; 'pypi.org/abcd1234'. If a new domain owner wants to take over the namespace they won't have access to the existing account and re-validating to take ownership would force them to use a different immutable organization; 'pypi.org/ef567890'.
If you have a package locking system (like NPM), it would lock to the immutable organization and any updates that resolve to a new organization could throw a warning and require explicit approval. If you think of it like an organization lock:
v1:
pypi.org/example.com --> pypi.org/abcd1234
v2:
pypi.org/example.com --> pypi.org/ef567890
If you go from v1 to v2 you know there was an ownership change or, at the very least, an event that you need to investigate.
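A minimal sketch of what that client-side check could look like, assuming a hypothetical index lookup that resolves the vanity namespace to its current immutable organization id (the lock file name and lookup are invented for illustration):

```python
import json
from pathlib import Path

LOCK_FILE = Path("org.lock.json")  # e.g. {"example.com": "abcd1234"}

def check_namespace(namespace: str, resolved_org: str) -> None:
    """Compare the org the index resolves today against the one we locked.

    `resolved_org` is whatever the (hypothetical) index says
    pypi.org/<namespace> currently redirects to.
    """
    lock = json.loads(LOCK_FILE.read_text()) if LOCK_FILE.exists() else {}
    pinned = lock.get(namespace)
    if pinned is None:
        # First use: trust on first sight and pin the immutable org.
        lock[namespace] = resolved_org
        LOCK_FILE.write_text(json.dumps(lock, indent=2))
    elif pinned != resolved_org:
        # The v1 -> v2 transition above: ownership changed, or at least
        # something happened that needs explicit approval.
        raise RuntimeError(
            f"{namespace} now resolves to org {resolved_org!r} but the lock "
            f"pins {pinned!r}; refusing to proceed without approval."
        )

# check_namespace("example.com", "abcd1234")  # matches the pin, fine
# check_namespace("example.com", "ef567890")  # raises: possible takeover
```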
Losing control of a domain would be recoverable because existing artifacts wouldn't be impacted and you could use the immutable organization to publish the change, since that's technically the source of truth for the artifacts. Put another way, the immutable organization has a pointer back to the current domain-validated namespace:
v1:
pypi.org/abcd1234 --> example.com
v2:
pypi.org/abcd1234 --> example.net
If you go from v1 to v2 you know the owner of the artifacts you want has moved from the domain example.com to example.net. The package manager could give a warning about this and let an artifact consumer approve it, but it's less risky than the change above because the owner of 'abcd1234' hasn't changed and you're already trusting them.
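The reverse direction could be handled the same way but with a softer response, since the org you already trust is unchanged; another sketch with invented names:

```python
def check_domain_pointer(org_id: str, pinned_domain: str, current_domain: str) -> None:
    """Warn, rather than fail, when an immutable org points at a new domain.

    The org id is still the thing being trusted, so a domain move is
    survivable; we just surface it so the consumer can acknowledge it.
    """
    if pinned_domain != current_domain:
        print(
            f"notice: org {org_id!r} moved its vanity domain from "
            f"{pinned_domain} to {current_domain}; approve to update the lock."
        )

# check_domain_pointer("abcd1234", "example.com", "example.net")
```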
I think that's a reasonably effective way of solving attacks that rely on registering expired domains to take over a namespace and it also makes it fairly trivial for namespace owners to point artifact consumers to a new domain if needed.
Think of the validated domain as more of a vanity pointer than an actual artifact repository. In fact, thinking about it like that, you don't actually need any cooperation or buy-in from the domain registries.
> The hard part is how to issue a new key if you lose the old one, since that probably requires social vouching and a reputation system.
It's actually really hard because, as you increase the value of a key, I think you decrease the quality of the security practices around handling it. For example, some people will simply drop their keys into OneDrive if there's any inconvenience associated with losing them.
I would really like to have something where I can use a key generated on a tamper proof device like a YubiKey and not have to worry about losing it. Ideally, I could register a new key without any friction.
* Dependencies are managed in a similar way to Go - where hashes of installed packages are stored and compared client-side (a sketch of this check follows below). This means that a hijacker could only serve up the valid versions of packages that I’ve already installed.
* This is still a “centralized” model where a certain level of trust is placed in PyPI - a mode of operation where the “fingerprint” of the TLS key is validated would assist here. However, it comes with a few constraints.
Of course the above still comes with the caveat that you have to trust PyPI. I’m not saying that this is an unreasonable ask. It’s just how it is.
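For what it's worth, the Go-style client-side check is just "record a digest the first time, refuse anything that doesn't match later". A minimal sketch (the lock file format is made up; the hashing itself is standard library):

```python
import hashlib
import json
from pathlib import Path

HASH_LOCK = Path("hashes.lock.json")  # {"pkg-1.0.tar.gz": "sha256:..."}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return "sha256:" + digest.hexdigest()

def verify_artifact(artifact: Path) -> None:
    """go.sum-style check: pin on first sight, hard-fail on any mismatch."""
    lock = json.loads(HASH_LOCK.read_text()) if HASH_LOCK.exists() else {}
    actual = sha256_of(artifact)
    expected = lock.get(artifact.name)
    if expected is None:
        lock[artifact.name] = actual
        HASH_LOCK.write_text(json.dumps(lock, indent=2))
    elif expected != actual:
        raise RuntimeError(f"{artifact.name}: digest {actual} != pinned {expected}")
```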
CT: Certificate Transparency logs record certificate creation and revocation events.
The Google/trillian database that backs Google's CT logs uses Merkle trees, but it stores the records in a centralized data store - meaning there's at least one SPOF (Single Point of Failure) which one party has root on and sole backup privileges for.
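To be fair, the Merkle tree is what lets a client audit an individual record without trusting the operator for that step: given a signed tree head, an inclusion proof can be checked locally. A generic RFC 6962-style check (SHA-256; not tied to trillian's actual API) looks roughly like this:

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # RFC 6962 leaf hashes use the 0x00 domain-separation prefix.
    return hashlib.sha256(b"\x00" + data).digest()

def _node(left: bytes, right: bytes) -> bytes:
    # Interior nodes use the 0x01 prefix.
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(leaf: bytes, index: int, tree_size: int,
                     proof: list[bytes], root: bytes) -> bool:
    """Verify a Merkle audit path per RFC 6962 / RFC 9162."""
    if index >= tree_size:
        return False
    fn, sn = index, tree_size - 1
    r = leaf
    for p in proof:
        if sn == 0:
            return False
        if fn % 2 == 1 or fn == sn:
            r = _node(p, r)
            if fn % 2 == 0:
                while fn % 2 == 0 and fn != 0:
                    fn >>= 1
                    sn >>= 1
        else:
            r = _node(r, p)
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root
```

Of course that only covers inclusion; the SPOF concern above is about who operates, backs up, and can rewrite the underlying store, which the proof alone doesn't address.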
Keybase, for example, stores at least its root keys in a distributed, redundantly backed-up blockchain that nobody has root on; and key creation and revocation events are publicly logged similarly to what are now called "CT logs".
You can link your Keybase identity with your other online identities by posting a cryptographic proof of control, thus adding an edge to a WoT (Web of Trust).
While you can add DNS record types like CERT, OPENPGPKEY, SSHFP, CAA, RRSIG, and NSEC3, DNSSEC and DoH/DoT/DoQ cannot be considered universally deployed across all TLDs. Should (or do) e.g. ACME DNS challenges fail when a TLD doesn't support DNSSEC, or hasn't secured its root nameservers to a sufficient baseline? DNS is not a trustless system.
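As a quick negative signal (not a full validation), you can at least check whether the parent zone publishes a DS record for a given domain. This uses dnspython, which is a real library, but the absence of a DS record only tells you DNSSEC isn't set up for that zone, not why:

```python
import dns.resolver  # pip install dnspython

def has_ds_record(zone: str) -> bool:
    """Does the parent publish a DS record for this zone (a DNSSEC prerequisite)?"""
    try:
        answer = dns.resolver.resolve(zone, "DS")
        return answer.rrset is not None and len(answer.rrset) > 0
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False

# has_ds_record("example.com")  # True only if the zone is DNSSEC-signed
```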
EDNS (Ethereum DNS) is a trustless system. Reading EDNS records does not cost EDNS clients any gas/particles/opcodes/ops/money.
Blockcerts is designed to issue any sort of credential and to allow signing of any RDF graph, such as JSON-LD.
> Credentials are a part of our daily lives; driver's licenses are used to assert that we are capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries. This specification provides a mechanism to express these sorts of credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable
> This specification describes mechanisms for ensuring the authenticity and integrity of Verifiable Credentials and similar types of constrained digital documents using cryptography, especially through the use of digital signatures and related mathematical proofs. Cryptographic proofs enable functionality that is useful to implementors of distributed systems. For example, proofs can be used to: Make statements that can be shared without loss of trust,
> Decentralized identifiers (DIDs) are a new type of identifier that enables verifiable, decentralized digital identity. A DID refers to any subject (e.g., a person, organization, thing, data model, abstract entity, etc.) as determined by the controller of the DID. In contrast to typical, federated identifiers, DIDs have been designed so that they may be decoupled from centralized registries, identity providers, and certificate authorities. Specifically, while other parties might be used to help enable the discovery of information related to a DID, the design enables the controller of a DID to prove control over it without requiring permission from any other party. DIDs are URIs that associate a DID subject with a DID document allowing trustable interactions associated with that subject.
> Each DID document can express cryptographic material, verification methods, or services, which provide a set of mechanisms enabling a DID controller to prove control of the DID. Services enable trusted interactions associated with the DID subject. A DID might provide the means to return the DID subject itself, if the DID subject is an information resource such as a data model.
For another example of how Ethereum might be useful for certificate transparency, there's a fascinating paper from 2016 called "EthIKS: Using Ethereum to audit a CONIKS key transparency log", which is probably way ahead of its time.
- Let's Encrypt's Oak CT log is also powered by Google/trillian, which is a trustful, centralized database
- e.g. Graph token (GRT) supports Indexing (search) and Curation of datasets
> And what about indexing and search queries at volume, again without replication?
My understanding is that the Sigstore folks are now more open to the idea of a trustless DLT? "W3C Verifiable Credentials" is a future-proof, standardized way to sign RDF (JSON-LD) documents with DIDs.