Decentralized Identifiers (DIDs) v1.0 Becomes a W3C Recommendation (w3.org)
194 points by Tomte on Aug 5, 2022 | 102 comments



I was initially pretty hyped when I read the abstract for DIDs, bookmarked the spec, and read it later. The "spec" is a bunch of buzzwords and vague, generic "concepts". The DIDs themselves mean basically nothing; it's the "methods" that must have their own specifications and actually "do something".

Another feeling you can quickly get from DIDs is that they're blockchain-centric.

The entire concept is "jack of all trades, master of none". I actually hope to be wrong, and see some more fully fledged implementations/examples of real world use-cases, because I love the idea of federated/decentralized identity.


Both Google and Mozilla objected to this standard because the "method" is left undefined. W3C overruled them. https://www.theregister.com/2022/07/01/w3c_overrules_objecti...


Funny to see Google and Mozilla siding on ethical issues. It really seems like the W3C has finally lost its compass and now wants to venture into the blockchain. The only positive thing is to see Google losing a standards fight for once; however, I think it might just have been the wrong anarchist endeavour inside the W3C. In the end we will get more centralisation, because it looks like nobody except the big players can push standards at a technically decent level (indirectly pushing their agenda).


> Funny to see Google and Mozilla siding on ethical issues

It's not the first time Google and Mozilla sided together against the W3C on web standards, and the last notable time resulted, over time, in the W3C ultimately being displaced from any role in the HTML and DOM standards.

The standards group that implementers listen to (which, for some reason, seems to be the one that listens to implementers, when there are competing options) is the only one that matters, in practice.


> It's not the first time Google and Mozilla sided together against the W3C on web standards, and the last notable time resulted, over time, in the W3C ultimately being displaced from any role in the HTML and DOM standards.

By who?


WHATWG, though I recall that Apple and Opera were equally influential in the move.



> The only positive thing is to see Google losing a standards fight for once

It is far from the first time. For instance, Google was involved in the WHATWG, which developed the HTML standard while the W3C pushed XHTML.

The standards war itself is only lost when nearly nobody uses the standard, which is what happened to XHTML: it lost to WHATWG’s HTML when browsers simply didn’t use XHTML.

It sounds like a lot of cryptocurrency companies wish to prop up this standard, but it is not clear to me that actual people would use it for a non-circular goal.


> browsers simply didn’t use XHTML

Do you mean "developers didn't use XHTML"?

All browsers implement XHTML. It's referred to as "the second concrete syntax for HTML" in the WHATWG spec.[1]

Indeed many websites do use XHTML, the HTML application of XML. However, since proper documents render identically, you won't be aware that you're visiting an XHTML site - that is, unless you check the source.

Fun history side note: Browsers like Netscape and Internet Explorer didn't agree on how to parse HTML in the past. They handled tag omission differently, for example in overlapping hierarchies (<p><b></p></b>). To fix this mess, Sir Tim asked well-respected SGML practitioners to create a clean subset of SGML and define a document type definition (DTD) for HTML. They came up with XML, the clean subset, and XHTML, the DTD. [2]

Basically, XHTML was the first actual standardization of HTML. Unfortunately, minor syntax errors will prevent an XHTML document from rendering, which, to some degree, is probably why it was never widely accepted by developers.

[1] https://html.spec.whatwg.org/multipage/introduction.html#htm...

[2] https://www.youtube.com/watch?v=Q4dYwEyjZcY


> All browsers implement XHTML.

Internet Explorer did not. It completely refused to render XHTML pages served with an XML MIME type (application/xhtml+xml). It would only display pages if they were served with the text/html MIME type, which meant that none of XML’s vaunted features (such as strict parsing) came into play, and such pages were effectively treated as “HTML with syntax errors.”

A big part of why WHATWG was able to dethrone W3C was W3C’s insistence on dropping HTML in favor of XHTML when the overwhelmingly dominant browser of the time had zero support for it.

> They came up with XML, the clean subset, and XHTML, the DTD. … Basically, XHTML was the first actual standardization of HTML.

No, the first formal HTML standard was 2.0 (RFC 1866), which was released in November 1995 and had a DTD that among other things disallowed overlapping hierarchies. XML’s first draft was released a full year later (November 1996), and the first W3C spec was XML 1.0 in 1998. Later that year came the initial drafts for XHTML 1.0, which was a straightforward translation of HTML 4.0 to XML.


In context, the xml serialization of html5 is not what is being referred to.

Although you are right that the issue was more user acceptability and not implementor willingness.


The outcomes of decentralization sound good until you realize it means you're either running your own server, or using a blockchain and needing to protect a private key somehow. But normal humans want nothing to do with either of those responsibilities and will always rely on a centralized service.

If this ID standard included a way to use a centrally controlled email address (the de facto ID standard today that works just fine for most legal activities) or a social login, then maybe some of the bigger players would be on board and it would take hold. As is, it seems like it's just gonna be another crypto fad.


Yes, but the universal ability to create authentic sources of data means people will use what is convenient but always have the option to go to the base layer without permission should they dislike their service.


I don't buy this idea that average people can't manage a keypair. Humans already manage secrets in the form of passwords, it's not that much different.

In the worst-case scenario in which users defer to some weak/centralized system, how is that categorically worse than the centralized systems we already have?


> Humans already manage secrets in the form of passwords, it's not that much different.

Humans are bad at this, which is why we recommend password managers.

That said, I do think keypairs are the way forward, I just also think they need either strong integrated software support in whichever device is being used, or strong external hardware support.

(Yubikeys are nice because they kind of extend the “key” metaphor that people are already used to, but I wish they shipped with a paired backup key that was provisioned with the same key material. Maybe colored red to distinguish it.)


> but I wish they shipped with a paired backup key that was provisioned with the same key material

Two identical keys is less secure for those who would otherwise have bought two different keys.

If you instead buy two different keys, then, when you lose the first, you can know it's safe to continue using the second one. And you can block the first one, without locking yourself out.

Maybe getting two different keys would be a good idea.


The trouble with this is that you need the second key present each time you need to enroll it to an account, meaning you can’t stash it in a safe deposit box as a backup. And you have to remember to add it to each and every account or it’s not really functional as a backup.

Yes, two different keys are more secure, but they have some pretty severe usability problems.


It is! GitHub suggests this, Gmail requires it. Yubikey has a 2-pack discount that's nearly as cheap as a single key.


Here is a spec for the `did:key` method: https://w3c-ccg.github.io/did-method-key/


Thank you. Skimming this was much more informative.


The standard has grown out of the blockchain space, because blockchains finally offered a way to do decentralized PKI.

Most methods are based on blockchain networks.

But there are some that work without blockchains. Like IOTA, IPFS, p2p, web, etc.


I mean, let's be real here. IOTA is a blockchain in all but name, IPFS is substantially blockchain-adjacent, "p2p" is vague to the point of meaninglessness (and isn't actually a registered method), and "web" is silly (a web site is already identified perfectly well by its URL).


I would classify BitTorrent, Gnutella, and other filesharing networks as p2p.


My point is that P2P isn't a single thing -- it's a whole multitude of things, many of which don't make sense to create a DID for. Gnutella is a perfect example of where it wouldn't make sense: the Gnutella protocol didn't provide any way to create a persistent reference to a file that was being shared, and it'd make even less sense to tie an identity to such a file.


It's a bit like comparing monkeys and apes though. Yes they're all primates but the family tree does matter and you can't just mash them together.

IPFS resembles many previous attempts at distributed file storage, which did not use blockchain. They had other ways to encourage fairness, which appears to be the primary use of blockchain in IPFS. The existence of the concept of Merkle trees, named or unnamed, led to blockchain, not the other way around. And it has other children, like some digital signature specs.


IPFS is still looking for something useful it can do.


I think the idea is that it (or some future incarnation of it) eventually replaces most static web hosting, FTP, Bittorrent, and quite a bit of dynamic web hosting. Oh, and maybe messaging stuff like IRC.

Sounds like a big piece to chew, but I think the main hurdle is replacing HTTP(S) on the client side.


Location-independent stable identifiers for immutable Docker images that allow you to cache them wherever you want (including in airgapped envs) and still don't require users to patch your image names with kustomize/helm/whatever.


Microsoft has already lined up the tech in their products. So I'm confident that it's just a matter of time before it becomes available in a shop nearby.


> lined up the tech in their products

Link? Or an explanation as to what this means?

> before it becomes available in a shop nearby

Meaning what exactly?

If you're saying Microsoft will implement DIDs, my question is, "Which of the 50+ methods?"


Here's the link: https://www.microsoft.com/en-us/security/business/solutions/...

I haven't used their implementation yet, but Microsoft initiated the did:ion method. I guess they'll support it :-D In general, the idea with DID methods is that you can support many methods without too much effort - for example the Universal Resolver already implements a good bunch: https://dev.uniresolver.io/

However, pointing in the direction of the many DID method implementations, I agree with you that they're confusing. Many people try their hand at implementing a new method. Most of the methods will not amount to much. I recommend focusing on simple methods like did:key or did:web to get started, and high-throughput methods like did:ion, did:elem, did:orb (all Sidetree-based) for production. did:ethr is also a good starting point for a public blockchain DID method that doesn't require a transaction to create the DID, i.e. no expenses required. did:ethr is also one of the oldest methods and can easily be used in existing Self-Sovereign Identity software solutions.


> https://www.microsoft.com/en-us/security/business/solutions/...

So I went and had a look. There's no specification there that I could see - is there a more specific link I missed?

The white paper was issued in 2018. Is that what there is?

The product is Entra Verified ID - which turns out to be a directory service on Azure. https://docs.microsoft.com/en-us/azure/active-directory/veri...

This appears for all the world like a centralised product marketing itself as "decentralised".


DID resolution is a security operation and has to be done by a trusted component. The document you get back does not have any additional integrity protection on it, so a resolver that lies will basically let a malicious party impersonate anyone.

The resolution processes for DID methods also vary in their processing and storage requirements. Some method implementations may result in gigabytes of local data.

For these and other reasons, I don't believe real-world deployments will resolve more methods than they deem necessary. Of course, that would mean that between implementer networks you have far less portability and interoperability for DIDs.


Ok, I'll have to look more in-depth into the Microsoft link, they link to many more pages including a whitepaper.

Regarding all the blockchain-centric DID methods, would someone wanting to validate a DID (e.g. did:thecoin:whatever_would_go_here) need to hold a copy of the blockchain? (In a scenario where one doesn't want to be dependent on a third party for blockchain interactions.)


For most blockchains you can do light client validation without the full chain (or a full node). A light client only needs to know the block headers to validate a claim.

You can get block headers with very lightweight download work from the peer-to-peer network.

https://geth.ethereum.org/docs/interface/les
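
For intuition, here's a minimal sketch (plain Python with SHA-256; this is only the shape of the idea, since real chains like Ethereum use their own hash constructions and Merkle Patricia tries) of how a light client checks that something is included in a block using only the header's Merkle root plus a short inclusion proof:

    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify_inclusion(leaf: bytes, proof: list, trusted_root: bytes) -> bool:
        # Walk up the tree: combine the running hash with each sibling hash,
        # respecting which side the sibling sits on, and compare the result
        # to the Merkle root taken from a block header the client trusts.
        node = sha256(leaf)
        for sibling, side in proof:
            node = sha256(sibling + node) if side == "left" else sha256(node + sibling)
        return node == trusted_root

The proof is just a handful of sibling hashes, so a client that only keeps headers can still check inclusion.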


Depends on the implementation and the blockchain, but for many cases there are ways to make such resolutions provably correct, such that you don't have to hold a copy of the blockchain and you don't have to trust that a third party did the resolution correctly.


MS has a habit lately of backing lots of standard proposals that fizzle and go nowhere.


see current python

    azure-identity==1.7.1
    azure-digitaltwins-core==1.1.0
    azure-cognitiveservices-vision-face==0.5.0
    azure-cognitiveservices-anomalydetector==0.3.0
    azure-communication-identity==1.0.1


Microsoft was always big on identity with Active Directory. Obviously, they'll jump on this as soon as possible to call more shots.


You must not be familiar with PKI or the meaning of decentralized if the reliance on blockchain networks is a surprise to you.


> The entire concept is "jack of all trades, master of none"

The quote is “a jack of all trades is a master of none, but oftentimes better than a master of one.”


No it isn’t.

Wikipedia says “there are no known instances of this second line dated to before the twenty-first century”:

https://en.wikipedia.org/w/index.php?title=Jack_of_all_trade...


So I've just scanned over this stuff, maybe someone can fill in some gaps for me.

There's a list of DID methods "in development" [1]. Is this the list of methods, or is there a centralized registry, or are these just "known" methods?

If there's a centralized registry -- then this isn't really "decentralized", is it? On top of that there's a land grab that's already begun for the method names, and isn't that going to kill the spec? com, nft, object, and web are already registered by private orgs.

But if it's not centralized, then it's not unique. What stops me from making my own "verifiable registry" [2] for e.g. `did:nft:internet` which cryptographically proves I own the internet? "Ceramic Network" (the owner of "nft" on the w3c site) says they own it in their registry... but who's correct?

[1] https://w3c.github.io/did-spec-registries/#did-methods

[2] https://www.w3.org/TR/did-core/#dfn-verifiable-data-registry


It's such a bad spec it'll never be implemented by anyone

In a roundabout way, W3C successfully did the opposite of what they claimed they were trying to do, killing the entire concept and ensuring it won't ever actually happen.


That's the list of methods; and yes, there is very much a land grab going on right now.

No, there's nothing stopping you making your own methods. But will anyone actually use it?


So if I'm building a service that lets somebody log in with a DID, and I'm using a DID library to verify your authN, then that library needs a different code block for every one of those methods?? LOL. What could possibly go wrong? Or, less sarcastically, how could this possibly be expected to work?


Yeah DID is a dumpster fire. It's a consulting company's dream spec. Anything is possible but almost nothing is required. It smells a bit like SAML all over again, wherein they try to satisfy every stakeholder and end up satisfying none.


To be fair, SAML was a victory at the time - in that they could even get the parties involved to agree to come to the table and write specs and agree to implement them.

IMHO, such consensus work improved in quality for a while after the WS-* dumpster fire was put out by JSON.


Oh 100% SAML over WS-*. At least SAML can be made to work across vendors. I've never seen anything WS work outside of MSFT products.


"that library needs a different code block for every one of those methods"

Yes, and that is trusted code - even with isolation, a compromise of that method's resolution code would result in malicious parties being able to impersonate anyone else using that method.

There are use cases where you don't want correlation, in which case the decentralized identifier might exist only for you to log into a single web site. At that point, it might be easier to use a method like did:key or did:jwk which encode all of their information into the URL itself, and forego the ability to rotate or revoke keys.
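
To make the "different code block per method" point concrete, here's a rough sketch (hypothetical function names, not from any real DID library) of what a multi-method resolver boils down to; every entry in the table is code you have to trust:

    def resolve_did_key(specific_id: str) -> dict:
        ...  # derive the DID document directly from the encoded public key

    def resolve_did_web(specific_id: str) -> dict:
        ...  # fetch https://<domain>/.well-known/did.json (or the path variant) over TLS

    RESOLVERS = {"key": resolve_did_key, "web": resolve_did_web}  # one entry per supported method

    def resolve(did: str) -> dict:
        scheme, method, specific_id = did.split(":", 2)
        if scheme != "did" or method not in RESOLVERS:
            raise ValueError("unsupported DID")
        return RESOLVERS[method](specific_id)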


> there is very much a land grab going on right now.

Where and how?

Edit: I just saw the list. It's very land grabby feeling.


Where is the list?


That method registry is simply a self-service way to register DID methods, so it functions like the latter -- "known" methods.

There's no editorial/curation aspect to this registry; that's out of scope. The requirements are simply basic DID method conformance -- specify how the create/read/update/deactivate operations are implemented, security considerations, and so on.

The land grab concern you mention is real, but would likely not be addressed at this level (again, it would be considered out of scope), but could happen in a different standards group.

Some relevant work includes defining criteria by which to evaluate DID methods -- i.e., does it support update operations (e.g., did:key doesn't), does it rely on a blockchain and if so, is it permissioned, and numerous other factors. Probably the most comprehensive treatment of these is the DID Method Rubric [1].

With Verite [2], our considerations were mostly around no/low-cost, interoperable, open-source methods for the open-source implementation (although anyone can use any method they like). The ones we're most likely to add open-source implementations for next are did:pkh and did:ion.

[1] https://www.w3.org/TR/did-rubric

[2] https://verite.id/verite/patterns/identifier


Worth recalling that some major concerns have been expressed about this stuff. See discussion a month ago: https://news.ycombinator.com/item?id=31939871



Here is a link to the spec for those who, like me, found it weird that it was missing from the announcement: https://www.w3.org/TR/did-core/


Interesting that the announcement includes long lists of "testimonials" from W3C members and from "industry", but there are some glaring omissions. Why no endorsement from Apple, Microsoft, Google, or Mozilla? Even smaller players like Opera and Brave are missing.



Google is not very surprising, because they're probably the largest issuer of centralized identities. DIDs eat their core business.


Well, Google did not outright reject it; they said it's not complete and the final Recommendation should wait until the methods are implemented and used in practice (like most standards nowadays).


It is confusing because Microsoft works on Decentralized Identity and would almost certainly have wanted to be involved with this.


Microsoft already has their own Decentralized Identity products in place. Check out https://www.microsoft.com/en-us/security/business/solutions/... They are and were also actively involved in the creation of standards and tooling, e.g. the did:ion sidetree-based DID method was created by them.


From the snail DID:

> Write out your DID document according to the data model in [DID-CORE]. Include properties from [DID-CORE], and any other metadata you deem suitable. You MAY type it out and print it onto your paper, you MAY hand write it in pen or pencil or crayon, you MAY use finger painting or cut out and glue small pieces of paper. Express yourself however you like. You SHOULD NOT use glitter or food.


Which DID methods support rotation?

  did:key Not Supported
  did:web ???
Do only Proof-of-work methods (e.g. blockchains) support rotation?

  did:ion
Are there no DID methods based on Keybase-like tech?

https://www.w3.org/TR/2022/REC-did-core-20220719/#verificati...

  9.7 Verification Method Rotation
  Not all DID methods support verification method rotation.
https://github.com/w3c-ccg/did-method-key/blob/f511ed730f7d2...

  The did:key Method v0.7
  5.1 Key Rotation Not Supported
  This section is non-normative.
https://github.com/w3c-ccg/did-method-web/blob/1b4225ffd9be0...

  ???
https://lists.w3.org/Archives/Public/public-new-work/2021Sep...

   * Proof-of-work methods (e.g. blockchains) are harmful for sustainability
  (s12y).


KERI supports rotation; check it out, it's pretty nice actually: https://identity.foundation/keri/did_methods/


This relies on public-key cryptography? If that is the case, then who is responsible for maintaining the private key? If that responsibility is with the user, then what happens if the user loses the private key? Does the user lose the identity as well? Is there a way to recover that?


There'll surely be custodial services if this takes off.


It depends on the DID method that you're using. The DID method is kind of the identity management system for Decentralized Identifiers. Many DID methods are based on blockchains, and with those there are usually no recovery mechanisms for DIDs in place - if you lose your private key, the identifier can't be updated and you can't prove ownership of the identifier anymore, which in turn renders associated credentials (see Verifiable Credentials https://w3c.github.io/vc-data-model/) unusable.

Custodial services are a good way out. Another option with DIDs is that you can add more than one key to a DID. This way you can have one key that is stored away somewhere safe and is only used for recovery purposes.
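
As a rough illustration (the identifier and key values are made up; the shape follows the DID core data model), such a DID document could look something like this, with only the first key used for day-to-day authentication and the second held offline for recovery:

    {
      "@context": ["https://www.w3.org/ns/did/v1"],
      "id": "did:example:123456789abcdefghi",
      "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "z6Mk...dayToDayKey"
      }, {
        "id": "did:example:123456789abcdefghi#recovery-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "z6Mk...offlineRecoveryKey"
      }],
      "authentication": ["did:example:123456789abcdefghi#key-1"]
    }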


This reminds me of web service/UDDI registries that were supposed to come, and all the WS-* buzz in the early 2000s. Some of them did come, only to be shut down within a few years.


That's exactly the same comparison I've been making regarding the W3C VC and DID specifications - we're basically repeating the bad parts of the WS specifications.


Somebody up above mentioned the land grab smell of the DID spec.

So what's different this time? gRPC, protobuf and GraphQL are out there vs. SOAP or CORBA? Some new thing about to rev up? Just plain old loss aversion?

Lambda? We need a FrontPage or Macromedia ColdFusion for that...

I guess that's it, somebody else from the Roblox generation can pick that up.


The difference this time is that there are quite a number of blockchains that are very non-transparently attempting to become rent seekers.

There's a chance for someone to earn $x per ID card or verifiable credential issued; you're essentially becoming a TLD operator, only with something 95% of the population of developed countries will use.


Okay, it is not truly decentralised then.


web3auth offers such a service.


There are already some really simple solutions: for example, https://keri.one specifies one of the most obvious solutions to the problem. Then it proceeds to wrap it up in a lot of web3-inspired gobbledygook.

Have you ever used 2FA systems where you can create various printed "back-up keys"? Take that idea, then run with it a little, and you have KERI.


We're already using DIDs on the Chia blockchain, and they can be used to verify who issued an NFT to prevent fraud. People have already built games on this and used it to save character profiles (DIDs). Custody solutions are what save you; you can see Bram Cohen talk about it here: https://odysee.com/@Chia:d/off-the-chain-Bram-Cohen-2022:5


Here is a link to the Solid [1] DID method: https://solid.github.io/did-method-solid/

[1] https://solidproject.org/


> The Solid DID method specification is a specialisation of the Web DID method [DID-WEB] whereby write (create, update, delete) operations are more tightly specified. The Solid DID method is designed to be generic enough to be compatible with other Web-based systems which implement read and write operations using HTTP methods (RFC 7231). For this reason, we are considering whether to name the DID method did:https or did:rest or similar.

From this paragraph it seems pretty messy that there's both did:web and did:solid. Is the idea that every random company under the sun writes their own slightly different did-url to https mapping? And that every user-agent implements all of them?
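
For comparison, the did:web mapping itself is at least mechanical; a rough Python sketch of my understanding of it (ignoring edge cases like percent-encoded ports):

    from urllib.parse import unquote

    def did_web_to_https(did: str) -> str:
        # did:web:example.com             -> https://example.com/.well-known/did.json
        # did:web:example.com:users:alice -> https://example.com/users/alice/did.json
        assert did.startswith("did:web:")
        segments = [unquote(s) for s in did[len("did:web:"):].split(":")]
        if len(segments) == 1:
            return "https://" + segments[0] + "/.well-known/did.json"
        return "https://" + "/".join(segments) + "/did.json"

The worry with the quoted paragraph is that did:solid (or did:https, did:rest, ...) would each end up being a slightly different variation on that.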


There's still a lot to figure out in the space, and the blockchain adjacency always makes things feel like a land grab or a first-mover obsession. I don't think that much matters though. The majority of us will still keep our identity with centralized providers, but the real win is in reducing developer overhead and improving the overall security model of identity on the web as the methods and registries get fleshed out.

Something important to track is the OIDC-SIOP v2 spec [1]. As this gets adopted by libraries and services that people are already using to handle their auth, it becomes effectively easier to "turn on" self-custody of identities for your users. I imagine there will be a lot of different options in terms of methods and registries to choose to accept, and the centralized providers of today will probably have a large say in what methods and registries get accepted.

Ultimately there are a lot of use cases enabled by deferring to the user for their identity and potentially other verifiable claims about themselves. The most obvious use case is phones using their secure elements to actually provide a password-less UX on the web while also allowing developers to skip dealing with user authentication. Less obvious (to most people) are things like verifying you own some NFT, or verifying that you have Bitcoin in some escrow so you're likely not a bot willing to get blacklisted on some platform.

This is the step that's required to create the real land grab over semantic User space - where "JoeSchmoe" really is the one and only.

[1] https://openid.net/specs/openid-connect-self-issued-v2-1_0.h...


The idea behind DIDs and VCs (verifiable credentials, which go along with DIDs to prove claims about an entity) is fantastic.

Decentralize and normalize global IDs! Have ways to express data about them that contains the proof of ownership of the ID embedded.

The issues are with the execution, in my opinion: the spec is too complex, the DID methods are not nearly mature or constrained enough (some don't even use PKI...), and verifiable credentials / presentations are hard to get going.

For this to take off, they need to overcome a 3 sided market cold start (issuers, holders, controllers), with no clear monetization behind it.

I hope it works but I'm guessing we're not quite there.


I assume you'll have a cryptographic key to state that a DID belongs to you. But what happens if that key gets lost/stolen? Do you lose all your online presence with no way to recover it?


If you use different keys for different purposes, then only one purpose is lost.

Issuers of credentials can ensure that they have an expiration date, so you can get a new credential issued with a new key after the old one is lost.

Also, there are services that help to recover your key.


Why not simply a URI with a UUID in it?


You need some public/private key stuff to prove you own the UUID. Add in a PGP key and you're pretty much done, I think.


More flexibility.

The did:peer method, for example, encodes the DID document directly in its URI.


Like `data:text/plain;base64,SGVsbG8sIFdvcmxkIQ==`?

I must be too old, I do not understand the interest of this stuff compared to controlling a domain.


The only required property in a DID document is id, so that is the only statement guaranteed to be in a DID document.

DID is little more than a list of defined names and their intended usage. It's not automatically bad, but it's just a skeleton without any meat. In some sense, DID is a pre-emptive standard template. If new identity protocols write their standards to be convertible to DID, then they end up having systems that can interact (after some testing) when their methods intersect.
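
To the first point: as far as I can tell, a conforming DID document can be as minimal as this (identifier made up), and everything beyond it is optional:

    {
      "@context": "https://www.w3.org/ns/did/v1",
      "id": "did:example:123456789abcdefghi"
    }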


"Whatsmore, DIDs have the unique property of enabling the controller to verify ownership of the DID using cryptography."

Content-based identifiers (CIDs) for IPFS also verify content with cryptography, so that's not unique. CIDs are also decentralized. However, the following looks promising:

> 4) DID metadata can be discovered (resolvable).


I wonder if this drives a schism in Google's Chrome implementation of W3C recommendations, given Google's objections to the spec. I feel like they have already shown a willingness to diverge from the specs and do what they feel is best, with a large number of experimental Chrome features not included in W3C standards.


Can someone explain how they are actually implemented? How/where can I create a new DID?


DIDs support multiple storage mechanisms. Each storage mechanism defines how to create, read, write/update, and delete DIDs. Here's a list of the already implemented and published mechanisms: https://w3c.github.io/did-spec-registries/#did-methods. They range from no storage (e.g. did:key, you derive the DID from a public key but can't change/update anything about that DID) to stored on a webserver (e.g. did:web) to stored on a blockchain (e.g. did:ethr).

Often, a DID is created from a public/private key pair that is used to sign a transaction that's specific to the DID method. The DID then becomes publicly visible with the associated configuration, e.g. multiple key pairs associated with the DID, service endpoints that allow an interaction with the DID, etc.
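
For the simplest case, did:key, there isn't even a transaction; the identifier is derived directly from the public key. A rough sketch of my understanding of that derivation in Python (using the `cryptography` and `base58` packages; details like the multicodec prefix should be checked against the did:key draft):

    import base58  # pip install base58
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    private_key = Ed25519PrivateKey.generate()  # stays with the controller
    public_bytes = private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    # ed25519-pub multicodec prefix (0xed as a varint), then multibase base58btc ("z" prefix)
    did = "did:key:z" + base58.b58encode(bytes([0xED, 0x01]) + public_bytes).decode()
    print(did)  # e.g. did:key:z6Mk...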


Where does one submit a method implementation?


Look into the Sidetree protocol; I've seen a lot of people building on top of it.

Link: https://identity.foundation/sidetree/spec/


Is there a place to register a new DID method? And what does Sidetree have to do with the W3C DID spec?


So, I take it that the "decentralized" aspect of all this stuff really comes into play when all of the server hardware is surgically implanted into our limbs!


No, the cloud systems still need to be there (to track when you are trying to use your identity), but they will rely on private keys that you can surgically implant in your limbs.

In fact a more realistic (but symbolically equivalent) scenario is that you'll be expected to carry around a device with you at all times that is biometrically linked to your limbs, and auto-updates its code (and the EULA that you agreed to in perpetuity).


So if you like something incorrect on Twitter, all your accounts with Google and Amazon, and your iPhone, go dark simultaneously?


I guess I'll be able to drop LDAP when LDID is introduced.


You will even be able to drop "in" did:ldap: if you want.


In some ways it's surprising that it took us so long to arrive here, but very exciting that things are moving in this direction nonetheless!


That's an awesome generic sentence that would fit right into https://xkcd.com/1022/


Perfect for this non-spec.



