
> if I want to host a website …

The fundamental problem is a question of trust. There are three ways:

* Well known validation authority (the public TLS model)

* TOFU (the default SSH model)

* Pre-distribute your public keys (the self-signed certificate model)

Are there any alternatives?

If your requirement is that you don’t want to trust a third party, then don’t. You can use self-signed certificates and become your own root of trust. But I think expecting the average user to manually curate their roots of trust is a clearly terrible security UX.
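For a concrete picture of what "become your own root of trust" looks like, here is a minimal Go sketch that generates a self-signed certificate (the hostname is made up); you would then distribute the resulting PEM to your clients out of band:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Generate a key pair; the public half goes into the certificate.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }

        // A self-signed certificate: issuer == subject, signed with its own key.
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "example.internal"}, // hypothetical name
            DNSNames:     []string{"example.internal"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }

        // Clients that import this PEM into their trust store accept the server
        // directly, with no third-party CA involved.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }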


> Are there any alternatives?

The obvious alternative would be a model where domain validated certificates are issued by the registrar and the registrar only. Certificates should reflect domain ownership as that is the way they are used (mostly).

There is a risk that Let's Encrypt and other "good enough" solutions take us further from that. There are also many actors with an economic interest in the established model, both in the PKI business and among consultants for whom law enforcement is an important customer.


How would you validate whether a certificate was signed by a registrar or not?

If the answer is to walk down the DNS tree, then you have basically arrived at DNSSEC/DANE. However I don’t know enough about it to say why it is not more widely used.
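For what it's worth, the core of DANE is easy to sketch: a TLSA record in (DNSSEC-signed) DNS carries a hash, and the client checks that the certificate the server actually presents matches it. Below is a rough Go illustration of a "3 1 1" check (DANE-EE, SubjectPublicKeyInfo, SHA-256); the hostname and hash are placeholders, and there is no actual TLSA lookup or DNSSEC validation, which the standard library doesn't provide:

    package main

    import (
        "crypto/sha256"
        "crypto/tls"
        "encoding/hex"
        "fmt"
    )

    // daneMatch mimics a TLSA "3 1 1" check: the DNS record carries a SHA-256
    // hash of the SubjectPublicKeyInfo, and the client compares it with the
    // certificate the server presented during the handshake.
    func daneMatch(tlsaHashHex string, conn *tls.Conn) bool {
        cert := conn.ConnectionState().PeerCertificates[0]
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return hex.EncodeToString(sum[:]) == tlsaHashHex
    }

    func main() {
        // In real DANE this hash would come from a DNSSEC-validated TLSA record
        // such as _443._tcp.example.com; here it is just a placeholder.
        tlsaHashHex := "0000000000000000000000000000000000000000000000000000000000000000"

        conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{
            // With DANE the DNS record is the trust anchor, so the usual CA
            // check could in principle be replaced; skipped here only for the sketch.
            InsecureSkipVerify: true,
        })
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        fmt.Println("TLSA match:", daneMatch(tlsaHashHex, conn))
    }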


How do you validate any certificate? You'd have to trust the registrar, presumably like you trust any one CA today. Web browsers do a decent job of keeping up to date with this, and new top-level domains aren't added on a daily basis anyway.

Utilizing DNS, whois, or a purpose-built protocol directly would alleviate the problem altogether, but should probably be done by way of an updated TLS specification.

Any realistic migration should probably exist alongside the public CA model for a very long time.


A recent thread going into the details of why (only a tiny fraction of zones are signed, that count has recently been dropping sharply in North America, and browsers don't support it):

https://news.ycombinator.com/item?id=41916478


There is the web of trust, where you trust people who are trusted by your friends.

There are issues with it, but it is an alternative model, and I could see it being made to work.


Ah, I forgot about that and never really considered it because GPG is so annoying to use, but it is fairly reasonable.

I don’t see how it has many advantages (for the internet) over creating your own CA. If you have a mutually trusted group of people, then they can all share the private key and sign whatever they trust (a rough sketch of this follows below).

I think the main problem is that it doesn’t scale. If party A and party B who have never communicated before want to communicate securely (let’s say from completely different countries), there’s no way they would be able to without a bridge. With central TLS, despite the downsides, that is seamless.
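To make the shared-CA idea above concrete, here is a small Go sketch (all names invented, error handling omitted): whoever holds the group's CA key signs leaf certificates, and anyone who trusts the CA certificate transitively trusts those leaves.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "time"
    )

    func main() {
        // The group's shared root of trust: a CA key and certificate.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "Friends-of-ours CA"}, // hypothetical
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Anyone holding caKey can vouch for a member's server by signing a leaf.
        leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "member.example"},
            DNSNames:     []string{"member.example"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

        // Everyone who trusts the CA certificate now transitively trusts the leaf.
        pool := x509.NewCertPool()
        pool.AddCert(caCert)
        leaf, _ := x509.ParseCertificate(leafDER)
        _, err := leaf.Verify(x509.VerifyOptions{Roots: pool, DNSName: "member.example"})
        fmt.Println("leaf verifies against group CA:", err == nil)
    }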


Providing initial trust via hyperlinks could be interesting.


I think the context is important in the question. I’d argue that a system design question is optimally answered differently depending on the size of the company and the scope of the internal tech stack.

Doing more of the design up front is important in a big tech environment almost all the time because:

a) many different pieces of infrastructure (e.g. databases) are already at your fingertips, so the operational cost of using a new one is a lot lower. Also, cross-organisation standardisation is usually more important than the marginal extra local operational cost.

b) scale makes a big difference. Usually good system design questions are explicit about the scale. It is a bit different from systems where the growth might be more organic.

c) iteration is probably way more expensive for production facing systems than in a smaller company, because everything is behind gradual rollouts (both manual and automated) and migration can be pretty involved. E.g. changing anything at all will take 1 week minimum and usually more. That’s not to say there is no iteration, but it is usually a result of an external stimulus (scale change, new feature, unmodelled dependency etc), rather than iteration towards an MVP.

Now, a lot of this is pretty implicit and it is hard to understand the environment unless you’ve worked in a different company of the same scale. But just wanted to point out that there is a reason that it is the way it is.

Source: SRE at Google


This only somewhat matches my experience as a SWE at Google. (Though I can see how things may well look different from SRE!)

In my view, there was both too much time spent on big designs whose details eventually went stale during implementation (when the real goal was just to lock down some key decision points), and also a lot of iterative design and execution actually happening, with less fanfare.

Another thing I saw is that there are many different kinds of projects at big companies, which sit at different points on the continuum between detailed up front design being necessary and useful vs unnecessary friction.

I guess it's a YMMV situation!


Definitely, there is a lot of iterative design as well - but in my experience usually when changing an existing system rather than building a new one, which is what system design focuses on.

I agree about the continuum of projects as well, but again I’d say most of the system design questions I’ve seen, if they were real projects they’d be the up-front design kind, but this is not necessarily the way all projects go.


> collaboration requires trust and common cause

Much more fundamentally, collaboration requires communication. In a hierarchy, a manager is a fan-in/fan-out point. It is essential that a manager knows what their team members are doing to perform their role effectively.

Trust and common cause are also essential for effective collaboration. However, I’d argue that regardless of whether these are present or not, the statement in question has to be true.


This is mostly a solved problem in regular compilers, and sourcemaps etc do currently exist for JS.

I agree that the tooling/UI around this could be better, but by focusing on this approach, things like Typescript get better as well.


Are there debuggers that can single step over the transpiled bits so that it feels like the methods are implemented natively? Otherwise, it becomes a mess.


I’m not sure if it exists, but it definitely seems doable (a regular debugger has to map instructions to lines of code).

If the browser starts treating JS as assembly, then there would probably be a greater onus for features like this.


That would be nice. Was stepping through some modern react code, and the amount of cruft you see is terrible in the transpiled result.


> sourcemaps etc do currently exist for JS.

Are those being supplied with every website you use?


I think explicitly stating what it doesn’t guarantee is the right thing to do. Otherwise, the API becomes tied to your implementation through implicit details, which can prevent future generic performance improvements (e.g. std::unordered_map’s pointer-stability guarantee in C++ prevents the implementation from being changed to a different representation like absl::flat_hash_map, even though it’s a guarantee most people don’t care about).

Re: performance considerations. This is important, but for a performance-critical application, any compiler or library version change can cause regressions, so it seems better to benchmark often and tackle regressions as they appear, rather than make assumptions based on implicit (or even explicit) guarantees.
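As an aside (my own illustration, not from the comment above): Go bakes this principle into the language itself. Map iteration order is explicitly unspecified, and the runtime randomizes it, so no caller can quietly start depending on the current implementation's ordering:

    package main

    import "fmt"

    func main() {
        m := map[string]int{"a": 1, "b": 2, "c": 3, "d": 4}

        // Iteration order over a Go map is deliberately unspecified (and
        // randomized by the runtime), so callers can never bake the current
        // implementation's ordering into their own code. Run this twice and
        // the output order will usually differ.
        for k, v := range m {
            fmt.Println(k, v)
        }
    }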


Having read through that thread, I’d say most of the (top) comments are about the lacking performance of the UDP/QUIC stack and how meaningful the speeds in the test are. There is a single comment suggesting HTTP/2 was rushed (because server push was later deprecated).

QUIC is also acknowledged as being quite different from the Google version, and incorporating input from many different people.

Could you expand more on why this seems like evidence of Google unilaterally dictating bad standards? None of the changes in protocol seem objectively wrong (except possibly Server Push).

Disclaimer: Work at Google on networking, but unrelated to QUIC and other protocol level stuff.


> Could you expand more on why this seems like evidence of Google unilaterally dictating bad standards?

I guess I'm just generally disgusted by the way Google is poisoning the web in the worst way possible: by pushing ever more complex standards. Imagine the complexity of the web stack in 2050 if we continue to let Google run things. It's Microsoft's old embrace-extend-and-extinguish scheme taken to the next level.

In short: it's not you, it's your manager's manager's manager's manager's strategy that is messed up.


This is making a pretty big assumption that the web is perfectly fine the way it is and never needs to change.

In reality, there are perfectly valid reasons that motivate QUIC and HTTP/2 and I don’t think there is a reasonable argument that they are objectively bad. Now, for your personal use case, it might not be worth it, but that’s a different argument. The standards are built for the majority.

All systems have tradeoffs. Increased complexity is undesirable, but whether it is bad or not depends on the benefits. Making a blanket statement that increasing complexity is bad, and that its runaway effects by 2050 would be even worse, does not seem particularly useful.


Nothing is perfect. But gigantic big-bang changes (like from HTTP/1.1 to 2.0), enforced by a browser monoculture and a dominant company with several thousand individually well-meaning Chromium software engineers like yourself - yeah, pretty sure that's bad.


Except that HTTP/1.1 to HTTP/2 was not a big-bang change at the ecosystem level. No server or browser was forced to implement HTTP/2 to remain interoperable[0]. I bet you can't point to any of this "enforcement" you claim happened. If other browsers implemented HTTP/2, it was because they thought the benefits of H2 outweighed any downsides.

[0] There are non-browser protocols that are based on H2 only, but since your complaint was explicitly about browsers, I know that's not what you had in mind.


You are missing the entire point: Complexity.

It's not your fault, in case you were working on this. It was likely the result of a strategy decided at the Google/Alphabet exec level.

Several thousand very competent C++ software engineers don't come cheap.


I mean, the reason I was discussing those specific aspects is that you're the one who brought them up. You made the claim that HTTP/2 was a "big bang" change. You're the one who made the claim that HTTP/2 was enforced on the ecosystem by Google.

And it seems that you can't support either of those claims in any way. In fact, you're just pretending that you never made those comments at all, and have once again pivoted to a new grievance.

But the new grievance is equally nonsensical. HTTP/2 is not particularly complex, and nobody on either the server or browser side was forced to implement it. Only those who thought the minimal complexity was worth it needed to do it. Everyone else remained fully interoperable.

I'm not entirely sure where you're coming from here, to be honest. Like, is your belief that there are no possible tradeoffs here? Nothing can ever justify even such minor amounts of complexity, no matter how large the benefits are? Or do you accept that there are tradeoffs, and "just" disagree with every developer who made a different call on this when choosing whether to support HTTP/2 in their (non-Google) browser or server?


Edit: this whole comment is incorrect. I was really thinking about HTTP 3.0, not 2.0.

HTTP/2 is not "particularly complex?" Come on! Do remember where we started.

> I'm not entirely sure where you're coming from here, to be honest. Like, is your belief that there are no possible tradeoffs here? Nothing can ever justify even such minor amounts of complexity, no matter how large the benefits are? Or do you accept that there are tradeoffs, and "just" disagree with every developer who made a different call on this when choosing whether to support HTTP/2 in their (non-Google) browser or server?

"Such minor amounts of complexity". Ahem.

I believe there are tradeoffs. I don't believe that HTTP/2 got the tradeoff between complexity and benefit right. I do believe it benefitted Google.


"We" started from you making outlandish claims about HTTP/2 and immediately pivoting to a new complaint when rebutted rather than admit you were wrong.

Yes, HTTP/2 is not really complex as far as these things go. You just keep making that assertion as if it were self-evident, but it isn't. Like, can you maybe just name the parts you think are unnecessarily complex? And then we can discuss just how complex they really are, and what the benefits are.

(Like, sure, having header compression is more complicated than not having it. But it's also an amazingly beneficial tradeoff, so it can't be what you had in mind.)

> I believe there are tradeoffs. I don't believe that HTTP/2 got the tradeoff between complexity and benefit right.

So why did Firefox implement it? Safari? Basically all the production-level web servers? Google didn't force them to do it. The developers of all of that software had agency, evaluated the tradeoffs, and decided it was worth implementing. What makes you a better judge of the tradeoffs than all of these non-Google entities?


Yeah, sorry, I mixed up 2.0 (the one that still uses TCP) with 3.0. Sorry for wasting your time.


> It's Microsoft's old embrace-extend-and-extinguish scheme taken to the next level.

It literally is not.


Because?

Edit: I'm not the first person to make this comparison. Witness the Chrome section in this article:

https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...


While it may be possible to make the comparison for other things Google does (they have done a lot of things), it makes no sense for QUIC/HTTP3.

What are they extending in this analogy? HTTP/3 is not an extension of HTTP. What are they extinguishing? There is no plan to get rid of HTTP/1 or HTTP/2, since you still need them on lots of networks that don't allow UDP.

Additionally, it's an open standard, with an RFC and multiple competing implementations (including Firefox's, and I believe an experimental one in Safari). The entire point of embrace, extend, extinguish is that the extension is not well specified, making it difficult for competitors to implement. That is simply not what is happening here.


What I meant by Microsoft's Embrace, extend, and extinguish (EEE) scheme taken to the next level is what Google has done to the web via Chromium:

They have several thousand C++ browser engineers (and as many web standards people as they could get their hands on, early on). Combined with a dominant browser market share, this has let them dominate browser standards, and even internet protocols. They have abused this dominant position to eliminate all competitors except Apple and (so far) Mozilla. It's quite clever.


> They have abused this dominant position to eliminate all competitors except Apple and (so far) Mozilla.

But that's like all of them. Except Edge, but that was mostly dead before Chrome came on the scene.

It seems like you are using embrace, extend, extinguish to just mean "be successful", but that's not what the term means. Being a market leader is not the same thing as embrace, extend, extinguish. Neither is putting competition out of business.


> What I meant by Microsoft's Embrace, extend, and extinguish (EEE) scheme taken to the next level is what Google has done to the web via Chromium

I think this argument is reasonable, but QUIC isn't part of the problem.


Microsoft just did shit, whatever they wanted. Google has worked with all the W3C committees and other browsers with a tireless commitment to participation, with endless review.

It's such a tired sad trope of people disaffected with the web because they can't implement it by themselves easily. I'm so exhausted by this anti-progress terrorism; the world's shared hypermedia should be rich and capable.

We also see lots of strong progress these days from newcomers like Ladybird, and Servo seems to be gearing up to be more browser-like.


Yes, Google found the loophole: brute-force standards complexity by hiring thousands of very competent engineers eager to leave their mark on the web and eager to get promoted. The only thing they needed was lots of money, and they had just that.

I think my message here is only hard to understand if your salary (or personal worth etc) depends on not understanding it. It's really not that complex.


> I think my message here is only hard to understand if your salary (or personal worth etc) depends on not understanding it. It's really not that complex.

Just because someone disagrees with you, doesn't mean they don't understand you.

However, if you think Google is making standards unnecessarily complex, you should read some of the standards from the 2000s (e.g. SAML).


> Just because someone disagrees with you, doesn't mean they don't understand you.

This is generally true of course, but here the complete non-engagement with the parent's arguments shows either bad faith or an actual lack of understanding. It's more likely to be the former, as the concept is not that difficult to grasp, and it is quite widely accepted. Heck, even the Wikipedia page on EEE has Chromium as an example.


Contributing to an open standard seems to be the opposite of the classic example.

Assume that change X for the web is positive overall. Currently Google’s strategy is to implement in Chrome and collect data on usefulness, then propose a standard and have other people contribute to it.

That approach seems pretty optimal. How else would you do it?


[flagged]


How does this have any relevance to my comment?


How does your comment have any relevance to what we are discussing throughout this thread?


This is one of those HN buzzword medley comments that has only rant, no substance.

- MS embrace extend extinguish

- Google is making the world complex

- Nth level manager is messed up

None of the above was connected to deliver a clear point, just thrust into the comment to sound profound.


It depends on whether it’s meaningfully slower. QUIC is pretty optimized for standard web traffic, and more specifically for high-latency networks. Most websites also don’t send enough data for throughput to be a significant issue.

I’m not sure whether it’s possible, but could you theoretically offload large file downloads to HTTP/2 to get the best of both worlds?


> could you theoretically offload large file downloads to HTTP/2

Yes, you can! You’d have your websites on servers that support HTTP/3 and your large files on HTTP/2 servers, similar to how people put certain files on CDNs. It might well be a great solution!
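As a rough sketch of that split in Go (hosts, paths, and cert files are made up, and the HTTP/3 side would need a QUIC-capable front end, which the standard library doesn't provide): the main origin serves pages and redirects big downloads to a separate host that only speaks HTTP/1.1 and HTTP/2 over TCP.

    package main

    import (
        "log"
        "net/http"
        "strings"
    )

    func main() {
        mux := http.NewServeMux()

        // Ordinary pages stay on the main origin (the one you would also expose
        // over HTTP/3 via a QUIC-capable front end or CDN).
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello from the main origin\n"))
        })

        // Large downloads are redirected to a separate host that only speaks
        // HTTP/1.1 and HTTP/2 over TCP. "downloads.example.com" is hypothetical.
        mux.HandleFunc("/downloads/", func(w http.ResponseWriter, r *http.Request) {
            target := "https://downloads.example.com" + strings.TrimPrefix(r.URL.Path, "/downloads")
            http.Redirect(w, r, target, http.StatusTemporaryRedirect)
        })

        // Go's net/http negotiates HTTP/2 automatically over TLS (via ALPN), so
        // a downloads host like this needs no extra configuration to serve H2.
        // cert.pem and key.pem are placeholder paths.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", mux))
    }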


High-latency networks are going away, too, with Cloudflare eating the web alive and all the other major clouds adding PoPs like crazy.


Can you not just set up a new passkey using a different provider (eg. Bitwarden)? It is a bit inconvenient, since it has to be done manually for every site.


A bit is an understatement. If passkeys become widespread, we are talking about something like 100 sites.


I have over 500 logins in my Bitwarden account right now. Many of those are not important in the least, but hundreds of them are.


If you go for passkeys, start putting them in your own provider from the start? Vaultwarden is a nice option.


AFAIK, you can register your passkeys using your own provider (eg. Bitwarden). I’ve not personally used it too much, but the option is there.

The remaining issue is moving the credentials between providers, which is an annoying limitation. But you can always add a different passkey to the site using the provider you want, so although it's annoying, it is not the end of the world…

The original limitation is similar to the usability of actual physical security keys, which (depending on the setup mode) are deliberately designed such that the private key material is not recoverable. Software-based keys don’t HAVE to share the same limitation, but this seems more like a missing feature than malice on the part of the creators of the spec.


> AFAIK, you can register your passkeys using your own provider (eg. Bitwarden).

Why should we even need a third-party provider? Imagine needing a third-party "provider" for your own ssh keys.


If you only want first-party, you can presumably implement the spec yourself and do whatever you want with the data?

My example was only to point out that there exist self-hostable passkey providers.


Do you use the same SSH keys on multiple devices? I certainly don't. If you needed or wanted to (you don't), you'd need some way to sync them across multiple devices securely.

When I use passkeys on a single device, the "provider" is the OS, same as with my SSH keys.


> Do you use the same SSH keys on multiple devices?

Yes.

> you'd need some way to sync them across multiple devices securely.

I take out my physical keychain and plug in my yubikey. Then, after typing in the password to my yubikey, I can use ssh and pgp until I unplug my yubikey. It is a hell of a lot more secure than storing your ssh keys on disk regardless of whether or not you use a unique key per device. I could lock someone in a room with my computer, my yubikey, and my password, and they still wouldn't be able to copy my ssh key.


pedantic nit: the yubikey is a device so you are arguably using one unique key per device


Haha technically true, but I don't think that was the kind of device they were referring to. Even so, it is possible to use the same key on multiple yubikeys. You generate a PGP key on a secure computer and then load that key onto multiple yubikeys. Then you use gpg as your ssh agent. But this is less secure than using keys generated on-device by the yubikey because your private key exists (hopefully temporarily) as a file on the computer where you generated it.


No this is absolutely what I meant: A passkey and a PGP key function very similarly in this capacity, a passkey for a site can be generated on a yubikey and used across devices in just the same way.


> Do you use the same SSH keys on multiple devices?

Yes. For example, when upgrading or reinstalling a system.

> If you needed or wanted to (you don't), you'd need some way to sync them across multiple devices securely.

`scp -r`

> the "provider" is the OS, same as with my SSH keys.

And you have full access to ~/.ssh, and you can move, copy, update, rename, or delete them however you like, without a "Credential Exchange Protocol (CXP)" and "Credential Exchange Format (CXF)" to move between the black boxes of third-party providers.


> And you have full access to ~/.ssh, and you can move, copy, update, rename, or delete them however you like.

I think this comes down to me never having wanted to do a copy or move (I create and maintain new keys when I create new devices), which is exactly the same experience I get with a passkey (and is, generally, a more secure experience, since my keys cannot be exfiltrated because copying them is implicitly verboten).


I certainly back up my SSH keys. That way, if my laptop dies today, I can be up and running tomorrow without anyone else being involved.

How do I ensure I can access my accounts if my phone-containing-passkeys is lost/stolen/dies without backups?


You don't. Same as a physical key for your home, you have backups.

Whether that's having multiple separate keys/devices registered with your accounts or a single key stored in a password manager, you need to have a fallback plan.


> Do you use the same SSH keys on multiple devices?

Assuming you mean client devices, yes, depending on my personal relationship/control of the device. (For servers, the answer is "very yes".)

For example, my personal laptop and desktop may have the same private key, and I will backup/restore that same key onto either of them if they are reinstalled or replaced with new hardware.

However my work laptop gets its own, so that I can more-easily limit what it can access or cancel it in the future.


I think retries are sketchy. Sometimes they really are necessary, but sometimes the API can be designed around at-most-once semantics.

Even with a retry scheme, unless you are willing to retry indefinitely, there is always the problem of missed events to handle. If you need to handle this anyway, you might as well use that assumption to simplify the design and not attempt retries at all.

From a reliability perspective, it is hard to monitor whether delivery is working if the event rate is variable. Instead, designing the system with a synchronous endpoint the webhook receiver can call periodically to “catch up” has better reliability properties. Incidentally, this scheme handles at-most-once semantics pretty well, because the receiver can periodically catch up on all missed events.
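A minimal Go sketch of such a catch-up endpoint (the event shape, route, and cursor parameter are all made up): the receiver polls with the last sequence number it has seen and gets back everything newer.

    package main

    import (
        "encoding/json"
        "net/http"
        "strconv"
        "sync"
    )

    // Event is a hypothetical event shape; Seq is a monotonically increasing
    // sequence number that the receiver uses as its cursor.
    type Event struct {
        Seq     int64  `json:"seq"`
        Payload string `json:"payload"`
    }

    var (
        mu     sync.Mutex
        events []Event // in-memory log; a real system would persist this
    )

    // GET /events?after=<seq> returns every event with Seq > after, so a
    // receiver that missed webhook deliveries can periodically reconcile.
    func catchUp(w http.ResponseWriter, r *http.Request) {
        after, _ := strconv.ParseInt(r.URL.Query().Get("after"), 10, 64)

        mu.Lock()
        missed := []Event{}
        for _, e := range events {
            if e.Seq > after {
                missed = append(missed, e)
            }
        }
        mu.Unlock()

        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(missed)
    }

    func main() {
        http.HandleFunc("/events", catchUp)
        http.ListenAndServe(":8080", nil)
    }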


That's a good call: having a "catch me up" API you can poll is a very robust way of dealing with missed events.

Implementing robust retries is a huge favor you can do for your clients though: knowing that they'll get every event delivered eventually provided they return 200 codes for accepted events and error codes for everything else can massively simplify the code they need to write.
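A minimal sketch of that provider-side retry loop in Go (the URL and retry policy are made up): keep retrying with exponential backoff until the receiver returns a 2xx, and give up after a bounded number of attempts.

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
        "time"
    )

    // deliver posts one webhook payload and retries with exponential backoff
    // until the receiver acknowledges it with a 2xx status or attempts run out.
    func deliver(url string, payload []byte) error {
        backoff := time.Second
        for attempt := 1; attempt <= 5; attempt++ {
            resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode >= 200 && resp.StatusCode < 300 {
                    return nil // receiver accepted the event
                }
            }
            time.Sleep(backoff)
            backoff *= 2 // 1s, 2s, 4s, 8s, ...
        }
        return fmt.Errorf("giving up on %s after 5 attempts", url)
    }

    func main() {
        // "receiver.example.com" is a placeholder endpoint.
        if err := deliver("https://receiver.example.com/hooks", []byte(`{"event":"demo"}`)); err != nil {
            fmt.Println(err)
        }
    }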


Hi there! I'm one of the authors of the pgstream project and found this thread very interesting.

I completely agree, retry handling can become critical when the application relies on all events being eventually delivered. I have created an issue in the repo (https://github.com/xataio/pgstream/issues/67) to add support for notification retries.

Thanks for everyone's feedback, it's been very helpful!

