0xbadcafebee's comments | Hacker News

I hate this, but I'm also glad it's happening, because it will speed up the demise of Web PKI.

CAs and web PKI are a bad joke. There are too many ways to compromise security, too many ways to break otherwise-valid web sites/apps/connections, and too many organizations that can be tampered with; the whole process is too complex and bug-prone.

What Web PKI actually does, in a nutshell, is prove cryptographically that at some point in the past, there was somebody who had control of either A) an e-mail address or B) a DNS record or C) some IP space or D) some other thing, and they generated a certificate through any of these methods with one of hundreds of organizations. OR it proves that they stole the keys of such a person.

It doesn't prove that who you're communicating with right now is who they say they are. It only proves that it's someone who, at some point, got privileged access to something relating to a domain.

That's not what we actually want. What we actually want is to be assured that the remote host we're talking to right now is genuine, and to keep our communication secret and safe. There are other ways to do that which aren't as convoluted and vulnerable as the above. We don't have to twist ourselves into all these knots.

I'm hopeful changes like these will result in a gradual catastrophe which will push industry to actually adopt simpler, saner, more secure solutions. I proposed one years ago but nobody cares because I'm just some guy on the internet and not a company with a big name. Nothing will change until the people with all the money and power make it happen, and they don't give a shit.


So it's being pushed because it'll be easier for a few big players in industry. Everybody else suffers.

It's a decision by Certificate Authorities, the ones that sell TLS certificate services, and web browser vendors. One benefits from increased demand for its product, while the other benefits by adding overhead to the management of their software, which raises the minimum threshold to be competitive.

There are security benefits, yes. But as someone that works in infrastructure management, including on 25 or 30 year old systems in some cases, it's very difficult to not find this frustrating. I need tools I will have in 10 years to still be able to manage systems that were implemented 15 years ago. That's reality.

Doubtless people here have connected to their router's web interface using the gateway IP address and been annoyed that the web browser complains so much about either insecure HTTP or an unverified TLS certificate. The Internet is an important part of computer security, but it's not the only part of computer security.

I wish technical groups would invest some time in real solutions for long-term, limited access systems which operate for decades at a time without 24/7 access to the Internet. Part of the reason infrastructure feels like running Java v1.3 on Windows 98 is because it's so widely ignored.


It is continuously frustrating to me to see the arrogant dismissiveness which people in charge of such technical groups display towards the real world usage of their systems. It's some classic ivory tower "we know better than you" stuff, and it needs to stop. In the real world, things are messy and don't conform to the tidy ideas that the Chrome team at Google has. But there's nothing forcing them to wake up and face reality, so they keep making things harder and harder for the rest of us in their pursuit of dogmatic goals.

An example of that was the committee's dismissal of privacy concerns about using MAC addresses in IPv6 addresses.

Aren’t temporary addresses a thing now?

It astounds me that there's no non-invasive local solution for going to my router's (or any other appliance's) web page without my browser throwing warnings and calling it evil. Truly a fuck-up (purposeful or not) by all involved in creating the standards. We need local TLS without the hoops.

Simplest possible, least invasive, most secure thing I can think of: QR code on the router with the CA cert of the router. Open cert manager app on laptop/phone, scan QR code, import CA cert. Comms are now secure (assuming nobody replaced the sticker).
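A minimal sketch of what generating that sticker could look like, assuming Python with the third-party cryptography and qrcode packages; the device name, key type, and lifetime here are illustrative, not anything a vendor actually ships:

    # Sketch: generate a per-device self-signed CA and emit it as a QR code sticker.
    import datetime
    import qrcode
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "router-1234 local CA")])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )
    pem = cert.public_bytes(serialization.Encoding.PEM)
    qrcode.make(pem.decode()).save("router_ca_sticker.png")  # a ~1 KB PEM fits in one QR code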

The crazy thing? There are already two WiFi QR code standards, but neither includes the CA cert. There's a "Wi-Fi Easy Connect" standard intended to secure the network for an enterprise, and there's a random Java QR code library that made its own standard for just encoding an access point and WPA shared key (and Android and iOS both adopted it, so now it's a de-facto standard).

End-user security wasn't a consideration for either of them. With the former they only cared about protecting the enterprise network, and with the latter they just wanted to make it easier to get onto a non-Enterprise network. The user still has to fend for themselves once they're on the network.


This is a terrible solution. Now you require an Internet connection and a (non-abandoned) third-party service to configure a LAN device. Not to mention countless industrial devices where operators would typically have no chance to see a QR code.

The solution I just mentioned specifically avoids an internet connection or third parties. It's a self-signed cert you add to your computer's CA registry. 100% offline and independent of anything but your own computer and the router. The QR code doesn't require an internet connection. And the first standard I mentioned was designed for industrial devices.

Not only would that set a questionable precedent if users learn to casually add new trust roots, it would also need support for new certificate extensions to limit validity to that device only. It's far from obvious that would be a net gain for Internet security in general.

It might be easier to extend the URL format with support for certificate fingerprints. It would only require support in web browsers, which are updated much faster than operating systems. It could also be made in a backwards compatible way, for example by extending the username syntax. That way old browsers would continue to show the warning and new browsers would accept the self signed URL format in a secure way.
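A rough sketch of the verification such a browser could do, using only the Python standard library; the "fp--" userinfo convention in the example is invented here for illustration, not an existing standard:

    # Sketch: pin the server certificate to a SHA-256 fingerprint carried in the URL's
    # username field. The "fp--<hex>" convention is hypothetical.
    import hashlib, socket, ssl
    from urllib.parse import urlsplit

    def connect_with_pinned_fingerprint(url):
        parts = urlsplit(url)
        expected = parts.username.removeprefix("fp--").lower()
        ctx = ssl.create_default_context()
        ctx.check_hostname = False  # trust comes from the pin, not from Web PKI
        ctx.verify_mode = ssl.CERT_NONE
        sock = socket.create_connection((parts.hostname, parts.port or 443))
        tls = ctx.wrap_socket(sock, server_hostname=parts.hostname)
        der = tls.getpeercert(binary_form=True)
        actual = hashlib.sha256(der).hexdigest()
        if actual != expected:
            tls.close()
            raise ssl.SSLError("fingerprint mismatch: %s != %s" % (actual, expected))
        return tls

    # connect_with_pinned_fingerprint("https://fp--3b2a...@192.168.1.1/")

An old browser would treat the userinfo as an unknown username and keep showing today's warning, which is the backwards-compatible behaviour described above.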


Your router should use ACME with a your-slug.network.home domain name (a communal suffix would be nice, but more realistically some vendor-specific domain suffix that you could CNAME), and then you should access it via that, locally. Your router should ideally run split-brain DNS for your network. If you want, you can check a box and make everything available globally via DNS-SD.

Have you seen the state of typical consumer router firmwares? Security hasn’t been a serious concern for a decade plus.

They only stopped using global default passwords because people were being visibly compromised on the scale of millions at a time.


Good point. There are exceptions though. Eero, for example.

All my personal and professional feelings aside (they are mixed) it would be fascinating to consider a subnet based TLS scheme. Usually I have to bang on doors to manage certs at the load balancer level anyway.

I wonder what this would look like: for things like routers, you could display a private root in something like a QR code in the documentation and then have some kind of protocol for only trusting that root when connecting to the router and have the router continuously rotate the keys it presents.

Yeah, what they'll do is put a QR code on the bottom, and it'll direct you to the app store where they want you to pay them $5 so they can permanently connect to your router and gather data from it. Oh, and they'll let you set up your WiFi password, I guess.

That's their "solution".


Why should your browser trust the router's self-signed certificate? After you verify that it is the correct cert you can configure Firefox or your OS to trust it.

Because local routers by definition control the (proposed?) .internal TLD, while nobody controls the .local mDNS/Zeroconf one, so the router or any local network device should arguably be trusted at the TLS level automatically.

Training users to click the scary “trust this self-signed certificate once/always” button won’t end well.


I’ve actually put a decent amount of thought into this. I envision a Raspberry Pi-sized device with a simple front-panel UI. This serves as your home CA. It bootstraps itself with a generated key and root cert, and presents on the network using a self-issued cert signed by the bootstrapped CA. It also shows the root key fingerprint on the front panel. On your computer, you go to its web UI and accept the risk, but you also verify the fingerprint of the cert issuer against what’s displayed on the front panel. Once you do that, you can download and install your newly trusted root. Do this on all your machines that want to trust the CA. There’s your root of trust.

Now for issuing certs to devices like your router, there’s a registration process where the device generates a key and requests a cert from the CA, presenting its public key. It requests a cert with a local name like “router.local”. No cert is issued but the CA displays a message on its front panel asking if you want to associate router.local with the displayed pubkey fingerprint. Once you confirm, the device can obtain and auto renew the cert indefinitely using that same public key.

Now on your computer, you can hit local https endpoints by name and get TLS with no warnings. In an ideal world you’d get devices to adopt a little friendly UX for choosing their network name and showing the pubkey to the user, as well as discovering the CA (maybe integrate with dhcp), but to start off you’d definitely have to do some weird hacks.
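Under those assumptions, the issuance step itself is small; a sketch with Python's cryptography package, where ca_key/ca_cert come from the bootstrap step and the "router.local" name and ten-day lifetime are arbitrary choices:

    # Sketch: the home CA signs a short-lived leaf cert for a device's public key.
    # The private key never leaves the device; only the public key is submitted.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes

    def issue_device_cert(ca_key, ca_cert, device_pubkey, hostname="router.local"):
        now = datetime.datetime.now(datetime.timezone.utc)
        return (
            x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
            .issuer_name(ca_cert.subject)
            .public_key(device_pubkey)
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=10))  # short-lived, auto-renewed
            .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
            .sign(ca_key, hashes.SHA256())
        )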


This is an incredibly convoluted scenario for a use case with near-zero chance of a MITM attack. Security ops is cancer.

Please tell me this is satire.

I wonder if a separate CA would be useful for non-public-internet TLS certificates. Imagine a certificate that won't expire for 25 years issued by it.

Such a certificate should not be trusted for domain verification purposes, even though it should match the domain. Instead it should be trusted for encryption / stream integrity purposes. It should be accepted on IPs outside of publicly routable space, like 192.168.0.0/16, or link-local IPv6 addresses. It should be possible to issue it for TLDs like .local. It should result in the usual invalid certificate warning if served off a public internet address.

In other words, it should be handled a bit like a self-signed certificate, only without the hassle of adding your handcrafted CA to every browser / OS.

Of course it would only make sense if a major browser trusted this special CA by default. That is, Google is in a position to introduce it. I wonder if they may have any incentive though. (To say nothing of Apple.)
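The address-scoping rule itself would be easy for a browser to express; a sketch of the gate with the standard library, where the exact rule set (RFC 1918, link-local, loopback) is an assumption:

    # Sketch: only consult the hypothetical "local-only" root when the peer address
    # is outside publicly routable space.
    import ipaddress

    def local_only_root_applies(peer_ip):
        addr = ipaddress.ip_address(peer_ip)
        return addr.is_private or addr.is_link_local or addr.is_loopback

    assert local_only_root_applies("192.168.1.1")        # RFC 1918
    assert local_only_root_applies("fe80::1")            # IPv6 link-local
    assert not local_only_root_applies("93.184.216.34")  # a public address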


But what would be the value of such a certificate over a self-signed one? For example, if the ACME Router Corp uses this special CA to issue a certificate for acmerouter.local and then preloads it on all of its routers, it will sooner or later be extracted by someone.

So in a way, a certificate the device generates and self-signs would actually be better, since at least the private key stays on the device and isn’t shared.


The value: you open such a URL with a bog-standard, just-installed browser, and the browser does not complain about the certificate being suspicious.

The private key of course stays within the device, or anywhere the certificate is generated. The idea is that the CA from which the certificate is derived is already trusted by the browser, in a special way.


Compromise one device, extract the private key, have a "trusted for a very long time" cert that identifies like devices of that type, sneak it into a target network for man in the middle shenanigans.

If someone does that you’ve already been pwned. In reality you limit the CA to be domain-scoped. I don’t know why domain-scoped CAs aren’t a thing.

> If someone does that you’ve already been pwned

Then why use encryption at all when your threat model for encrypted communication can't handle a malicious actor on the network?


Because there are various things in HTML and JS that require https.

(Though having the browser just assume HTTP to local domains is secure, like it already does for http://localhost, would solve that.)


Yes

Old cruft dying there for decades

That's the reality and that's an issue unrelated to TLS

Running unmanaged compute at home (or elsewhere ..) is the issue here.


It is reasonable for the WebPKI of 2025 to assume that the Internet encompasses the entire scope of its problem.

If web browsers adopted DANE, we could bypass CAs and still have TLS.

A domain-validated secure key exchange would indeed be a massive step up in security, compared to the mess that is the web PKI. But it wouldn't help with the issue at hand here: home router bootstrap. It's hard to give these devices a valid domain name out of the box. Most obvious approaches have problems with either security or user-friendliness.

Or, equivalently, it's being pushed because customers of "big players", of which there are a great many, are exposed to security risk by the status quo that the change mitigates.

It makes the system more reliable and more secure for everyone.

I think that's a big win.

The root reason is that revocation is broken, and we need to do better to get the security properties we demand of the Web PKI.


> It makes the system more reliable

It might in theory but I suspect it's going to make things very very unreliable for quite a while before it (hopefully) gets better. I think probably already a double digit fraction of our infrastructure outages are due to expired certificates.

And because of that it may well tip a whole class of uses back to completely insecure connections because TLS is just "too hard". So I am not sure if it will achieve the "more secure" bit either.


And rather than fix the issues with revocation, it's being shuffled off to the users.

Good example of enshittification


It makes systems more reliable and secure for operators who can leverage automation. By the same token, it adds a lot of barriers for things like embedded devices, learners, etc., who might not be able to automate certificate renewal.

Putting a manually generated cert on an embedded device is inherently insecure, unless you have complete physical control over the device.

And as mentioned in other comments, the revocation system doesn't really work, and reducing the validity time of certs reduces the risks there.

Unfortunately, there isn't really a good solution for many embedded and local-network cases. I think ideally there would be an easy way to add a CA that is trusted only for a specific domain or local IP address; then the device can generate its own certs from a local CA. And/or add trust for a self-signed cert with a longer lifetime.
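X.509 already has a mechanism pointing in that direction: a name-constrained CA. A sketch of expressing the constraint with Python's cryptography package, using the RFC 8375 home-network domain home.arpa as the permitted subtree (how consistently clients enforce name constraints is a separate question):

    # Sketch: constrain a local root CA to one domain subtree via X.509 NameConstraints.
    from cryptography import x509

    constraint = x509.NameConstraints(
        permitted_subtrees=[x509.DNSName("home.arpa")],  # only names under home.arpa
        excluded_subtrees=None,
    )
    # ...added to the CA certificate at creation time:
    # builder = builder.add_extension(constraint, critical=True)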


This is a bad definition of security, I think. But you could come up with variations here that would be good enough for most home network use cases. IMO, being able to control the certificate on the device is a crucial consumer right

Real question: What is the correct way to handle certs on embedded devices? I never thought about it before I read this comment.

There are many embedded devices for which TLS is simply not feasible. For remote sensing, when you are relying on battery power and need to maximise device battery life, then the power budget is critical. Telemetry is the biggest drain on the power budget, so anything that means spending more time with the RF system powered up should be avoided. TLS falls into this category.

Yes, but the question is about devices that can reasonably run TLS.

The answer is local ACME, with your router issuing certs for your ULA prefix or “home zone” domain.


"Suffer" is a strong word for those of us who've been using things like Let's Encrypt for years now without issue.

Unless I misunderstood, GP mentions that the problem stems from WebPKI's central role in server identity management. Think of these cert lifetimes as forcefully being signed out after 47 days of being signed in.

> easier for a few big players in industry

Not necessarily. OP mentions that more certs would mean bigger CT logs, and more frequent renewals mean more load. Like with everything else, this seems like a trade-off. Unfortunately, for you & me, as customers of cert authorities, 47 days is where the agreed cut-off now is (not 42).


Seems like a weird pastime. Like recording spelling misttakes. Sniglets are a much more smarter way to spend your time. Rather than just a brain turd, they're a placental ejection of humor and common smarts.

The truest form of hacking was when you could fuck with public utilities using parts you got at Radio Shack. I wish I still had my DTMF dialer, it was so cool. Maybe kids today are buying parts to build GSM base stations. Maybe they'll bring back pirate radio, once all the analog radio bands have been replaced with digital. I just hope they get to experience that thrill and wonder at the power that hidden knowledge brings.

Autocracy, or some form of it, has been the dominant form of governance throughout the history of human civilization. That's not gonna change just because we got Apple watches. Democracy was a really nice experiment, but it's over now.

Ironically enough, most of those regimes also fell. Even autocracies realized from millennia of history that it's easier to control people when they feel like they have power. Or distract them with circuses.

Turns out Apple watches can change and stabilize such autocracies.


Yeah, for about a century or two. This is not the first time all of this has happened. Read your history.

I first learned about Temu when someone I barely knew messaged me saying they needed me to sign up to Temu under some referral code so they could get bonus points to shop more. They were addicted to shopping, and Temu was the most addictive thing on the internet. Like the Dollar Store, Amazon, slots, and a pyramid scheme, rolled into one.

Hmm, there's supposed to be a Tasks [reminders] feature in ChatGPT, but it's in beta (I don't have access to it). Whenever it gets released, you could make some kind of "router" that connects to different communication methods and connect that up to ChatGPT statefully, and you could just "speak"/type to ChatGPT from anywhere, and it would send you reminders. No need for all the extra logic, cron jobs, or SQLite table (ChatGPT has memory across chats).

Or, y'know, just build a bullet train.

USA. Biggest economy on earth. Most powerful nation. Third largest nation by population. Could maybe build one bullet train, like the 20 other nations that already have them in service, and the 13 other nations that have them in development.

Or just settle for mediocrity. Whatever.


IMO:

The US suffers from the notion of exceptionalism spawned from its position of massive advantage after WWII, as well as a deep-seated aversion to mass transit born out of the backlash against desegregation.

I'm probably missing many smaller factors, but I'd be interested to know if someone thinks that I've incorrectly identified those two as major factors.


Not from North America. But I disagree that the exceptionalism started post WWII.

How do you explain the country's ability to perform civil engineering feats prior to WWII? The Erie Canal, Transcontinental Railroad, Panama Canal, Brooklyn Bridge, Empire State Building, and Golden Gate Bridge spring to mind as feats of engineering that few other countries (if any) could rival. There are obvious examples post-WWII (Manhattan Project, Apollo program, Interstate Highway System), but for all of the USA's pitfalls, it does have an incredible history of civil engineering projects prior to WWII.


The US shifted its focus from domestic to international politics after WWII. It was brought in as an arbiter of world peace and, in a lot of ways, stepped up to the task. Military expansion and spending went through the roof, and the Cold War and Vietnam didn't help build public trust in government to do big things at home. Behind the scenes, though, politicians could work with other nations to organize the reality that we all live in today in the West. Later, politicians began organizing free trade, and technology became the next frontier. Why spend hundreds of millions on a bridge when I can send an email instead of a letter? The USA really is a "marvel" in that most of her problems were caused and exacerbated by success and enough competent people in power to keep things moving.

It's hard to discuss the United States without mentioning Trump who believes that undermining the past 100 years of Neoliberalism will bring America back to her "glory days" while completely ignoring the reality on the ground that led from where they were then to where America is today.

So maybe there will be more public works projects in the future for America, but I fear that they will be more focused on appeasing dear leader instead of meaningfully improving the lives of the average American citizen. But until someone turns on the lights and shuts off the music, America will continue to spiral and cry about "unfairness" while her created reality crumbles due to lack of maintenance and care about the subtle realities on the ground that were once central to her rise in the first place.


No, American exceptionalism is not a post-WWII idea. It was core to the expansion of the country in the 19th century.

I just realized that modern web applications are a group form of procrastination. Procrastination is a complex thing. But essentially, it's putting something off because of some perceived pain, even though the thing may be important or even inevitable, and eventually the procrastination leads to negative outcomes.

Web applications were created because people were averse to creating native applications, for fear of the pain involved with creating and distributing native applications. They were so averse to this perceived pain that they've done incredibly complex, even bizarre things, just so they don't have to leave the web browser. WebSockets are one of those things: taking a stateless client-server protocol (HTTP) and literally forcing it to turn into an entirely new protocol (WebSockets) just so people could continue to do things in a web browser that would have been easy in a native application (bidirectional stateful sockets, aka a tcp connection).
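The "forcing" happens in one small handshake: the connection starts as an ordinary HTTP request and, after a 101 response, the same TCP socket becomes a bidirectional frame stream (RFC 6455). A sketch with the Python standard library; the host and path are hypothetical and frame encoding is omitted:

    # Sketch: the HTTP Upgrade request that turns a TCP connection into a WebSocket.
    import base64, os, socket

    key = base64.b64encode(os.urandom(16)).decode()
    request = (
        "GET /chat HTTP/1.1\r\n"
        "Host: example.local\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        "Sec-WebSocket-Key: " + key + "\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    )
    sock = socket.create_connection(("example.local", 80))
    sock.sendall(request.encode())
    print(sock.recv(4096).decode())  # expect "HTTP/1.1 101 Switching Protocols"
    # After the 101, WebSocket frames flow both ways on this same socket.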

I suppose this is a normal human thing. Like how we created cars to essentially have a horseless buggy. Then we created paved roads to make that work easier. Then we built cities around paved roads to keep using the cars. Then we built air-scrubbers into the cars and changed the fuel formula when we realized we were poisoning everyone. Then we built electric cars (again!) to try to keep using the cars without all the internal combustion issues. Then we built self-driving cars because it would be easier than expanding regional or national public transportation.

We keep doing the easy thing, to avoid the thing we know we should be doing. And avoiding it just becomes a bigger pain in the ass.


I agree with a lot of that. But, it's a lot easier to get someone to try your web app than install a native app. It's also easier to get the IT department to allow an enterprise web app than install a native app. Web apps do have some advantages over native apps.

Yes, all of that is the reason why we procrastinate. "Easy" is the excuse we give ourselves to do the things we would otherwise have no justification for, and avoid the difficult things we know would be better. It's not my fault; it's not my responsibility; I shouldn't have to do extra work; it's too complicated; it'll be hard; it'll be long; it'll be painful; it's not perfect; it might fail. No worries; there's something easier I can do.

Thus we see the flaws in the world, and shrug. When someone else does this, we get angry, and indignant. How dare someone leave things like this! Yet when we do it, we don't make a peep.


You left out the part where you explain why native apps are so much better for users and developers than web apps?

I can't tell why you think WebSockets are so bizarre.


Many advantages; for example, web apps get suspended if you’re not browsing the tab. But I do agree it’s much more attractive to write web apps, mainly for portability.

Native apps also get suspended, or can be, like on iOS (not being a fanboy, I appreciate this mechanism). Native desktop apps can also be almost suspended while not in use.

Web apps just make it way easier to do anything (rarely well), so many people are building them without real engineering or algorithm knowledge, producing trash every day. The article speaks in the same voice: it shows one protocol as completely bad, mentions only the issues that both approaches have, and silently omits those issues when describing „the only way, craft, holistic, Rust and WASM based solution, without a plug”.


> Native apps also get suspended, or can be, like on iOS (not being a fanboy, I appreciate this mechanism).

On iOS web apps get suspended very aggressively, and there is no way for a web app to signal to the browser to not suspend it. I never developed native mobile apps, but I assume it’s less aggressive for native apps and/or native apps have a way to prevent themselves from being suspended. This doesn’t seem to be an issue on desktop though.


> bidirectional stateful sockets, aka a tcp connection

Which is not "easy" to do over the internet, so the native app folks ended-up using HTTP anyway. (Plus they invented things like SOAP.)


TCP is easy to do over the internet. Did you mean the middleboxes? Ah, the middleboxes, the favorite scapegoat of the world wide web's cabal of committees. You'd think they were absolutely powerless. Like firewalls and application proxies are a fundamental principle of nature; unable to be wrestled, only suffered under. Yet the web controls a market share 500 times larger.

