"Alternatively, if companies cannot handle the rotation, then they likely should re-evaluate if WebPKI is even appropriate for their use-case."
I hate hearing this awful take, as if every IT organization has the same neat and tidy systems deployed as theirs. Never had to deal with third-party SaaS vendors whose certificate pinning requires a service ticket to change, never had hardware devices or appliance-based software images, each with its own web interface for updating certs...
Yes, companies should have a plan to do their minimum yearly certificate rotations. Yes, those companies should have a security plan to rotate certificates affected by an incident, but in those cases the business users are ok with an outage to remediate a real security issue.
But what happened here is that DigiCert invalidated the entire domain's worth of certs. All those service.companyname.com certs, or duplicates under that domain validation, were affected in bulk. In some companies there could be thousands of certs under that domain. DigiCert screwed up their system implementation and made their customers suffer.
"It's really disheartening that publicly trusted CAs just ignore their contractual obligations however they see fit."
It's also disheartening to see browsers in the CA consortium ignore the CA resolutions as well. Like how everyone voted for 2-year certs and Apple did their own thing anyway. Did any punishment for Apple ever come? So why pick on the others?
Stuff like this is why some parties have been calling for increasingly-shorter cert validity. When a cert is valid for several years it allows companies to develop an increasingly complex workflow around deploying them, sometimes taking weeks and involving dozens of parties to roll them out. This is in turn used as an excuse by CAs to completely ignore the industry standards.
Those SaaS vendors probably shouldn't be doing cert pinning to begin with. If you don't trust your root store either implement support for CAA or DANE, no need to roll out your own workflow. Those hardware devices should either 1) not use publicly trusted certs, 2) renew their own certs, or 3) have an API to automatically update certs.
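For what it's worth, checking a domain's CAA policy takes a couple of lines with Node's built-in resolver. A rough sketch (the domain here is a placeholder):

    // Sketch: list which CAs a domain authorizes via its CAA records.
    import { resolveCaa } from "node:dns/promises";

    async function printCaaPolicy(domain: string): Promise<void> {
      try {
        const records = await resolveCaa(domain);
        // Each record's issue/issuewild property names a CA allowed
        // to issue certs for this domain.
        for (const r of records) console.log(domain, r);
      } catch {
        // No CAA records: any publicly trusted CA may issue.
        console.log(domain, "has no CAA policy");
      }
    }

    printCaaPolicy("example.com");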
The only reason they're still getting away with it is because doing it manually once a year isn't horribly painful. If 90-day validity becomes the industry standard, pain-free certificate renewal turns into a must-have for all new contracts.
> Stuff like this is why some parties have been calling for increasingly-shorter cert validity. When a cert is valid for several years it allows companies to develop an increasingly complex workflow around deploying them, sometimes taking weeks and involving dozens of parties to roll them out. This is in turn used as an excuse by CAs to completely ignore the industry standards.
"several years"? The certs we are getting have one-year lifetimes. It used to be two years, but was reduced to one year some time ago (I don't remember exactly when).
Also, I don't think the problem is cert lifetimes, I think the problem is having so many certs expiring all at the same time. A lot of IT folks are coming off the major pain of the CrowdStrike crash. This is similar: You suddenly have a very large number of certificates that are going to stop working in less than 24 hours, and you have to respond.
Sure, you could say "Well, companies should be resourced to be able to handle that at any point." Except that's not the reality right now.
I think they're suggesting that 1-year certificates are still at the point where people can just manually rotate them as they expire. If you keep reducing the lifespan, to say 90 days, that starts to tip the scale: you'll be spending so much human time manually rotating certificates that it will make financial sense to just automate the process.
If the process is automated then revocation can be automatically handled as well (so long as ARI gains traction).
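For anyone unfamiliar, ARI is the mechanism by which a CA can tell an ACME client to renew early, including mid-lifetime after a revocation event. A rough sketch of the client side, assuming the CA's directory advertises the endpoint under "renewalInfo" as the spec describes (directoryUrl and certId are placeholders; certId is derived from the cert's Authority Key Identifier and serial):

    // Sketch: poll the CA's ACME Renewal Information endpoint.
    async function suggestedRenewalWindow(directoryUrl: string, certId: string) {
      const dir = await (await fetch(directoryUrl)).json();
      const res = await fetch(`${dir.renewalInfo}/${certId}`);
      const info = await res.json();
      // suggestedWindow.start/end bracket when the CA wants the cert
      // renewed; a window that's already open (or past) means: renew now.
      return info.suggestedWindow as { start: string; end: string };
    }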
I work with customers that typically take 3 or 4 days to either acquire or renew a cert. Even though they are on one of the major cloud providers with automated certs, they refuse to use those mechanisms due to policy. They would rather send everything, including private keys, through email. They also take several days, sometimes weeks, to update a DNS entry. Welcome to modern IT.
Trying to deploy SaaS apps for customers, it sometimes takes 3-4 weeks to get them to make any DNS changes; then at the last minute they CC us into an email thread with SquareSpace support for some reason (their DNS is on Cloudflare...).
I think the issue is less with SaaS vendors doing cert pinning and more that many SaaS vendors offering deployment on customer domains often rely on those same customers to make the DNS changes for validation, and whenever you introduce another party like that it's exponentially more difficult to actually get things done in a timely manner.
IMO they should just use HTTP challenges to avoid this whole thing, but it's a pretty common pattern I see with a lot of SaaS vendors, even major fintechs.
That's one option. Alternatively, they could just delegate the _acme-challenge with a CNAME.
If clientportal.somebank.com is actually run by somesaas.com, they can define CNAME _acme-challenge.clientportal.somebank.com --> [some_key].domainvalidations.somesaas.com
When the SaaS vendor needs to request a new cert, they set the appropriate TXT record on [some_key].domainvalidations.somesaas.com.
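Roughly what that looks like end to end, reusing the hypothetical names above (Node's resolver here just to illustrate what the CA's validation does):

    // The bank publishes one static record, once:
    //   _acme-challenge.clientportal.somebank.com CNAME [some_key].domainvalidations.somesaas.com
    // At issuance time the SaaS vendor sets the ACME token as a TXT record
    // in its own zone; the CA follows the CNAME and finds it there.
    import { resolveCname, resolveTxt } from "node:dns/promises";

    async function followAcmeDelegation(host: string): Promise<string[][]> {
      const [target] = await resolveCname(`_acme-challenge.${host}`);
      return resolveTxt(target); // the token(s) the CA validates
    }

The bank's DNS team is involved exactly once; every renewal after that happens entirely in the vendor's zone.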
"Never had to deal with 3rd party SaaS vendors certificate pinning requiring service tickets to change"
I think this tends to fall into "probably shouldn't have been using Web PKI". I can't immediately think of a reason why you'd need a publicly trusted certificate if you're pinning a specific public key... at that point, who cares who signed it?
I do agree that there are real costs with rotating certificates that ultimately may make it impossible for an organization to complete that work in the revocation window. That is very much an area that needs further automation developed and more importantly, for it to actually be adopted. I believe that's what ACME Renewal Information is attempting to address.
"but in those cases the business users are ok with an outage to remediate a real security issue"
Ideally yes, but that might be the same point you find out the certificate was used in some critical system (let's say Air Traffic Control like a previous CA tried to claim). They still may very well not be okay with the revocation despite the security issue. _Those_ are the people that need to stop using these certificates and there's really no way to weed them out until a revocation actually needs to occur.
"Digicert screwed up their system implementation and made their customers suffer."
And those customers are right to be mad at DigiCert. They probably don't have a legal basis to challenge as the subscriber agreement explicitly permits immediate revocation without prior notice, but they can certainly take their business elsewhere.
"It's also disheartening to see browsers in the CA consortium ignore the CA resolutions as well. Like how everyone voted for 2 year certs and Apple did their own thing anyways. Any punishment for Apple come? So why pick on the others?"
Admittedly I'm not very familiar with the various root programs and the obligations they have with CAs, but it doesn't seem unreasonable that root programs would be free to impose stricter requirements than the BRs.
Though I do find it two-faced for Apple to vote for Ballot 193 only to then impose a stricter requirement. At the very least they should have abstained.
"I can't immediately think of a reason why you'd need a publicly trusted certificate if you're pinning a specific public key"
Inter-finance systems mostly, some government. Sometimes they pin the CA issuer, sometimes it's IP-based (although with dynamic cloud IPs that is disappearing), sometimes it's inside a VPN, and other times just the issued certs themselves. The same service handles public users while making bidirectional API calls to other interfaces that are more locked down.
Not everyone is a monolithic copy-and-paste WordPress hosting site, a cash-rich cloud-native startup, or a massive Google/Amazon/Microsoft with huge teams to orchestrate everything using architecture and systems they developed themselves. Private PKI? Even more orchestration layers for enrollment, especially in places with BYOD.
There is no point to low-expiry certs anyway. If a server is hacked, the primary concern is what data they were able to exfiltrate and for how long - not that a keypair was maybe stolen to be used in a very complicated and unlikely attack to intercept some of the same data they already stole.
Your ATC comment seems to continue your theme that everyone should run a private PKI instead. Airports are full of interconnections between themselves, other airports, airlines, ground crews, satellite relays, and weather monitoring systems. So then all these parties need to do all the same things as the public PKI - root key signing, cert issuance logging, a secure interface for issuing certs, developing trust across all parties and getting them to install your root in all their systems... or just use the public PKI services, which already do all that. You are just reinventing the wheel and will probably get it wrong. Maybe for some strictly backend systems, or things like server out-of-band management, it works well, but not for anything involving multiple companies.
The CAs that work with large and complex businesses understand these complexities and voted for the 2-year duration. The owners of the browsers just wanted to further their own cloud bottom lines.
"Your ATC comment seems to continue your theme that everyone should run a private PKI instead."
Not the OP you replied to, but I want to add some nuance: there's a vast solution space between using the WebPKI and rolling your own. The enterprise focused CAs have non-WebPKI CAs and CA-as-a-service offerings, both with way longer certificate lifetimes and way longer revocation periods.
If you don't need WebPKI-compatible certs (because you're not offering services to the general public) and your org cannot abide by the WebPKI rules requiring 24 hours max before revocation, you are doing something very wrong when you use the WebPKI.
I think part of the issue could be with the naming - 'public PKI'. I'd argue that doesn't really exist anymore - the nomenclature in use for some time now is 'web PKI'.
It's now ostensibly an ecosystem for use by modern, updated clients - browsers and OSs - for TLS. clientAuth will be gone from the webPKI soon, too, I hope.
It's fast becoming a more fluid, shifting ecosystem. We'll be on 90-day leaf certs very soon, shorter after that. Roots and intermediates will have much reduced lifetimes. New guidelines and regulations change things rapidly. Mass revocation events like this one.
In the ATC example - all parts of that ecosystem should be managed to the point that distributing a private root is relatively easy. It shields them from events like this. As another commenter has pointed out - running a private CA (or what might be known as an 'ecosystem CA' like we see in IoT with Matter, airlines with CertiPath, wireless with WinnForum) can be done 'as-a-service' easily, be it from a cloud vendor or CA or similar provider.
If folks continue to use the web PKI for non-web purposes, then they have to be in a position to deal with challenges like short-lifetime certs, 24-hour revoke/reissuance windows, and frequently-updated trust stores.
Most of the agreements and T&Cs for public CAs already forbid use in 'critical' systems anyway, so you're effectively agreeing to these kind of 24-hour changes from the start.
The entire Jacobin article focused only on whether fewer addicts dying makes it a "working" policy, but decriminalization resulted in significantly more people hooked on drugs in those places than before. How is it a positive that more individuals are now subject to a destroyed life?
It also ignores the destruction of the community and the increased violence residents face, since the police can no longer arrest these individuals and they have become much more brazen. In my city, where it certainly isn't decriminalized, the police refused to arrest people camped out smoking meth in train stations, to the point that many people refused to take transit. Numerous times I had to walk through a cloud of meth smoke to get out of the station because there were zero repercussions for open drug use. Open fencing, people getting robbed, stepping on needles, and random attacks.
By almost all metrics hard drug decriminalization is a failure.
Accessing the walkway to the front door and knocking on the front door is not trespassing, even if there are signs posted. Wandering off the path, going around the back, or looking in windows is trespassing - but accessing the front door to deliver, ask for directions, or invite someone to tea is not. You can be asked to leave, which then could be turned into trespassing. Decent write-up about it: https://www.radford.edu/content/cj-bulletin/home/june--2017-...
> You can be asked to leave, which then could be turned into trespassing.
The OP definitely asked for the deliveries to stop, and even had a confrontation with the delivery driver. Seems pretty clear that he's expressed his preference that they not enter his property.
Still not free-riding when you consider the data they are collecting from viewers even with ad-blockers. People still have accounts to save channels/videos, and lots of people or households have Android phones, which makes it stupidly easy to link them to people, places, and purchases. There is significantly more value they still gain from it even if YouTube itself operates at a loss.
Maybe if they weren't allowed to collect so much information, or had to pay back the users they collect data on, I could see the point that ad-blocking is free-riding.
You are only looking at one small part. Total immigration affecting housing & jobs is regular immigrants + temporary foreign workers + students (who often also work, and have had most work restrictions removed). Each group also has its own path to citizenship. Our government is unfortunately targeting >500,000 per year even though housing cannot catch up. When people have kids, those kids take almost 20 years to need their own space; direct adult immigration like this takes a housing unit away from an already tight market.
Everything about this sounds terrible for mobile, lossy, or even medium-latency connections. Non-blocking background updates are unnoticeable, but imagine your website jamming like a stuck video every time you click or scroll.
He describes a "please wait buffering" future of web software. No thanks.
> Do we really believe that every one of our users is going to have a device capable of digesting 100 kB of JSON and rendering a complicated HTML table faster than a server-side app could on even a mid-grade server?
Yes. This really isn't hard. Pretty much any smartphone can do this with blissful ease. And it scales horizontally because your server can just worry about getting stuff from the database and handing it out.
Exactly. Our app renders highly detailed 3D anatomy on the canvas. The largest model is a 1 MB JSON file with 57 MB (!!!) of assets. We've never had a problem with mobile devices; they're shockingly capable.
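It's easy to check this claim yourself. A sketch with synthetic data (TypeScript, runnable in any browser bundle): it parses roughly 100 kB of JSON and builds a 1,000-row table, and even on modest phones the whole thing is far cheaper than a single network round-trip:

    // Sketch: parse ~100 kB of JSON and render it as a table client-side.
    const rows = Array.from({ length: 1000 }, (_, i) => ({
      id: i, name: `item-${i}`, value: Math.random(),
    }));
    const json = JSON.stringify(rows); // synthetic stand-in for a fetch() body

    const t0 = performance.now();
    const parsed = JSON.parse(json) as typeof rows;
    const table = document.createElement("table");
    for (const row of parsed) {
      const tr = table.insertRow();
      tr.insertCell().textContent = String(row.id);
      tr.insertCell().textContent = row.name;
      tr.insertCell().textContent = row.value.toFixed(3);
    }
    document.body.append(table);
    console.log(`parsed + rendered in ${(performance.now() - t0).toFixed(1)} ms`);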
Turbo[links], Stimulus, etc. ARE non-blocking background updates. It's HTML over the wire instead of JSON. [1] I don't think this is what you think it is.
The idea is that HTML is just slightly heavier, but the advantage is that you can render it on the server and don't need all that middleware or a serialization layer.
With this approach you can keep all of the traffic inside a WebWorker, so nothing blocks.
You also get the benefit of not having to open a new connection for every request, and automatic reconnects when something goes wrong. The connection is most likely already established before you even need it.
Yes, it's a long-lived TCP connection. It's not mandatory but the whole point of the protocol is to make one connection and send/receive all your assets over it instead of making more connections (which take time to setup and slow down page rendering).
It keeps it open as long as it is used and usually a bit past that. So if you have any traffic or are making periodic requests then it won't close the connection. If you are using SSE that can also use the same shared connection for streaming realtime updates to the client.
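The client half of that SSE pattern is tiny. A sketch (the "/events" path here is made up); over HTTP/2 the stream multiplexes onto the same connection as the rest of your requests, and the browser reconnects on its own:

    // Sketch: subscribe to server-sent events on a long-lived connection.
    const source = new EventSource("/events");

    source.onmessage = (ev: MessageEvent<string>) => {
      // Each server push arrives here; no polling, no new connections.
      console.log("update:", ev.data);
    };
    source.onerror = () => {
      // EventSource retries automatically; this fires while reconnecting.
      console.log("connection lost, retrying...");
    };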
Of these things, I've only played with LiveView (I'm a web dabbler, not a pro, but an Erlang/Elixir fan, so it was interesting to me). At least through the material I've worked through, it falls back to a usable mode when JS/WS themselves fail on the client side. In theory, the only required JS is the JS needed to establish the WS connection, communicate with the server, and issue the updates. But if the WebSocket fails, you just end up with a full page load instead of the diff being applied. (NB: A particular site/application may have more JavaScript, but for what LiveView offers out-of-the-box there's not much JavaScript that's included or needed.)
I had this thought as well, but having used LiveView in Phoenix, this really isn't the experience. Since the updates are really small, and as long as you only push them when you want updates from the backend anyway, the experience is just as smooth as any SPA, even on a shitty mobile connection.
It's horrible for environments with split-horizon DNS. It presumes that the only networks that should exist are home users consuming public Internet cloud services.
For privacy, it's a question of whether I trust my obnoxious non-US ISP or a US-based .com with seeing all my browsing habits via DNS queries. At least I'd have legal recourse against my ISP in my own country, and there are slightly better privacy laws.
It was a fine simple solution until DoH. In some internal environments the internal traffic volume can be much higher than the few services that might be publicly exposed.
Sure, there are lots of ways you could do it - get a fat edge firewall to hairpin the traffic and support Internet access, but you end up paying a lot more for all the threat licenses on the oversized edge. You could add many more tiers, maybe more translations or overlays... but why bother with a lot more complexity, or especially more cost, just because someone saw a threat in another country and is trying to solve a problem that does not apply to most?
Furthermore, there can be internal-only hostnames that are now getting probed and exposed externally. Exfiltration to a US company in the name of "security".
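To make the split-horizon problem concrete, here is roughly what a DoH lookup does (Cloudflare's JSON endpoint as the example; the internal hostname is hypothetical). The query goes straight to the public resolver over HTTPS, so your internal resolver never sees it and the internal name leaks out:

    // Sketch: a DNS-over-HTTPS lookup against Cloudflare's JSON API.
    async function dohLookup(name: string): Promise<unknown> {
      const res = await fetch(
        `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=A`,
        { headers: { accept: "application/dns-json" } },
      );
      // For an internal-only name this returns NXDOMAIN (no Answer field),
      // while the lookup itself has already been exfiltrated upstream.
      return (await res.json()).Answer;
    }

    dohLookup("intranet.corp.example").then(console.log);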
Hiding a DNS entry doesn't do anything to hide the machine. How long do you think a newly exposed IP address, without a DNS entry, will last before it is probed?