Filippo Valsorda, from the Go security team, created mkcert some time ago → https://github.com/FiloSottile/mkcert which makes this process much easier. I set up a home laboratory for several HTTPS websites and it all works as expected without too much hassle.
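For a flavor of how little hassle it is, a typical session looks something like this (the hostnames are just examples):

```
$ mkcert -install          # creates a local root CA and adds it to the system/browser trust stores
$ mkcert homelab.test "*.homelab.test" localhost 127.0.0.1
```

After that, point your web server at the generated cert and key files and every browser on that machine trusts them.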
If you have a dedicated piece of hardware like a Raspberry Pi and you are going to be the only person ever using it, for private lab purposes, I don't see any reason to complicate it further with YubiKeys or hardware RNGs.
But why? Do you have any specific knowledge to the effect that a better hardware RNG would somehow benefit your private lab?
As it is, Linux already has a completely acceptable source of entropy. The most significant drawbacks are that it could potentially be observed by somebody else with access to the machine and that it provides only a very limited amount of entropy. Neither is a problem when you use a private RPi to generate a dozen keys for your home lab.
I think this is a great read. The point of this is learning, as you probably saw in the article yourself. I don't see the point of questioning his design: if you don't like it, you always have the option to change it and/or implement it as you think best. IMHO Carl did some really great work here.
I use cert-manager for internal mTLS and am satisfied with it. I found it surprisingly easy to use compared to things that try to be automatic, like linkerd and istio. (Plus, I can verify the authenticity of the connection at the application layer, and make appropriate decisions rather than having to trust "oh yeah, the sidecar has it under control, nothing to see here".)
It's a downgrade in security compared to pressing a physical button on a YubiKey every so often... but I am OK with things being automatically renewed. (It basically means, if you own the apiserver, you own the network, which is a tradeoff I can live with. Compare this to "if you can intercept any packet, you own the network", which is what non-mTLS is.)
We use it in prod for secret storage and management (and maybe someday for CA), and I use it at home for my own CA; btw, it's surprisingly simple and effective.
I haven't used it seriously for that purpose so I can't comment :). Just my home setup. Their other products have their rough edges though so I can understand.
Although I do not deny the necessity of certificate authorities for convenience, I do not understand why using CAs is the only option. Why does the TLS protocol not allow for using a key pair which is agreed upon between a server and a client beforehand, like in SSH, where a public key to be used in a connection is placed on the server prior to the connection?
There are many CAs out there, and in the event of China or Russia hacking into one of them, it would enable them to perform man-in-the-middle attacks. I'd like to eliminate such a possibility, but the TLS protocol requires me to trust a certificate authority. I might just be a conspiracy theorist, but I suspect the reason it's impossible to use TLS without trusting a third party called a certificate authority is exactly that someone needed to leave a way to do MITM attacks.
I don't think TLS cares why the certificate is trusted, validating certificates is higher on the stack, and you've probably seen the browser UX for accepting an invalid certificate. I know I've added a non-CA cert to my trusted list (on Windows) and gotten the result you are describing.
Firefox is the only browser that ships with a built-in trust store. Other browsers use your operating system’s trust store and you can add & remove trusted CA certs from that trust store using OS-specific utilities.
The big three root store programs are run by Apple, Microsoft, and Mozilla/NSS. Most (all?) Linux distros are based on the Mozilla store. Until recently, Google also used Mozilla’s store for things like ChromeOS and Android. They just recently announced that they’re gonna start running their own program though.
FWIW the browsers seem to do a pretty good job of policing CAs. Probably better than most end-users would do.
Browsers do a crappy job in general for any CA use case that isn't HTTPS on public websites. Guidelines for the WebPKI are, of course, very web-centric. E.g. short lifetimes may be acceptable there, and automated frequent reissuance may be an option. But if you look at, e.g., email and auth certs, possibly stored on a smartcard, things are quite different. Lifetimes should be long and can be long, because the key is used rarely (compared to webservers) and may be stored in dedicated smartcards. Verification is another thing that's totally different for an email address or a person's name.
I’m on iOS and don’t have an Android or ChromeOS device handy. I don’t see a way to remove a cert from Apple’s iOS trust store (settings just tells me what store version I’m running). There may be a way to do it using mobile device management (MDM) tools.
On Android (Galaxy Note9 in my case), you can disable any CA certificate from being used. There's a "view security certificates" screen with the ability to disable each individually.
It would be nice to use a hybrid: (only) trust the CA on first use. But I guess in practice some random company is much more likely to misplace their keys than you are to be MITMed by a CA.
Also, the Web PKI model has no real granular authorization when it comes to which CA can issue for which domain. A trusted CA can issue for any domain. So if you TOFU in my CA to connect to my website you’re also allowing me to issue for google.com.
Obviously this is all addressable in theory, but now you’d need some kinda policy system baked in pretty much everywhere.
Your website hands me a cert. I have never seen it before so I make sure CA says it's legit. From then on I keep using that same cert to connect to you, and CA no longer matters.
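Sketched with openssl (the self-signed cert here just stands in for whatever cert your website handed me; all file names are made up):

```shell
# Stand-in for "the cert the site handed me on first visit"
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout site.key -out site.crt -subj "/CN=example.test"

# First use: record (pin) the certificate's SHA-256 fingerprint
openssl x509 -in site.crt -noout -fingerprint -sha256 > pinned.txt

# Later visits: recompute and compare; any mismatch means the cert changed
openssl x509 -in site.crt -noout -fingerprint -sha256 | diff - pinned.txt \
  && echo "pin matches, CA no longer consulted"
```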
I haven't checked/verified recently but from your comment I'm guessing that the major browsers still don't support (i.e., enforce) the Name Constraints extension?
There are CAA records in DNS, but those are far too weak. The CAs are supposed to check them at issue-time. To be useful, the clients would have to check them at acceptance-time.
That wouldn't quite work the way you think it would...
The CAA record is useful only at the time a certificate is issued (signed) by a CA.
A client has no way to know what the CAA record was at the time the certificate was issued -- a browser cannot ("at acceptance-time") use the current value of the CAA record to determine whether a certificate was properly issued or not.
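For reference, CAA records in a zone file look like this (the CA and contact address are just examples); they constrain issuance only, not what clients accept:

```
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```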
You don't have to use a third party, you just have to specify an authority.
If you say the self-signed certificate is the authority, that's no problem.
In my private lab environment I have one certificate (for everything).
The certificate is installed on all servers, and even the clients use the same certificate to authenticate, and everyone trusts the cert as an authority.
You'd never run a production setup like this, but there is nothing stopping you.
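A minimal sketch of that single-cert setup with openssl (names and lifetime are arbitrary, lab use only; -addext needs OpenSSL 1.1.1+):

```shell
# One self-signed cert that every machine both serves and trusts
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout lab.key -out lab.crt -subj "/CN=lab.internal" \
  -addext "subjectAltName=DNS:lab.internal,DNS:*.lab.internal"

# Any client that loads lab.crt as its trust anchor will accept it:
openssl verify -CAfile lab.crt lab.crt   # prints: lab.crt: OK
```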
Because the SSH trust model relies on verifying the fingerprint out of band, which isn't practical with a website; even if it were, non-technical users (and even most technical users) wouldn't do it. Certificate Transparency is a good mitigation for the risk of a CA being compromised.
TLS-SRP allows using an agreed-upon password alone or in conjunction with a certificate. GnuTLS, OpenSSL, curl, and a few other libraries contain implementations. It just hasn't found widespread use, and none of the major browsers support it.
TLS does have a mode that uses a pre-shared key, but in TLSv1.3 I believe it's only used for session resumption.
Edit: Also, I recently learned about DNS Certification Authority Authorization (CAA) records. You can specify which of the public certificate authorities a browser is allowed to respect, for your domain. I don't think it's verified by all browsers yet, but it's a step.
Other people have responded to your CAA mistake (CAA absolutely shouldn't be enforced by anybody except a CA; even researchers monitoring it for other sites is dubious, though as a research project it isn't inherently dangerous the way enforcement would be).
But let's talk about PSKs (pre-shared keys).
TLS 1.3 itself doesn't care why this PSK is used. It's true that today your Web Browser will only use it for resumption, because it offers a significant speed-up in some scenarios on second and subsequent visits.
But for IoT applications it is envisioned that a group of devices might share one or more PSKs out of band. Maybe they're a factory setup thing, maybe the devices are to be set up in close physical proximity using low bandwidth Bluetooth, then they'll build themselves a WiFi network when deployed using PSKs to secure TLS.
Browsers could do that, but all the vendors are clear there's no way they'd actually want to do that. What would the UX look like? "Please enter the hexadecimal PSK for the site in this box?". So today they only use PSKs for resumption.
The reason this one feature (PSKs) has two very different purposes is to narrow the security proof work. Mathematicians worked really hard to prove some important things about TLS 1.3 and the more extra different features it has, the less focus can be put on any particular feature.
Even as it is they missed the fact that PSKs are symmetric. If Alice and Bob have a single PSK to "authenticate" them and are both capable of serving, then Mallory can trick Alice into thinking she's talking to Bob when she's actually talking to herself. It's a small problem, but the proofs didn't cover it, and so it was not spelled out in the TLS 1.3 document that you should worry about this.
Pretty sure CAA is supposed to be enforced by CAs, not by browsers. So, for instance, Let's Encrypt should refuse to issue a cert for your domain if you have CAA set up for DigiCert.
CAA records (which specify which CAs are allowed to issue certificates for a given domain) are intended to be enforced by CAs, not by browsers. This prevents unintentional misissuance of a certificate, but not deliberate MITM attempts if the CA is actively involved.
Their counterpart for browsers are TLSA records, which associate specific keys or certificates with a domain name. This is the part that actually prevents MITM attacks on the client side (assuming the client's getting a complete and accurate DNS response, which is a whole other issue), since it'll cause a compliant client to reject any other keys or certificates. (No idea how widespread the implementation of this is on the client side, though.)
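For reference, a "TLSA 3 1 1" record pins the SHA-256 of the server's public key (the SubjectPublicKeyInfo). A sketch of generating the payload with openssl, using a throwaway self-signed cert as a stand-in (the hostname is made up):

```shell
# Stand-in cert for the mail server
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout mail.key -out mail.crt -subj "/CN=mail.example.test"

# TLSA 3 1 1 payload: SHA-256 over the DER-encoded SubjectPublicKeyInfo
HASH=$(openssl x509 -in mail.crt -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "_25._tcp.mail.example.test. IN TLSA 3 1 1 $HASH"
```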
I’ll also add that certificate transparency (CT) is another mechanism designed to mitigate malicious cert issuance by a CA. A CT log is a public, append-only data structure. It doesn’t actively prevent anything, but it does ensure that a malicious issuance is easily detectable. In practice it seems to be a pretty effective deterrent against nation-state attacks: they won’t go undetected for long.
In one sense this is actually what you want. The whole point of ACME is to facilitate proof of control. For the Web PKI this is mandatory, certificate automation without proof of control wouldn't really be deployable. In a local environment you likely don't need that, and may even not have a way to do it even if you wanted to. So ACME's secret sauce is irrelevant. Any of the older protocols that just do certificate issuance with no proof of control, like SCEP, would be fine.
But because ACME has been so successful (good!) lots of newer or updated software incorporates ACME support out of the box, and so it is attractive to be able to interface to that support.
We could have had a situation where most server software did SCEP and you needed an adaptor to reflect the SCEP requests into a system that would use ACME to get them a certificate if they face the public Internet. But that isn't what we got, so, fine, this works, I'll take it.
For anyone needing a refresher on PKI, CA, etc, the article references an earlier blog post that is very well-written and informative: https://smallstep.com/blog/everything-pki/
This is technically neat, but I really wish that browsers would just have a config option that would allow me to turn on trust on first use for certs. Also, all other software that uses TLS.
It works fine for mailservers. No one expects a mailserver cert to be from an authority; you can self-sign and it works fine.
I think TOFU would be really problematic for browsers & the Web PKI trust model. At this point Web PKI is dealing with attacks from nation-states and other advanced threats that end-users aren't really in a position to handle themselves.
Like, just last week the browsers had to remove a certificate authority from their root cert programs because Kazakhstan was issuing certificates to MiTM traffic. A TOFU model would make it a lot harder to detect and remediate this sort of attack and lots of other relevant attack vectors.
We'd also need to re-solve a bunch of adjacent problems like revocation, renewal/rotation, and transparency, which would probably mean re-introducing the sorts of centralized architectural components and processes that I'm assuming you're trying to eliminate with TOFU.
All those points can be summed up as: The point of web PKI is that the decision of who to trust and who not is not supposed to rest at the end user but at some central authority.
Then however, we get to the political question who exactly that central authority should be and why.
> Like, just last week the browsers had to remove a certificate authority from their root cert programs because Kazakhstan was issuing certificates to MiTM traffic.
I may have misunderstood the incident, but wasn't it such that the CA was not even one of the built-ins, but a "custom" root CA that all users were required to install on their systems? As such, the block was more equivalent to blocking a specific TOFU key.
Of course, blocking the MITM CA won't magically turn off the ISP's MITM proxy. It will simply make it so that Kazakh citizens can't access any web sites at all until the government hopefully caves and turns off the proxy.
Yea you’re correct. They were forcing people to add a CA. So this was not a great example.
I wouldn’t say the centralization of Web PKI is by design so much as it is (was?) by necessity. There’s a crypto conjecture called Zooko’s Triangle that says there are three desirable properties for a naming system: human-meaningful, secure, and decentralized. Zooko’s conjecture is that you can only have two. Web PKI picks secure & human-meaningful. Simple PKI (like TOFU) picks secure & decentralized (the names aren’t actually human-meaningful since you’re really trusting a public key which is a big random number, not a domain name). DNS picks human-meaningful and decentralized.
More recently, Aaron Swartz realized you can “square the triangle” using blockchain. So it appears to be technically possible to have all three now, but there are other hurdles. In any case, simple public keys aren’t a silver bullet, just a different set of compromises.
Counterpoint: this is also a big problem with SSH. When a host key changes with SSH you'll get a "Host Key Verification Failure" that basically says "yo, the key I expected for this host wasn't actually presented, you're probably being mitm'd". Then you have to go into your `~/.ssh/known_hosts` and delete a line. Then you get a new TOFU warning that you'll just type "yes" to and proceed.
So most SSH users (who, on average, are way more technically competent than browser users) will either 1) panic and think they're being hacked, or 2) blindly trust the new key when a key change occurs with SSH. Both of these responses are dangerous. The result is that, unless you have fancy stuff for managing known hosts for all of your users on all of your endpoints, you're probably just avoiding this scenario by not rotating host keys at all for SSH. Which is also problematic.
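If the rotation is known and legitimate, the cleanup at least doesn't require hand-editing the file (the hostname below is just an example):

```shell
# Drop the stale pinned key for one host (default file is ~/.ssh/known_hosts)
ssh-keygen -R myserver.lan

# Then either reconnect and accept the new key, or pre-seed it;
# note that ssh-keyscan is itself trust-on-first-use:
ssh-keyscan myserver.lan >> ~/.ssh/known_hosts
```

But that's per-user, per-endpoint, which is exactly the scaling problem a CA solves.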
By using a CA you're delegating the key binding to a trusted piece of infrastructure that can be locked down and monitored by experts. You can't do that easily with key-bindings written to files on a bunch of different endpoints. With a CA, end users shouldn't need to care about key changes. The fact that the CA can issue a new certificate for some entity is a benefit: it makes credential rotation easier. If you do want to know when credentials rotate there are ways to monitor that yourself (and Web PKI has ways like key pinning and cert transparency).
"...that can be locked down and monitored by experts."
In my homelab? I guess I should also kerberize my NFS mounts, after I solve the SPOF problem for Kerberos's dependent pieces. I might have a little time left for working on my homelab projects after doing all this. How much time could this configuration, maintenance, and monitoring possibly take? (Just between you and me, I resolve that on January 1st I will again start reading all daily/weekly/monthly logs, and this year I WILL NOT FAIL. It's only a dozen boxen, give or take a few.)
If the security infrastructure needs to be designed, configured, monitored, and maintained by "experts" in an unfunded environment, the security infrastructure is doomed to fail. IOW, it's security theatre.
> No one expects a mailserver cert to be from an authority; you can self-sign and it works fine.
Yes, but that's A Bad Thing(TM) -- it means that MITM'ing a mail server also "works fine"!
Now, personally, I'm not a big fan of DNSSEC (especially considering how widespread 512- and 1024-bit keys were!) but I was hopeful that "DNS-based Authentication of Named Entities" (DANE) [0] would become widely supported once it was fully standardized. Unfortunately, that didn't happen -- especially in the browsers!
After the explosion in growth of "HTTPS everywhere", transport security for e-mail was the next big (unencrypted) problem that needed to be addressed in my opinion. Luckily, both Exim and Postfix (my MTA of choice) gained support for DANE so while progress was technically made, widespread adoption never really happened there either (mostly due to the dependency on DNSSEC, AFAICT).
In the meantime, though, "SMTP MTA Strict Transport Security (MTA-STS)" appeared on the scene and has since been formalized as RFC8461 [1]. In many ways, it's technically superior to DANE and, importantly, does not rely on DNSSEC. It does have the same reliance on the "WebPKI" as the browsers so it's certainly not perfect; it's still a huge improvement, though, and it's certainly better than opportunistic encryption which is trivial to MITM.
Anyways, as MTA-STS was designed by folks at Google, Microsoft, Comcast, and Oath, I'm hopeful, once again, that we'll eventually get to the point where (at least) the overwhelming majority of e-mail is encrypted in transit as it passes from MTA to MTA on its way to the recipient's mailbox.
One of the few good things about so much of the e-mail nowadays being handled by Google and Microsoft is the huge volume of mail that will suddenly just start being encrypted in transit, overnight, when just those two enable MTA-STS in their mail infrastructure.
I haven't kept up with progress on MTA-STS since leaving my previous job (which included responsibility for the mail infrastructure) about two years ago -- shortly after it became a standard -- but I'd love to find out that it's already been widely rolled out by the big players in the meantime! I suppose it's about time to catch up on the last two years' worth of messages sitting unread in the "mailops" folder in my mailbox!
DNSSEC doesn't provide anything valuable in terms of privacy. All it does is make it so central governments can more easily censor the TLDs they control. If DNSSEC had been rolled out in Libya back in 2010, I doubt the Libyan Revolution would've happened.
TLA cert authorities are bad enough. MTA-STS would make running your own mailserver harder and leash you to one of a dozen cert authorities. If you don't think that's a problem, then remember what happened to dot-org, and think about what will happen to these cert authorities given enough money and time.
I've run my own mailserver for a decade now and I am strongly against everything you suggest. But it doesn't matter what I care about. If everyone uses only one of a handful of email providers then they'll just slowly close off their walled gardens anyway. It's already happening no matter how many superfluous cargo-cult standards independent mailserver operators implement.
Nice! In the past, I've tried to use Let's Encrypt with ESXi, but didn't see a clear and simple way with Let's Encrypt (and that itself might be overkill)... as I just needed it to admin my own servers, I decided to use https://github.com/FiloSottile/mkcert for that. I posted what I did in https://henvic.dev/posts/homelab/
They won't validate my internal domains (obviously). I have all my infra on .lan and using this they all get ACME certs and I never have to see another "insecure connection" page.
Also had my old workplace on .dev until those bastards at Google stole it and added the entire tld to the hsts preload list!!
> Also had my old workplace on .dev until those bastards at Google stole it and added the entire tld to the hsts preload list!!
They didn't steal it. You'd hijacked it, and your hijacking failed. Go big or go home. The IETF hijacked the OID arc 1.3.6.1 and they succeeded because everybody accepted their control of that arc and it's now used everywhere, but if you hijack some namespace and then only use it on a few dozen machines nobody has heard of, that's not going to stick.
More seriously, what you've done is probably a bad idea. https://myprinter.lan/ seems unique to you, and then your new partner moves in, why doesn't the printer work? Oh right, his printer is also named myprinter.lan because you don't have globally unique namespaces.
This happens on a bigger scale at a business or other organisations of course, but it's annoying even in one household. Here's a metaphorical nickel kid, get yourself a domain in the public DNS hierarchy.
It's completely acceptable to use .local. in such a manner, however.
The "conflict resolution" process is outlined in the RFC [0] and is, well, pretty simple:
> ... the computer (or its human user) MUST cease using the name, and SHOULD attempt to allocate a new unique name for use on that link.
You can even set up your own DNS servers to be authoritative for the ".local." domain (zone), if you really want to.
RFC6762 states that "any DNS query for a name ending with '.local.' MUST be sent to" 224.0.0.251 (or ff02::fb) -- but it also explicitly allows sending them to your regular ol' (unicast) DNS servers, too. It's up to you to figure out how to manage that, of course.
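As a sketch of that unicast setup, assuming dnsmasq is the lab's resolver (the host name and address are made up):

```
# Answer .local queries locally instead of forwarding them upstream
local=/local/
# Static answer for one lab host
address=/myprinter.local/192.168.1.50
```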
Now, that said... to avoid any potential issues, I'd only ever use .local for its intended purpose. There's just too much potential for "weirdness" to occur. Personally, however, I completely avoid any use of either (.local and Multicast DNS) regardless.
--
On a side note, ".localdomain" mentioned in the grandparent comment should actually be "localhost."
Being the admin of my network, I control these things. I don't have a partner adding random devices without oversight.
I have plenty of public domains. .lan is short and easy, hence my preference for it.
Ideally there would be one or two private TLDs codified, just as there are private IP ranges (my hypothetical partner could also add random devices with conflicting IPs; businesses often have problems with conflicting IPs/subnets; these are just problems that need to be solved through proper organisation, so I fail to see why DNS is somehow different).
> Ideally there would be one or two private TLDs codified, just as there are private IP ranges ...
There are several, in fact.
RFC8375 [0] states:
> This document registers the domain 'home.arpa.' as a special-use domain name [RFC6761] [1] ... 'home.arpa.' is intended to be the correct domain for uses like the one described for '.home' in [RFC7788] [2]: local name service in residential homenets.
In addition to "home.arpa.", there are several other domain names listed in IANA's "Special-Use Domain Names" registry [3] that "users are free to use ... as they would any other domain names" -- even if they are technically intended/reserved for other uses.
For as long as I can remember, I've used a subdomain of one of my registered domain names for everything in my home network. That has the advantage of, if and/or when desired, allowing me to do some "fancy tricks" (involving some combination of DNS, VPN, and/or reverse proxying) to make specific internal/private resources available from the Internet.
I use int.company.com for my internal domains. company.com is a real domain that I registered. If you did similar, as opposed to making up your own domain, you wouldn't have a problem.
You still need a public-facing domain to do this, though. You can't use Let's Encrypt on a my.lan domain name, because there's no way to create the public records required to validate it.
My go-to way is having a public-facing domain with Let's Encrypt certs, and the public-facing domain just CNAMEs to my internal domains. Public-facing domains are luckily not that expensive, and I didn't even go for the cheapest option (mine's about 10€/year).
I was looking into something like this for my homelab, but as a cert noob I got lost somewhere between trying to use intra.mydomain.com and not screwing up my public address.
Can you recommend a good book or blog series that covers this topic in depth?
Those are simply the rules. You can do ACME with an HTTP challenge or a DNS challenge. The HTTP challenge is adequate for proving that you control x.example.com, but serving a website on x.example.com doesn't prove that you own y.example.com. But, being able to create example.com DNS records does, so that is what's required to get a wildcard certificate.
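Concretely, the DNS challenge has you publish a token digest in a TXT record at a fixed name under the domain; being able to create that record is the proof of control (the value below is a placeholder, not a real token):

```
_acme-challenge.example.com. 300 IN TXT "<base64url token digest placeholder>"
```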
I imagine you are confused because the proposal above sounds like "just get *.example.com, then copy that cert to everything that will ever serve traffic for example.com", which doesn't sound like a great idea to me.
I've been trying to figure out how to set up a local CA on my (linux) dev machine for a while now. Every time I get started I seem to run into problems. Does anyone have a simple guide for people who don't plan on investing in a raspberry pi?
A year ago I would have found this appealing in a #homelab context, but now I WireGuard/Tailscale each host and bind LAN-only services to tailscale0 interface.
It's theoretically the same idea at the node level instead of the application level, except that the WireGuard curve25519 keys now cannot be verified, since they are published by a 3rd party that you have zero control over. This 3rd party can simply connect to your machines anytime by injecting its public keys into your nodes and have complete access to your private network. That's the power of owning your own CA, as opposed to letting others inject peer public keys as if there were nothing to verify.
I feel woefully out of step here. If you can create a CA that browsers respect, have you really improved on cleartext transmission? And isn't the CA now a new attack vector?
You're definitely shifting your attack surface towards the CA, but in doing so you've (probably) given yourself a lot of additional control over the weakest link in your system. With cleartext, vulnerabilities tend to range from simple things like "find the switch, plug in, sniff traffic over the wire" to slightly more complex, but still viable things like "sniff wifi traffic now, worry about cracking WPA-2 later".
Your typical TLS certificate is going to be more robust than your typical consumer / prosumer LAN security, and you typically have better tools (SSH certificates, 2FA, encryption at rest, etc, etc) for locking down a machine running a CA than network gear you either don't control or bought from a consumer-focused vendor.
It's a typical example of defense in depth. Hopefully your network is trustworthy, but if for whatever reason it becomes insecure/vulnerable, you now have genuine encryption between client and server at the application layer.
I may be wrong, but I think you still have to manually install your CA cert as a custom root CA on any device that you want to use inside your network. You can't use this tutorial to make a new CA that is automatically trusted.