
Yep. I once inherited a system where the previous team had used GCSFuse to back the `/etc/letsencrypt` directory on a cluster of nginx webservers. It "worked", and it may have been a reasonable approach at the time they built it: it avoided setting up a single "master" node to handle HTTP-01 challenges (and it predated GCP's HTTPS LB being able to handle more than a handful of domains/certificates). The problem was that as the number of domains/certificates grew, nginx startup and config reloads got slower and slower, because nginx insists on stat-ing and reading every single certificate file in that directory in the process. Eventually the request volume got high enough that reloads started hitting request throttling on the storage bucket. It's no fun when `nginx -s reload` takes two minutes and sometimes fails completely.
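A back-of-envelope sketch of why reload time grows linearly with certificate count once every stat/read becomes a network round trip through gcsfuse. All the constants here are illustrative assumptions, not measurements from that system:

```python
# Toy model: nginx reload time when each certificate file access is a
# GCS round trip via gcsfuse. Numbers are assumptions, not measurements.

ROUND_TRIP_S = 0.05      # assumed latency per GCS object operation (50 ms)
FILES_PER_CERT = 3       # e.g. fullchain.pem, privkey.pem, chain.pem
OPS_PER_FILE = 2         # roughly one stat + one read per file on config load

def reload_seconds(num_domains: int) -> float:
    """Estimated time `nginx -s reload` spends on certificate I/O alone."""
    return num_domains * FILES_PER_CERT * OPS_PER_FILE * ROUND_TRIP_S

for n in (10, 100, 500):
    print(f"{n:>4} domains -> ~{reload_seconds(n):.0f}s of certificate I/O")
```

At 500 domains this toy model already predicts ~150 seconds of certificate I/O, i.e. the "two minutes" range; the same operations on a local disk would take microseconds, which is why nobody notices this pattern until the filesystem is remote.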



The most broken part of what that previous team did wasn't the performance; it was storing private keys unencrypted in the cloud.


I mean... literally every VM running nginx or apache that I've ever seen has had the SSL certs just sitting on the filesystem in /etc/ssl or /etc/letsencrypt or similar. All of Let's Encrypt's documentation points people in that direction.


My understanding is that everything is encrypted at rest by default in GCP, though you need to manually configure encryption keys if you want to prevent Google from ever having access to your data.


This I don't understand. Even if you configure KMS, those are still keys stored on Google infra.


You can use your own KMS outside the Google infrastructure. https://cloud.google.com/storage/docs/encryption/customer-su...
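With customer-supplied encryption keys (CSEK) specifically, the key is generated and held by you and passed per request; Google stores only a hash of it. A minimal sketch of generating such a key locally (the `.boto` fragment in the comments is illustrative, and the bucket path is a placeholder):

```python
# Sketch: generating a customer-supplied encryption key (CSEK) locally.
# You hold the AES-256 key; Google receives it only per request and
# stores a hash, not the key itself.
import base64
import os

# 256-bit random key, base64-encoded as GCS expects for CSEK
csek = base64.b64encode(os.urandom(32)).decode()
print(csek)

# gsutil can then pick it up from ~/.boto (illustrative fragment, not run here):
#   [GSUtil]
#   encryption_key = <the base64 value printed above>
#
# after which e.g. `gsutil cp privkey.pem gs://example-bucket/...` uses it.
```

Of course, now *you* are responsible for never losing that key: if it's gone, so is the data.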



